I wrote both the WebDAV client (backend) for rclone and the WebDAV server. This means you can sync to and from WebDAV servers or mount them just fine. You can also expose your filesystem as a WebDAV server (or your S3 bucket or Google Drive etc).
The RFCs for WebDAV are better than those for FTP, but there is still an awful lot of underspecified behaviour that servers and clients handle differently, which leads to lots of workarounds.
The protocol doesn't let you set modification times by default, which matters for a sync tool, but popular implementations like ownCloud and Nextcloud do. Likewise with hashes.
However the protocol is very fast, much faster than SFTP with its homebrew packetisation, as it's based on well-optimised web tech: HTTP, TLS, etc.
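As a concrete illustration of the modification-time extension mentioned above: ownCloud/Nextcloud accept an `X-OC-Mtime` header on PUT so a sync client can preserve the source file's timestamp. A minimal sketch in Python; the header name comes from the ownCloud server extension, and nothing is actually sent over the network here:

```python
# Sketch: ownCloud/Nextcloud servers honour an X-OC-Mtime header on PUT,
# letting a sync client preserve the source file's modification time.
# (A server extension, not core WebDAV.)

def upload_headers(mtime_epoch: float) -> dict:
    """Headers for a PUT that asks the server to keep this mtime."""
    return {
        "Content-Type": "application/octet-stream",
        "X-OC-Mtime": str(int(mtime_epoch)),  # Unix seconds, as a string
    }

print(upload_headers(1700000000.7)["X-OC-Mtime"])  # 1700000000
```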
I wonder how you would compare it to NFS (which I believe can be TCP based, and probably encrypted).
Not that it is a good comparison. NFS isn't super popular: macOS can do it, and I don't think Windows can. But both Windows and macOS can do WebDAV.
NFS is much slower, unless perhaps you deploy it with RDMA. I believe even 4.2 doesn't really support asynchronous calls, or has some significant limitations around them - I've commonly seen a single large write of a few gigs starve all other operations, including lstat, for minutes.
Also it's borderline impossible to tune NFS to go above 30 Gbps or so consistently; with WebDAV it's a matter of adding a bunch more streams, and you're past 200 Gbps pretty easily.
> In fact, you're already using WebDAV and you just don't realize it.
Tailscale's drive share feature is implemented as a WebDAV share (connect to http://100.100.100.100:8080). You can also connect to Fastmail's file storage over WebDAV.
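Under the hood, browsing a share like that is just a PROPFIND request returning a 207 Multi-Status XML body. A minimal parsing sketch using only the Python stdlib; the sample response is hand-written for illustration, not captured from a real Tailscale share:

```python
# Parse a WebDAV PROPFIND "207 Multi-Status" response body.
import xml.etree.ElementTree as ET

DAV = "{DAV:}"  # WebDAV properties live in the "DAV:" XML namespace

sample = """<D:multistatus xmlns:D="DAV:">
  <D:response>
    <D:href>/share/notes.txt</D:href>
    <D:propstat>
      <D:prop>
        <D:getcontentlength>1234</D:getcontentlength>
        <D:getlastmodified>Sat, 01 Jan 2022 00:00:00 GMT</D:getlastmodified>
      </D:prop>
      <D:status>HTTP/1.1 200 OK</D:status>
    </D:propstat>
  </D:response>
</D:multistatus>"""

def list_entries(body: str):
    """Yield (href, size) pairs from a multistatus document."""
    root = ET.fromstring(body)
    for resp in root.findall(f"{DAV}response"):
        href = resp.findtext(f"{DAV}href")
        size = resp.findtext(f".//{DAV}getcontentlength")
        yield href, int(size) if size else None

print(list(list_entries(sample)))  # [('/share/notes.txt', 1234)]
```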
WebDAV is neat.
I use it all the time to mount my CopyParty instance. Works great!
On the same topic, and because I too believe that WebDAV is not dead, far from it: I recently published a work in progress, part of a broader project. It is an nginx module that serves files over WebDAV and is compatible with the Nextcloud sync clients, desktop & Android. It can be used with GNOME Online Accounts too, as well as with Nautilus (and probably others), as a WebDAV server.
Have a look there: https://codeberg.org/lunae/dav-next
/!\ it's a WIP, thus not packaged anywhere yet, no binary release, etc… but all feedback welcome
If operating systems had just put a bit more time into the clients and not stopped all work around 2010, WebDAV could have been much more, covering many use cases of FUSE. Unfortunately, the macOS WebDAV client and Finder's outdated architecture in particular make this just too painful.
I built a simple WebDAV server with Sabre to sync Devonthink databases. WebDAV was the only option that synced between users of multiple iCloud accounts, worked anywhere in the world and didn’t require a Dropbox subscription. It’s a faster sync than CloudKit. I don’t have other WebDAV use cases but I expect this one to run without much maintenance or cost for years. Useful protocol.
DevonThink's WebDAV sync on iOS has been reliable, fast, maintained, and non-subscription, and the app includes a web scraper. Good for saving LLM chatbot markdown.
"FTP is dead" - shared web hosting would like a word. Quite a few web hosts still talk about using FTP to upload websites to the hosting server. Yes, these days you can upload SSH keys and possibly use SFTP, but the docs still talk about tools like FileZilla and basic FTP.
Exhibit A: https://help.ovhcloud.com/csm/en-ie-web-hosting-ftp-storage-...
I haven't used old school FTP in probably 15 years. Surely we're not talking about using that unencrypted protocol in 2025?
From that link:
Well, maybe we are. I'd cross that provider off my list right there.
They mention that the "FTP" service includes SFTP, which is file transfer over SSH (not actually related to classic FTP), which is perfectly secure and supported by most FTP clients, like FileZilla.
The premium "SSH connection" you mentioned seems to refer to shell access via SSH, which is a separate thing.
They also support FTP without the SSH transport, and it's not FTPS either. Various IP cameras still support FTP as a way to write files out periodically; I use this to provide a "stream" from a camera (8 seconds per frame because reasons) to the world. Actual streaming via RTSP is also available, but I could never get a stable stream to a video host (like YT or Twitch) from the camera (partially because of a poor quality network connection that can't be upgraded easily). So, FTP + credentials -> walled off directory that's not under the web root -> PHP script in web root -> web browser.
FTP still works great and encryption is a non-priority for 100% of users.
It should be a priority for hosting companies, though, since leaked credentials and websites hosting malware are a problem.
Transport encryption should be a huge priority for everyone. It's completely unacceptable to continue using unencrypted protocols over the public internet.
Especially for the use case of transferring files to and from the backend of a web host. Not encrypting in that scenario freely hands control of your backend to everything between you and the host, putting everyone at risk in the process.
Not true. Your hosting provider already has physical access to the computer you're connecting to.
Whether or not the connection you're using is encrypted doesn't really matter because the ISP and hosting provider are legally obligated to prevent unauthorized access.
(It's different if you're the NSA or some other state-level actor, but you're not.)
> It's completely unacceptable to continue using unencrypted protocols over the public internet.
That is nonsense. The reality is that most data simply is not sensitive, and there is no valid reason to encrypt it. I wouldn't use insecure FTP because credentials, but there's no good reason to encrypt your blog or something.
I'd argue that most people like knowing that what they receive is what the original server sent (and vice versa), but maybe you enjoy ads enough to prefer having your ISP put more of them on the websites you use?
Jokes aside, HTTPS is as much about privacy as it is about reducing the chance you receive data that has been tampered with. You shouldn't avoid FTP only because of credentials, but also because of embedded malware you didn't put there yourself.
Agree but also wonder if ISPs bother with this anymore, now that almost all websites are https.
I, for one, would like to see an ISP dedicated enough and technically able to inject ads in my FTP stream. :)
Didn't we already go through this 10 years ago and then Firesheep got created and thoroughly debunked it?
Shared hosting is dying, but not yet dead; FTP is dying with it - it's really the last big use case for FTP now that software distribution and academia have moved away from FTP. As shared hosting continues to decline in popularity, FTP is going along with it.
Like you, I will miss the glory days of FTP :'(
I think the true death of FTP was Amazon S3 deciding to use its own protocol instead of FTP, as S3 is basically the same niche.
FTP does not even come close to supporting the use cases of S3, especially now.
Shared hosting is in decline in much the same way as it was in 2015. Aka everyone involved is still making money hand over fist despite continued reports of its death right around the corner.
The number of shared hosting providers has drastically declined since the 2000s. I would posit that things like Squarespace/hosted WordPress took the lion's share, with the advent of $5-10 VPSes filling the remaining niches.
The remaining hosting companies certainly still make a lot of money, a shared hosting business is basically on autopilot once set up (I used to own one, hence why I still track the market) and they can be overcommitted like crazy.
Source on the number of providers declining?
No, not at all the case. There has been continued consolidation of the shared hosting space, plus consumer interest in "a website" has declined sharply now that small businesses just feel that they need an instagram to get started. Combine that with site builders eating at shared hosting's market share, and it's not looking good for the future of the "old school" shared hosting industry that you are thinking of.
Seems short sighted, a lot of older people and privacy conscious people of all ages avoid social media. But I guess if they are sustaining a business on only Instagram, good for them.
I use WebDAV for serving media over Tailscale to Infuse when I'm on the move. SMB did not play nicely at all, and NFS is not supported.
Go has quite a good one in golang.org/x/net/webdav that just works with only a small bit of wrapping in a main() etc.
Although I've since written one in Elixir that seems to handle my traffic better.
(You can also mount them on macOS and browse with Finder / shell etc, which is pretty nice.)
Recently set up WebDAV for my Paperless-NGX instance so my scanner can directly upload scans to Paperless. I wish Caddy would support WebDAV out of the box, had to use this extension: https://github.com/mholt/caddy-webdav
Which scanner, if you don’t mind me asking? I’ve got a decade+ old ix500 that had cloud support but not local SMB.
Author seems to conflate S3 API with S3 itself. Most vendors are now including S3 API compatibility into their product because people are so used to using that as a model
They do mention S3-compatible servers later in the post. It really seems to be about the protocol itself.
More like an attempt at S3 API compatibility...
I was about to make a very similar comment.
There really is nothing wrong with the S3 API, and the complaints about MinIO and S3 are basically irrelevant. It's an API that dozens of solutions implement.
One interesting use of WebDAV is SysInternals (which is a collection of tools for Windows), it's accessible from Windows Explorer via WebDAV by going to \\live.sysinternals.com\Tools
Isn't that SMB, not webdav?
"\\server\share" is called a UNC path, which can be served by SMB, WebDAV or another type of server.
Ref: https://learn.microsoft.com/en-us/previous-versions/windows/...
(old ref, but the architecture hasn't changed AFAIK)
I guess the "\\$HOSTNAME\$DIR" URL syntax in Windows Explorer also works for WebDAV. Is it safe to have SMB over WAN?
I just tried https://live.sysinternals.com/Tools in Windows Explorer, and it also lists the files, identical to how it would show the contents of any directory.
Even running "dir \\live.sysinternals.com\Tools", or starting a program from the command prompt like "\\live.sysinternals.com\Tools\tcpview64" works.
IIRC, Windows for a while had native WebDAV support in Explorer, but setting it up was very non-obvious. Not sure if it still does, since I've moved fully to Linux.
If you need SFTP independent of Unix auth, there is SFTPGo.
SFTPGo also supports WebDAV, but for the use cases in the article SFTP is just better.
I was surprised, then not really surprised, when I found out this week that Tailscale's native file sharing feature, Taildrive, is implemented as a WebDAV server in the network.
https://tailscale.com/kb/1369/taildrive
What else would you expect, just out of curiosity? SMB? NFS? SSHFS?
A proprietary binary patented protocol...
And do what, implement a virtual filesystem driver for every OS?
Only if adding that complexity locks in more subscribers for premium features and support.
> Lots of tools support it: [...] Windows Explorer (Map Network Drive, Connect to a Web site...)
Not sure he ever tried supporting that. We once did and it was a nightmare. People couldn't handle it at all even with screenshotted manuals.
My personal experience says that even the dumbest user is able to use FileZilla successfully, and therefore SFTP, while people just don't get the built-in WebDAV support of the OSes.
I also vaguely recall that WebDAV in Windows had quite a bit of randomly appearing problems and performance issues. But this was all a while ago, might have improved since then.
I feel the pain when you refer to MinIO. I ended up using a pre-15 version in order to keep all the previous features, but that sucks. I will try this.
> While writing this article I came across an interesting project under development, Altmount. This would allow you to "mount" published content on Usenet and access it directly without downloading it... super interesting considering I can get multi-gigabit access to Usenet pretty easily.
There is also NzbDav for this: https://github.com/nzbdav-dev/nzbdav
Copyparty has webdav and smb support (among others), which makes it a good candidate to combine with a Kodi client perhaps?
OmniFocus also supports WebDAV for folks that prefer to self-host - https://support.omnigroup.com/documentation/omnifocus/univer...
Kudos to Omni Group for supporting open-standard on-prem sync.
FTP is not dead. A huge percentage of wind turbines use FTP for data transfer.
Just like the author, I use WebDAV for Joplin, also Zotero. Just love them so much.
We need to keep using open protocols such as WebDAV instead of depending on proprietary APIs like the S3 API.
The Windows built-in WebDAV client in Explorer is embarrassingly slow. Pretty much unusable for anything serious.
For sure. I tried to set up a collaboration environment for a customer years ago using WebDAV over SSL in lieu of Dropbox. Everything worked great (authenticating to Active Directory, NTFS ACLs, IP address restrictions in IIS policy where necessary, auditing access in the Windows security log and IIS logs, no client to install), but the Windows client experience was hideously slow. People hated it for that, and it got no traction.
OTOH gio-based WebDAV access built into Nautilus and Thunar is something I use daily, and it works quite fine, for a FUSE-based filesystem.
Unlike NFS or SMB, WebDAV mounts do not get stuck for a minute when the connection becomes unstable.
Relatedly, is there a good way to expose a directory of files via the S3 API? I could only find alpha-quality things like rclone serve s3, and things like Garage, which have their own on-disk format rather than regular files.
consider versitygw or s3proxy
A lot of apps support WebDAV. It seems to be better supported than SFTP?
You can run a WebDAV server using Caddy easily.
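To be precise about the Caddy option: out of the box it needs the mholt/caddy-webdav plugin (a custom build, e.g. via xcaddy). A rough Caddyfile sketch; the hostname and path are placeholders, and the directive's options may vary by plugin version:

```caddyfile
{
	# webdav is a non-standard directive, so it needs an explicit order
	order webdav before file_server
}

dav.example.com {
	webdav {
		root /srv/dav
	}
}
```

You would normally also put authentication in front of it, since this exposes read-write access to the directory.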
I wonder how much better WebDAV has gotten with newer versions of the HTTP stack. I only used it briefly in HTTP mode, but found the clients to all be rather slow, barely using tricks like pipelining to make requests go a little faster.
It's a shame the protocol never found much use in commercial services. There would be little need for clients running in compatibility layers, like you see with the Google Drive and OneDrive tools on Linux. Frankly, except for the lack of standardised random writes, the protocol is still one of the better solutions in this space.
I have no idea how S3 managed to win as the "standard" API for so many file storage solutions. WebDAV has always been right there.
It has been 16 years since I started this webdav client for Java:
https://github.com/lookfirst/sardine
Still going.
Sardine is great. I recently used it to automate some backups from a webdav share. No complaints whatsoever :-)
JMAP will eventually replace WebDAV.
No random writes is the nail in the coffin for me
It's HTTP, of course there's an extension for that?
Sabre-DAV's implementation seems to be relatively mature. It's supported in webdavfs, for example. Here are some example headers one might attach to a PATCH request:
https://sabre.io/dav/http-patch/ https://github.com/miquels/webdavfs
Another example is this expired draft. I don't love it, but it uses PATCH+Content-Range. There are some other neat ideas in here, and it shows the versatility & open possibility (even if I don't love re-using this header this way). https://www.ietf.org/archive/id/draft-wright-http-patch-byte...
Apache has had PUT with Content-Range: https://github.com/miquels/webdav-handler-rs/blob/master/doc...
Great write-up on the rclone forum about trying to support partial updates: https://forum.rclone.org/t/support-putstream-for-webdav-serv...
It would be great to see a proper extension formalized here! But there are options.
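For concreteness, the SabreDAV flavour uses PATCH with a vendor content type and an `X-Update-Range` header. A hedged Python sketch that only builds the request pieces; header names are taken from the sabre.io partial-update docs, and nothing is sent here:

```python
# Sketch of a SabreDAV-style partial update: PATCH a byte range into an
# existing resource rather than re-uploading the whole file.

def build_patch(offset: int, data: bytes) -> dict:
    """Return the headers for a SabreDAV partial-update PATCH request."""
    end = offset + len(data) - 1  # byte ranges are inclusive
    return {
        "Content-Type": "application/x-sabredav-partialupdate",
        "X-Update-Range": f"bytes={offset}-{end}",
        "Content-Length": str(len(data)),
    }

print(build_patch(100, b"hello")["X-Update-Range"])  # bytes=100-104
```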
> FTP is dead (yay),
Hahahaha, haha, ha, no. And it's probably (still) more used than WebDAV.
pls send help
Yeah, that must have been wishful thinking.
FTP is such a clunky protocol, it is peculiar it has had such staying power.
I'm using WebDAV to sync files from my phone to my NAS. There weren't any good alternatives, really. SMB is a non-starter on the public Internet (SMB-over-QUIC might change that eventually), SFTP is even crustier, rsync requires SSH to work.
What else?
Syncthing is pretty nice for that sort of thing.
Syncthing is great but it does file sync, not file sharing, so not ideal when you say want to share a big media library with your laptop but not necessarily load everything on it
That moves the goalpost. The user I was replying to wanted sync and didn't seem to be using other functionality like that.
I have just tried to run their unofficial apps, but I couldn't make them work.
This blog post didn't convince me. I must assume the default for most web devs in 2025 is hosting on a Linux VM and/or mounting the static files into a Docker container. SFTP is already there and Apache is too.
The last time I had to deal with WebDAV was for a crusty old CMS nobody liked, many years ago. The support on dev machines running Windows and Mac was a bit sketchy and would randomly skip files during bulk uploads. Linux support was a little better with davfs2, but then VSCode would sometimes refuse to recognize the mount without restarting.
None of that workflow made sense. It was hard to know what version of a file was uploaded and doing any manual file management just seemed silly. The project later moved to GitLab. A CI job now simply SFTPs files upon merge into the main branch. This is a much more familiar workflow to most web devs today and there's no weird jank.
> It's broadly available as you can see
And yet, I can never seem to find a decent java lib for webdav/caldav/carddav. Every time I look for one, I end up wanting to write my own instead. Then it just seems like the juice isn't worth the squeeze.