
Posted by candiddevmike 1 day ago

Many hells of WebDAV (candid.dev)
158 points | 83 comments
eddieroger 1 day ago|
I've been playing with a toy app that dabbles in the Cal/CardDAV space, and it blows my mind that, for all the power latest-generation languages have, the thing I keep coming back to is the PHP-based Sabre/DAV. That's not to say PHP isn't modern now; it's more a reflection of my surprise that there doesn't appear to be any other library out there that does as good, or nearly as good, a job at DAV as that one, and that one is pretty darn old.

On a different point, I don't think the author's point about having to "also" inspect the headers is a fair critique of DAV - HTTP headers indicate one portion of the request/response, and the body a different one. I wish it were simpler, but I think it's an acceptable round-peg-in-a-round-hole use of the tools.

candiddevmike 1 day ago||
Author here. I'd be more inclined to agree about the headers if they were consistent. For instance, why are only Allow and DAV part of the headers (with all of their bizarre options) and not things like supported-report-set or privileges? It would be better to have all of this in the body somehow, especially Depth.
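To make the split concrete, here's a hedged sketch (helper name invented, not from any real client) of a minimal PROPFIND request: the scope travels in the Depth header while the requested properties travel in the XML body.

```python
import xml.etree.ElementTree as ET

DAV_NS = "DAV:"

def build_propfind(props):
    """Return (headers, body) for a PROPFIND requesting the given DAV: properties."""
    root = ET.Element(f"{{{DAV_NS}}}propfind")
    prop = ET.SubElement(root, f"{{{DAV_NS}}}prop")
    for name in props:
        ET.SubElement(prop, f"{{{DAV_NS}}}{name}")
    headers = {
        "Depth": "1",                      # scope lives in a header...
        "Content-Type": "application/xml",
    }
    body = ET.tostring(root, encoding="unicode")  # ...properties in the body
    return headers, body

headers, body = build_propfind(["resourcetype", "getcontentlength"])
```

Two places to look at for one logical request, which is the inconsistency being complained about.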
inferiorhuman 1 day ago||
I wrote a standalone CardDAV server ages ago and the biggest frustration for me was just how buggy the clients were. At some point I stopped self-hosting and moved on.
112233 1 day ago||
Mounting WebDAV -- if you are in a situation where you have to do it (e.g. own^W^W^Wnextcloud) -- is such an adventure. Everything - mac, win, linux - supports WebDAV. You mount and it works! Then you notice HOW it works: files are downloaded in full before a program can access them, some operations are super slow, some fail or time out, plaintext credentials end up in mysterious places...

I heard DeltaV is very advanced, and Subversion supported it. I'm afraid to ask.

hurflmurfl 1 day ago||
I'm using the Nextcloud app on my Android, and for my Linux systems I mount WebDAV using rclone, with VFS cache mode set to FULL. This way I can:

1. Have the file structure etc. synced locally without downloading the files.

2. Have it fetch files automatically when I try to read them. It also supports range requests, so if I want to play a video, it sort of streams it - no need to wait for a download.

3. Keep recently accessed files cached for a while, so even if I'm offline, I can still access the cached version without having to verify that it's the latest. If I'm online, then it will verify whether it's the latest version.

Overall, this has worked great for me, but it did take me a while before I set it up correctly. Now I have a cache of files I use, and the rest of the stuff that I just keep there for backup or hogging purposes doesn't take disk space and stays in the cloud until I sync it.

sureglymop 22 hours ago||
Since you are mounting and not syncing the files, what happens when you edit a file offline? And what if the file is also edited on another offline device?
hurflmurfl 12 hours ago||
Fair question. Conflicts happen, which I'm fine with.

Realistically speaking, most files I have in my cloud are read-only. The most common file that I read-write on multiple devices is my keepass file, which supports conflict resolution (by merging changes) in clients.

That also used to happen when I tried editing some markdown notes using Obsidian on PC, and then a text editor (or maybe Obsidian again?) on Android, but I eventually sort of gave up on that use case. Editing my notes from my phone is sort of inconvenient anyway, so I mostly just create new short notes that I can later edit into some larger note, but I honestly can't remember the last time this happened.

But yes, if not careful, you could run into your laptop overwriting the file when it comes online. In my case, it doesn't really happen, and when it does, Nextcloud will have the "overwritten version" saved, so I can always check what was overwritten and manually merge the changes.

P.S. If anyone wants to set this up, here's my nixos config for the service, feel free to comment on it:

  # don't forget to run `rclone config` beforehand
  # to create the "nextcloud:" remote
  # some day I may do this declaratively, but not today
  systemd.services.rclone-nextcloud-mount = {
    # Ensure the service starts after the network is up
    wantedBy = [ "multi-user.target" ];
    after = [ "network-online.target" ];
    requires = [ "network-online.target" ];

    # Service configuration
    serviceConfig = let
      ncDir = "/home/username/nextcloud";
      mountOptions = "--vfs-cache-mode full --dir-cache-time 1w --vfs-cache-max-age 1w";
    in {
      Type = "simple";
    ExecStartPre = "/run/current-system/sw/bin/mkdir -p ${ncDir}"; # Create the folder if it doesn't exist
      ExecStart = "${pkgs.rclone}/bin/rclone mount ${mountOptions} nextcloud: ${ncDir}"; # Mounts
      ExecStop = "/run/current-system/sw/bin/fusermount -u ${ncDir}"; # Dismounts
      Restart = "on-failure";
      RestartSec = "10s";
      User = "username";
      Group = "users";
      Environment = [ "PATH=/run/wrappers/bin/:$PATH" ];
    };
  };
Fnoord 23 hours ago|||
> own^W^W^Wnextcloud

own^H^H^Hnextcloud

or

own^Wnextcloud

You might wanna look into OpenCloud (formerly known as nextcloud-go) [1]. I still use Nextcloud for the uploading of files and the calendar (though I may switch the latter), but I now sync the dir with Immich. Performance-wise a relief. I also swapped Airsonic Advanced (Java) with Navidrome (Go). Same story.

[1] https://github.com/opencloud-eu/opencloud

solarkraft 1 hour ago||
> but I now sync the dir with Immich

Do you use this for anything other than photos and videos?

blacklion 21 hours ago|||
Windows officially removed support for WebDAV. It still works, but nothing is guaranteed. It has a stupid file-size limit of 10MB; it can be lifted to 2GB (max signed 32-bit number) in the Registry, but that is still not very much in the modern world (I wanted to share my media library via WebDAV and failed due to this limitation). It loses credentials on a regular basis, errors are too vague («Wrong credentials» means both a mistyped password AND an expired server certificate), etc.
bigfatkitten 14 hours ago||
It’s also a bit of a disaster from a security perspective.

https://www.thehacker.recipes/ad/movement/mitm-and-coerced-a...

n3storm 1 day ago|||
Subversion works ok over webdav, it has done it for decades.

Mount a directory through NFS, SMB or SSH, and files are also downloaded in full before a program accesses them. What do you mean? Listing a directory or accessing file properties, like size for example, does not need a full download.

112233 1 day ago|||
I am confused, what do you mean? What OS forces you to download whole file over NFS or SMB before serving read()? Even SFTP does support reading and writing at an offset.
n3storm 1 day ago|||
If I open a doc over NFS with, let's say, LibreOffice, will I not download the whole file?

On second thought, I think you are looking at WebDAV as sysadmins, not as developers. WebDAV was designed for document authoring, and you cannot author a document, version it, merge other authors' changes, or track changes without fully controlling resources. Conceptually it's much like how git needs a local copy.

I can't imagine how to have an editor editing a file while the file is being changed at any offset at any time by any unknown agent, without any type of orchestration.
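For what it's worth, that orchestration is exactly what WebDAV's LOCK method provides. A hedged sketch of the XML body a client sends to take an exclusive write lock (element names per RFC 4918; the helper itself is invented):

```python
import xml.etree.ElementTree as ET

DAV_NS = "DAV:"

def build_lockinfo(owner_href):
    """Sketch of a LOCK request body asking for an exclusive write lock."""
    root = ET.Element(f"{{{DAV_NS}}}lockinfo")
    scope = ET.SubElement(root, f"{{{DAV_NS}}}lockscope")
    ET.SubElement(scope, f"{{{DAV_NS}}}exclusive")   # nobody else may write
    locktype = ET.SubElement(root, f"{{{DAV_NS}}}locktype")
    ET.SubElement(locktype, f"{{{DAV_NS}}}write")
    owner = ET.SubElement(root, f"{{{DAV_NS}}}owner")
    href = ET.SubElement(owner, f"{{{DAV_NS}}}href")
    href.text = owner_href                           # who holds the lock
    return ET.tostring(root, encoding="unicode")

body = build_lockinfo("mailto:editor@example.com")
```

The server answers with a lock token the client must present on later writes - the part clients famously get wrong, as other comments here note.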

rkeene2 1 day ago|||
If you open a file with LibreOffice, it will read the whole thing regardless of whether the file is on NFS or not.

The parent comment was stating that if you use the open(2) system call on a WebDAV-mounted filesystem - which doesn't perform any read operation - the entire file will be downloaded locally before that system call completes. This is not true for NFS, which has more granular access patterns using the READ operation (e.g., READ3) and file-locking operations.

It may be the case that you're using an application that isn't LibreOffice on files that aren't as small as documents -- for example if you wanted to watch a video via a remote filesystem. If that filesystem is WebDAV (davfs2) then before the first piece of metadata can be displayed the entire file would be downloaded locally, versus if it was NFS each 4KiB (or whatever your block size is) chunk would be fetched independently.
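The access-pattern difference is easy to picture with a toy sketch (a plain local file standing in for the mount; sizes invented): the program only ever asks for the first block, which a granular filesystem can serve as-is, while davfs2 must fetch everything first.

```python
import os
import tempfile

# Create a "large" file, then read only its first 4 KiB -- the request
# pattern a block-granular filesystem (NFS, SMB, sshfs) can serve
# directly, but which davfs2 answers by downloading the whole file.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"\x00" * (1024 * 1024))   # 1 MiB of content

with open(path, "rb") as f:
    head = f.read(4096)                # we only ever ask for one block

os.remove(path)
```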

theamk 3 hours ago||||
Libreoffice will likely download the whole file.

But many other clients won't. In particular, any video player will _not_ download the entire file before accessing it. And for images, many viewers start showing the image before the whole thing is downloaded. And to look at zip files, you don't need the whole thing - just the index at the end. And for music, you stream data...

Requiring that files are "downloaded in full before a program accesses them" is a pretty bad degradation in a lot of cases. I've used SMB and NFS and sshfs, and they all let you read any range of a file and start returning data immediately, before any full download.
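The zip case can even be demonstrated offline with a small counting wrapper (invented for illustration): listing a ~1 MiB archive's contents touches only a tiny fraction of its bytes, because the index lives at the end.

```python
import io
import zipfile

class CountingFile(io.BytesIO):
    """BytesIO that records how many bytes are actually read."""
    def __init__(self, data):
        super().__init__(data)
        self.bytes_read = 0

    def read(self, size=-1):
        chunk = super().read(size)
        self.bytes_read += len(chunk)
        return chunk

# Build a zip holding one ~1 MiB member...
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("video.bin", b"\x00" * (1024 * 1024))
data = buf.getvalue()

# ...then list its contents: zipfile seeks to the end and reads only
# the central directory, never the member data itself.
f = CountingFile(data)
names = zipfile.ZipFile(f).namelist()
```

A filesystem that supports range reads can serve this with a couple of small requests; one that downloads whole files cannot.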

wbl 1 day ago|||
NFS infamously proxies reads and writes. Obviously there is some caching but that just makes the behavior funner.
shellac 1 day ago|||
Are you saying WebDAV doesn't support range requests?
112233 1 day ago|||
That's the beauty of working with WebDAV, also captured vividly in the article above -- any particular server/client combination feels no obligation to act like some "standards" prescribe, or to make use of the facilities available.

I might be wrong, but when I last mounted WebDAV from Windows, it did the same dumb thing too.

blacklion 21 hours ago|||
WebDAV as standard? Supports. This particular combination of client and server? Who knows, good luck.
goodthink 17 hours ago|||
> Subversion works ok over webdav, it has done it for decades.

Thank you!!!!

jjkaczor 1 day ago|||
Actually - I believe - within Windows 11 - the "WebClient" service is now deprecated (which is what - IIRC, actually implements the WebDAV client protocol so that it works with Windows File Explorer, drive mappings, etc.)...

Played around with WebDAV a lot... a long time ago... (Exchange Webstore/Webstorage System, STS/SharePoint early editions)...

heavyset_go 17 hours ago||
Regarding Linux, WebDAV has been partially working/broken in Dolphin/kio since Plasma 5 on KDE. I've found the davfs2 FUSE module to be more reliable.
112233 1 hour ago|||
A sibling comment mentioned rclone, which is an enabling piece of software and much better at WebDAV than davfs2.
QuercusMax 14 hours ago|||
I just imagined implementing webdav as a kernel module and I think I just broke my brain
heavyset_go 6 hours ago||
Here's some prior art for your cursed journey https://github.com/sysprog21/khttpd
imclaren 21 hours ago||
I built a Go CalDAV server and client for my task management app (http://calmtasks.com) and had a similar experience, which surprised me. Go generally has at least one good, working, and well documented implementation of every standard protocol.

Apple Calendar supports CalDAV, but in a way not specified in the spec. I basically had to send requests and inspect responses to figure out how it works. I would be willing to open source my server and client (a lot of which was built using/on top of existing libraries) if there is interest.

raybb 18 hours ago||
Why did you make a native app instead of PWA? Because of push notifications or just ease of development?

Also, would be nice to add some screenshots of the web UI.

Looks like a nice little app!

sdoering 21 hours ago||
I'd be interested. A CalDAV server is still on my list.
nedt 4 hours ago||
I once implemented a WebDAV server in PHP. The standard isn't that bad, and clients more or less follow it. It's still horrible how they do so. When opening a single file, I saw behaviors like:

  - does / exist?
  - does /path/to exist?
  - does /path/to/file exist?
  - create a new file /path/to/file.lock
  - does /path/to/file.lock exist?
  - does / exist?
  - does /path/to/file exist?
  - lock /path/to/file
  - get content of /path/to/file
  - unlock /path/to/file
  - does /path/to/file.lock exist?
  - remove /path/to/file.lock
(If not exactly like that, it was at least very close; that was either Finder on OS X or Explorer on Windows.) Without a good caching mechanism, it's hard to handle all of the load once you have multiple users.

Also, the overwrite option was never used. You'd expect a client to copy a file, get an error if the target exists, ask the user if it's OK, then resend the same copy with the overwrite flag set to true. In reality, clients do all the steps manually and delete the target before copying.

It was satisfying to see it work in the end, but you really need to test all the clients in addition to just implementing the standard.
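That burst of existence checks is why caching matters so much here. A hypothetical sketch (names invented) of a short-TTL stat cache a server could put in front of its storage backend, so one client's probe storm hits storage only once per path:

```python
import time

class StatCache:
    """Tiny TTL cache for "does this path exist?" answers."""

    def __init__(self, ttl=2.0):
        self.ttl = ttl
        self._entries = {}               # path -> (expires_at, exists)

    def exists(self, path, probe):
        now = time.monotonic()
        hit = self._entries.get(path)
        if hit is not None and hit[0] > now:
            return hit[1]                # answer from cache
        exists = probe(path)             # the real (slow) storage lookup
        self._entries[path] = (now + self.ttl, exists)
        return exists

calls = []

def slow_probe(path):
    calls.append(path)                   # count backend hits
    return not path.endswith(".lock")

cache = StatCache()
for _ in range(5):                       # a client probing the same path
    cache.exists("/path/to/file", slow_probe)
```

Five probes, one backend hit. A real server would also have to invalidate entries on PUT/DELETE/MOVE, which this sketch omits.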

mickael-kerjean 22 hours ago||
Articles like this shitting on WebDAV really rub me the wrong way, as I've seen first-hand discussions that go like: "the internet says WebDAV is hell, what's the better alternative? S3, of course!" And now every cloud provider, instead of providing a WebDAV interface, provides an S3 one, and it's worse in every possible way: you can't rename a file/folder because S3 does not support that; you can't use a classic username/password authentication mode but are forced to use an ugly access_key_id and secret_access_key; you can't bash your way around with a simple curl command because generating the signature requires a proper programming language; and you have to trust Amazon to do the right thing instead of going through the RFC process - except they've already shown, a few months ago, their complete lack of care for any S3-compliant server by introducing a breaking change that literally broke the entire ecosystem of "S3 compliant" implementations overnight, without any prior warning.

I wish WebDAV had a better reputation; it carries the original promise of S3 of being actually simple, but S3 won the war with evangelism. I would much have preferred a world where new versions of the WebDAV protocol were made to address the quirks, exactly like what happened with protocols like HTTP, OAuth, ...
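On the "can't bash your way around" point: WebDAV accepts plain HTTP Basic auth, which is why `curl -u user:pass` just works, while SigV4 needs canonical requests, scoped keys and HMAC chains. A sketch of the entire WebDAV "signature":

```python
import base64

def basic_auth_header(user, password):
    """HTTP Basic auth: the whole credential is one base64 of user:pass."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

hdr = basic_auth_header("user", "pass")
```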

kjellsbells 7 hours ago||
Postel's Law strikes again. What's the point of having RFCs with MUST and SHOULD if everyone does what they need? You end up with French cafe[0] implementations.

[0] https://www.samba.org/ftp/tridge/misc/french_cafe.txt

philipwhiuk 7 hours ago|
You're only required to do a MUST if you claim to implement the RFC at all. You're not required to implement an RFC.

The author's mention of a lawsuit for not following an RFC is insane.

thayne 1 day ago||
> Ah, looks like it was somewhat superseded by RFC 4918, but we’re not going to tell you which parts! How about those extension RFCs? There’s only 7 of them…

This is a major complaint I have with RFCs.

If you want to know the current standard for a protocol or format you often have to look at multiple RFCs. Some of them partially replace parts of a previous RFC, but it isn't entirely clear which parts. And the old RFCs don't link to the new ones.

There are no fewer than 11 RFCs for HTTP (including versions 2 and 3).

I really wish IETF published living standards that combined all relevant RFCs together in a single source of truth.

braiamp 1 day ago||
Is this still true? AFAIK I've seen "Updated by" (RFC 2119) and "Obsoleted by" (RFC 3501), but that might have changed afterwards https://stackoverflow.com/a/39714048
marcosdumay 1 day ago||
Those notices don't usually point to all RFCs that update the one you are reading. They tend to be more complete on the case of obsolete ones.
mnot 23 hours ago||
https://httpwg.org/specs/
WhyNotHugo 19 hours ago||
When working on pimsync[1] and the underlying WebDAV/CalDAV/CardDAV implementation in libdav, I wrote "live tests" early on. These are integration tests which use real servers (radicale, xandikos, nextcloud, cyrus, etc). They do things like "create an event, update the event, fetch it, validate it was updated". Some tests handle exotic encoding edge cases, or try to modify something with a bogus "If-Match" header. All these tests were extremely useful for validating actual behaviour, in great part because the RFCs are pretty complex and easy to misinterpret. For anyone working in the field, I strongly suggest having extensive and easy-to-execute integration tests with multiple servers (or clients).

All servers have quirks, so each test is marked as "fails on xandikos" or "fails on nextcloud". There's a single test which fails on all the test servers (related to encoding). Trying to figure out why this test failed drove me absolutely crazy, until I finally understood that all implementations were broken in the same subtle way. Even excluding that particular test, every server fails at least one other test. So each server is broken in some subtle way - typically edge cases, of course.

By far, however, the worst offender is Apple's implementation. It seems that their CalDAV server has a sort of "eventual consistency" model: you can create a calendar, and then query the list of calendars… and the response indicates that the calendar doesn't exist! It usually takes a few seconds for calendars to show up, but this makes automated testing an absolute nightmare.

[1]: https://pimsync.whynothugo.nl/
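That per-server quirk tracking can be sketched as a small expected-failure table the test runner consults (server and test names here are invented, not pimsync's actual markers):

```python
# Known per-server failures: the suite still runs these tests, but turns
# a raw failure into an "expected failure" instead of a hard error.
KNOWN_FAILURES = {
    ("xandikos", "test_update_etag"),
    ("nextcloud", "test_exotic_encoding"),
}

def judge(server, test, passed):
    """Turn a raw pass/fail into a verdict, honouring known quirks."""
    if (server, test) in KNOWN_FAILURES:
        # An unexpected pass is worth flagging: maybe the quirk was fixed.
        return "xpass" if passed else "xfail"
    return "pass" if passed else "fail"
```

An "xpass" is a prompt to remove the entry; a plain "fail" on an unlisted pair is a real regression.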

HexDecOctBin 10 hours ago|
Which server was the most compliant? I have been using Radicale for a while, but would like to know if that is not a good choice.
publicdebates 23 hours ago||
I once implemented JavaScript's new async-for in plain Objective-C for a WebDAV app that I wrote for a client, about 15 years ago. I was so much smarter back then than I am now. Does this happen to everyone? You just go downhill? Anyway I'm sure there were complex edge cases of WebDAV that I missed, but it worked really well in all my tests, and my client never complained about it.
kayodelycaon 16 hours ago|
For myself I don't think I was smarter before, I just paid less attention to what I was doing. I didn't know about all the edge cases. I hadn't built it before so I massively underestimated how much work it would be to get done. This makes it much easier to start.

What I did before with ignorance, I now do with experience. For projects which support it, I write tests first. Find the edge cases and figure out what I'm going to skip. I will know the scope of my project before I start it.

With solid tests in place, my productivity and confidence soar. And the implementation doesn't result in as many bugfixes as it did in the past.

This kind of improvement is hard to notice. You're looking at the end result of your previous work, and your memory of working on it will be incomplete, while comparing it against what it would take for you to implement it now.

On top of all of this, do you have more responsibilities or think through your actions more than you did before? This sucks time and mental bandwidth. You have less opportunity to use your intelligence.

I had the same feeling before about a story I wrote. The stars aligned for me to write something truly excellent. For years I thought that it would be my best work. I've never been so relieved to hate something. I will always be proud of it but I no longer think it's the best I can do.

whizzter 1 day ago|
Actually done some WebDAV, did a small client (talking to Apache) from JS that worked well enough for my purposes.

The nasty surprise was doing the server-side (for a hobby-project), many layers. Luckily found out that something called DavTest exists (it's included with Debian) so testing most basic things wasn't too bad.

Then I tried mounting from Windows and ran into a bunch of small issues (IIRC you need to support locking); I got it to mount before noticing notes about a 50MB file-size limit by default (raisable... but yeah).

It's a shame it's all such a fragmented hodge-podge because adding SMB (the only other "universal" protocol) to an application server is just way too much complexity.
