On a different point, I don't think the author's complaint about having to "also" inspect the headers is a fair critique of DAV - HTTP headers carry one kind of information about a request/response, and the body carries another. I wish it were simpler, but I think it's an acceptable round-peg-in-a-round-hole use of the tools.
I've heard DeltaV is very advanced, and that Subversion supported it. I'm afraid to ask.
Overall, this has worked great for me, but it took me a while to set it up correctly. Now I have a cache of the files I use, and the rest of the stuff that I just keep there for backup or hoarding purposes doesn't take up disk space and stays in the cloud until I sync it.
Realistically speaking, most files in my cloud are read-only. The most common file that I read-write on multiple devices is my KeePass file, which supports conflict resolution (by merging changes) in clients.
Conflicts also used to happen when I edited some markdown notes with Obsidian on PC and then with a text editor (or maybe Obsidian again?) on Android, but I eventually gave up on that use case. Editing my notes from my phone is inconvenient anyway, so I mostly just create new short notes that I can later fold into some larger note - though honestly, I can't remember the last time that happened.
But yes, if you're not careful, your laptop could overwrite the file when it comes back online. In my case it rarely happens, and when it does, Nextcloud keeps the "overwritten version" saved, so I can always check what was overwritten and manually merge the changes.
P.S. If anyone wants to set this up, here's my NixOS config for the service; feel free to comment on it:
# don't forget to run `rclone config` beforehand
# to create the "nextcloud:" remote
# some day I may do this declaratively, but not today
systemd.services.rclone-nextcloud-mount = {
  # Ensure the service starts after the network is up
  wantedBy = [ "multi-user.target" ];
  after = [ "network-online.target" ];
  requires = [ "network-online.target" ];
  # Service configuration
  serviceConfig = let
    ncDir = "/home/username/nextcloud";
    mountOptions = "--vfs-cache-mode full --dir-cache-time 1w --vfs-cache-max-age 1w";
  in {
    Type = "simple";
    ExecStartPre = "/run/current-system/sw/bin/mkdir -p ${ncDir}"; # create the folder if it doesn't exist
    ExecStart = "${pkgs.rclone}/bin/rclone mount ${mountOptions} nextcloud: ${ncDir}"; # mount
    ExecStop = "/run/current-system/sw/bin/fusermount -u ${ncDir}"; # unmount
    Restart = "on-failure";
    RestartSec = "10s";
    User = "username";
    Group = "users";
    Environment = [ "PATH=/run/wrappers/bin/:$PATH" ];
  };
};

own^H^H^Hnextcloud
or
own^Wnextcloud
You might wanna look into OpenCloud (formerly known as nextcloud-go) [1]. I still use Nextcloud for uploading files and for the calendar (though I may switch the latter), but I now sync the directory with Immich. Performance-wise, it's a relief. I also swapped Airsonic Advanced (Java) for Navidrome (Go). Same story.
Do you use this for anything other than photos and videos?
https://www.thehacker.recipes/ad/movement/mitm-and-coerced-a...
Mounting a directory through NFS, SMB, or SSH - and files are downloaded in full before a program accesses them? What do you mean? Listing a directory or reading file properties, like size, doesn't require a full download.
On second thought, I think you're looking at WebDAV as sysadmins, not as developers. WebDAV was designed for document authoring, and you cannot author a document, version it, merge other authors' changes, or track changes without fully controlling the resource. Conceptually, it's much like how Git needs a local copy.
I can't imagine an editor working on a file while the file is changed at any offset, at any time, by some unknown agent, without any kind of orchestration.
The parent comment was stating that if you use the open(2) system call on a WebDAV mounted filesystem, which doesn't perform any read operation, the entire file will be downloaded locally before that system call completes. This is not true for NFS which has more granular access patterns using the READ operation (e.g., READ3) and file locking operations.
It may be the case that you're using an application that isn't LibreOffice on files that aren't as small as documents - for example, watching a video over a remote filesystem. If that filesystem is WebDAV (davfs2), then before the first piece of metadata can be displayed, the entire file will be downloaded locally; whereas with NFS, each 4 KiB chunk (or whatever your block size is) is fetched independently.
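A small sketch of that difference in Python, against hypothetical mount points - not the parent's code, just an illustration of the syscall-level behavior being described:

    import os

    # On a davfs2 mount, open(2) alone can block while the whole file is
    # fetched into the local cache; on an NFS mount, only the read below
    # triggers I/O, and only for the requested range.
    fd = os.open("/mnt/dav/video.mkv", os.O_RDONLY)  # hypothetical mount point
    header = os.read(fd, 4096)  # NFS would issue a single ~4 KiB READ for this
    os.close(fd)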
But many other clients won't. In particular, a video player will _not_ download the entire file before accessing it. Many image viewers start showing the image before the whole thing is downloaded. To look at zip files, you don't need the whole thing - just the index at the end. And for music, you stream the data...
Requiring that a file be "downloaded in full before programs access it" is a pretty bad degradation in a lot of cases. I've used SMB, NFS, and sshfs, and they all let you read any range of a file and start returning data immediately, before the full download.
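For illustration, the zip case above comes down to a single ranged read; a minimal sketch with Python's requests, assuming a hypothetical URL:

    import requests

    url = "https://dav.example.com/archive.zip"  # hypothetical server
    # Ask for just the last 64 KiB, enough to locate a zip's central directory.
    r = requests.get(url, headers={"Range": "bytes=-65536"})
    print(r.status_code)   # 206 Partial Content if the server honors Range
    print(len(r.content))  # at most 65536 bytes, not the whole archive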
I might be wrong, but when I last mounted WebDAV from Windows, it did the same dumb thing too.
Thank you!!!!
Played around with WebDAV a lot... a long time ago... (Exchange Webstore/Webstorage System, STS/SharePoint early editions)...
Apple Calendar supports CalDAV, but in a way not specified in the spec. I basically had to capture requests and responses to figure out how it works. I'd be willing to open-source my server and client (a lot of which was built using/on top of existing libraries) if there is interest.
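For anyone curious what that probing looks like: a minimal sketch of the standard CalDAV bootstrap step (the well-known URL from RFC 6764 plus a current-user-principal PROPFIND). The server URL and credentials are placeholders, and this is not the commenter's actual code:

    import requests

    base = "https://caldav.example.com"  # hypothetical server
    body = (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<D:propfind xmlns:D="DAV:">'
        '<D:prop><D:current-user-principal/></D:prop>'
        '</D:propfind>'
    )
    r = requests.request("PROPFIND", f"{base}/.well-known/caldav",
                         auth=("user", "pass"), data=body,
                         headers={"Depth": "0", "Content-Type": "application/xml"})
    print(r.status_code, r.text)  # a 207 multistatus naming the principal URL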
Also, it would be nice to add some screenshots of the web UI.
Looks like a nice little app!
- does / exist?
- does /path/to exist?
- does /path/to/file exist?
- create a new file /path/to/file.lock
- does /path/to/file.lock exist?
- does / exist?
- does /path/to/file exist?
- lock /path/to/file
- get content of /path/to/file
- unlock /path/to/file
- does /path/to/file.lock exist?
- remove /path/to/file.lock
(If not exactly like that, it was at least very close; that was either Finder on OS X or Explorer on Windows. In WebDAV terms, the sequence maps roughly to the sketch below.)
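A hedged Python sketch of that dance, assuming a hypothetical server URL; each "does X exist?" probe is a PROPFIND with Depth: 0:

    import requests

    base = "https://dav.example.com"  # hypothetical server
    s = requests.Session()

    def exists(path):
        # PROPFIND with Depth: 0 -> 207 Multi-Status if present, 404 if not
        r = s.request("PROPFIND", base + path, headers={"Depth": "0"})
        return r.status_code == 207

    exists("/")
    exists("/path/to")
    exists("/path/to/file")
    s.put(base + "/path/to/file.lock", data=b"")  # create the .lock sidecar
    exists("/path/to/file.lock")

    # LOCK takes an XML lockinfo body; the server answers with a lock token
    lockinfo = ('<?xml version="1.0" encoding="utf-8"?>'
                '<D:lockinfo xmlns:D="DAV:">'
                '<D:lockscope><D:exclusive/></D:lockscope>'
                '<D:locktype><D:write/></D:locktype>'
                '</D:lockinfo>')
    r = s.request("LOCK", base + "/path/to/file", data=lockinfo,
                  headers={"Content-Type": "application/xml",
                           "Timeout": "Second-600"})
    token = r.headers.get("Lock-Token")

    body = s.get(base + "/path/to/file").content  # read the file
    s.request("UNLOCK", base + "/path/to/file", headers={"Lock-Token": token})
    exists("/path/to/file.lock")
    s.delete(base + "/path/to/file.lock")  # remove the sidecar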
Without some good caching mechanism, it's hard to handle all of the load once you get multiple users. Also, the overwrite option was never used. You'd expect a client to copy a file, get an error if the target exists, ask the user if it's OK, and resend the same copy with the overwrite flag set to true. In reality, clients do all the steps manually and delete the target before copying.
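For contrast, here's the spec'd flow next to the observed client behavior - again a sketch against a hypothetical server:

    import requests

    base = "https://dav.example.com"  # hypothetical server
    s = requests.Session()

    # The flow the protocol offers: one COPY that fails if the target exists...
    r = s.request("COPY", base + "/a.txt",
                  headers={"Destination": base + "/b.txt", "Overwrite": "F"})
    if r.status_code == 412:  # 412 Precondition Failed: target already exists
        # ...ask the user, then resend the same COPY with Overwrite: T
        s.request("COPY", base + "/a.txt",
                  headers={"Destination": base + "/b.txt", "Overwrite": "T"})

    # What clients were observed doing instead: delete, then copy.
    s.delete(base + "/b.txt")
    s.request("COPY", base + "/a.txt", headers={"Destination": base + "/b.txt"})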
It was satisfying seeing it work at the end, but you really need to test all the clients in addition to just implementing the standard.
I wish WebDAV had a better reputation; it carries the original promise of S3 - being actually simple - but S3 won the war with evangelism. I would much have preferred a world where new versions of the WebDAV protocol were published to address the quirks, exactly like what happened with protocols like HTTP, OAuth, ...
The author's mention of a lawsuit for not following an RFC is insane.
This is a major complaint I have with RFCs.
If you want to know the current standard for a protocol or format you often have to look at multiple RFCs. Some of them partially replace parts of a previous RFC, but it isn't entirely clear which parts. And the old RFCs don't link to the new ones.
There are no fewer than 11 RFCs for HTTP (including versions 2 and 3).
I really wish IETF published living standards that combined all relevant RFCs together in a single source of truth.
All servers have quirks, so each test is marked as "fails on xandikos" or "fails on nextcloud". There's a single test which fails on all the test servers (related to encoding). Trying to figure out why this test failed drove me absolutely crazy, until I finally understood that all implementations were broken in the same subtle way. Even excluding that particular test, every server fails at least one other test. So each server is broken in some subtle way - typically edge cases, of course.
By far, however, the worst offender is Apple's implementation. It seems that their CalDAV server has a sort of "eventual consistency" model: you can create a calendar, and then query the list of calendars… and the response indicates that the calendar doesn't exist! It usually takes a few seconds for calendars to show up, but this makes automated testing an absolute nightmare.
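That behavior forces a polling workaround into every test; a minimal sketch, assuming a hypothetical calendar-home URL and credentials:

    import time
    import requests

    home = "https://caldav.example.com/calendars/user/"  # hypothetical calendar home

    def wait_for_calendar(name, timeout=30.0):
        """Re-list the calendar home until the newly created calendar shows up."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            r = requests.request("PROPFIND", home, headers={"Depth": "1"},
                                 auth=("user", "pass"))
            if name in r.text:  # crude; a real test would parse the 207 multistatus XML
                return True
            time.sleep(1.0)     # give the "eventual consistency" a moment to catch up
        return False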
What I did before with ignorance, I now do with experience. For projects which support it, I write tests first. Find the edge cases and figure out what I'm going to skip. I will know the scope of my project before I start it.
With solid tests in place, my productivity and confidence soar. And the implementation doesn't need as many bugfixes as it did in the past.
This kind of improvement is hard to notice. You're looking at the end result of your previous work, and your memory of working on it will be incomplete. Instead, consider what it would take for you to implement it now.
On top of all of this, do you have more responsibilities now, or do you think through your actions more than you did before? That eats time and mental bandwidth. You have less opportunity to use your intelligence.
I had the same feeling before about a story I wrote. The stars aligned for me to write something truly excellent. For years I thought that it would be my best work. I've never been so relieved to hate something. I will always be proud of it but I no longer think it's the best I can do.
The nasty surprise was doing the server side (for a hobby project) - many layers. Luckily I found out that something called DavTest exists (it's included with Debian), so testing the most basic things wasn't too bad.
Then I tried mounting from Windows and ran into a bunch of small issues (IIRC you need to support locking), and got it to mount before noticing notes about a 50 MB file-size limit by default (raisable... but yeah).
It's a shame it's all such a fragmented hodge-podge, because adding SMB (the only other "universal" protocol) to an application server is just way too much complexity.