
Posted by rpgbr 20 hours ago

Why Nextcloud feels slow to use (ounapuu.ee)
414 points | 316 comments
palata 19 hours ago|
I would love to like Nextcloud; it's pretty great that it exists at all. That alone makes it better than... well, all the alternatives, which I haven't found.

What frustrates me is that it looks like it works, but once in a while it breaks in a way that is pretty much irreparable (or at least not in a practical way).

I want to run an iOS/Android app that backs up images on my server. I tried the iOS app and when it works, it's cool. It's just that once in a while I get errors like "locked webdav" files and it never seems to recover, or sometimes it just stops synchronising and the only way to recover seems to be to restart the sync from zero. It will gladly upload 80GB of pictures "for nothing", discarding each one when it arrives on the server because it already exists (or so it seems, maybe it just overwrites everything).

The thing is that I want my family to use the app, so I can't access their phone for multiple hours every 2 weeks; it has to work reliably.

If it was just for backing up my photos... well I don't need Nextcloud for that.

Again, alternatives just don't seem to exist where I can install an app on my parents' iOS devices and have it synchronise their photo gallery in the background. Except iCloud, I guess.

benhurmarcel 18 hours ago||
I stopped using Nextcloud when the iOS app lost data.

For some reason the app disconnected from my account in the background from time to time (annoying but didn't think it was critical). Once I pasted data on Nextcloud through the Files app integration, it didn't sync because it was disconnected and didn't say anything, and it lost the data.

xeromal 13 hours ago|||
Oof, sounds painful. It's hard to use anything when you can't trust its fundamentals.
ToucanLoucan 11 hours ago|||
I never had data outright vanish, but similar to the comment you replied to, it was just unreliable. I found Syncthing much more useful over the long haul. The last 3 times I've had to do anything with it were simply to manage having new machines replace old ones.

Syncthing sadly doesn't let you skip downloading certain folders or files, but I just moved those to other storage. It beats the Nextcloud headache.

lompad 18 hours ago|||
Recently people built a super-lightweight alternative named copyparty[0]. To me it looks like it does everything people tend to need, without all the bloat.

[0]: https://github.com/9001/copyparty

nucleardog 18 hours ago|||
I think "people" deserves clarification: Almost the entire thing was written by a single person and with a _seriously_ impressive feature set. The launch video is well worth a quick watch: https://www.youtube.com/watch?v=15_-hgsX2V0&pp=ygUJY29weXBhc...

I don't say this to diminish anyone else's contribution or criticize the software, just to call out the absolutely herculean feat this one person accomplished.

flanbiscuit 4 hours ago|||
There was an HN discussion about it 3 months ago with responses from the author, in case anyone is interested: https://news.ycombinator.com/item?id=44711519
mouse-5346 13 hours ago|||
Yeah, "people" there pretty much means one dude. It's mind-boggling how much that little program can do considering it has one dev.
tspng 13 hours ago||
Don't forget, "Lot of the code was written on a mobile phone using tmux and vim on a bus". That's crazy.
Imustaskforhelp 11 hours ago||
I have tried to run micro https://micro-editor.github.io/ on my phone, but running tmux and vim on a phone is some other beast entirely.

I have found that typing normally is much preferable on Android; I didn't like having to press colons or Ctrl or anything. Micro is really just such a great editor overall that it fit perfectly, and when I had that device I was writing more basic Python on my phone than on my PC.

Back then I was running Alpine on UserLand, and I learnt a lot trying to make that Alpine VM of sorts work with Python, as it basically refused to. I've probably forgotten most of it by now, and the solution was very hacky (maybe gcompat), but I liked it.

chappi42 18 hours ago||||
This is not an alternative as it only covers files. Mind what is in the article: "I like what Nextcloud offers with its feature set and how easily it replaces a bunch of services under one roof (files, calendar, contacts, notes, to-do lists, photos etc.), but ".

For us Nextcloud AIO is the best thing under the sun. It works reasonably well for our small company (about 10 ppl) and saves us from Microsoft. I'm very grateful to the developers.

Hopefully they are able to act upon such findings, or rewrite it in Go :-). Hmm, if Berlin (Germany) didn't waste so much money on ill-advised, ideology-driven and long-term state-destroying actions and "NGOs", it would have enough money to fund hundreds of such rewrites. Alas...

lachiflippi 17 hours ago|||
Why should Germany be wasting public money on a private company who keeps shoveling more and more restrictions on their open-source-washed "community" offering, and whose "enterprise" pricing comes in at twice* the price MS365 does for fewer features, worse integration, and with added costs for hosting, storage, and maintenance?

* or same, if excluding nextcloud talk, but then missing a chat feature

chappi42 16 hours ago|||
It makes a lot of sense for Germany to keep some independence from foreign proprietary cloud providers (Microsoft, Google); money very well invested imo. It helps the local industry, and data stays under German sovereignty.

I find your "open-source-washed" remark misplaced and quite derogatory. Nextcloud is, imo, totally right to (try to) monetize. They have to; they must further improve the technical backbone to stay competitive with the big boys.

redrblackr 17 hours ago|||
Could you expand on what restrictions they have placed on the community version?
lachiflippi 17 hours ago||
At the very least their app store, which is pretty much required for OIDC, most 2FA methods, and some other features, stops working at 500 users. AFAIK you can still manually install addons, it's just the integration that's gone, though I'm not 100% sure. Same with their notification push service (which is apparently closed source?[0]), which wouldn't be as much of an issue if there were proper docs on how to stand up your own instance of that.

IIRC they also display a banner on the login screen to all users advertising the enterprise license, and start emailing enterprise ads to all admin users.

Their "fair use policy"[1] also includes some "and more" wording.

[0] https://github.com/nextcloud/notifications/issues/82

[1] https://nextcloud.com/fairusepolicy/

akoboldfrying 10 hours ago||
> their app store, which is pretty much required for OIDC, most 2FA methods, and some other features, stops working at 500 users

How dare they. I just want to share photos and calendar with the 502 people in my immediate family.

mynameisvlad 18 hours ago||||
There is no way it’s going to be completely rewritten from scratch in Go, and none of whatever Germany is or isn’t doing affects that in any way shape or form.
preya2k 12 hours ago||
Actually, it's already been done by the former Nextcloud fork/predecessor. OwnCloud shared a big percentage of the Nextcloud codebase, but they decided to rewrite everything under the name OCIS (OwnCloud Infinite Scale) a couple of years ago. Recently, OwnCloud got acquired by Kiteworks and it seemed like they got in a fight with most of the staff. So big parts of the team left to start "OpenCloud", which is a fork of OCIS and is now a great competitor to Nextcloud. It's much more stable and uses fewer resources, but it also does a lot less than Nextcloud (namely only file sharing so far: no apps, no groupware).

https://github.com/opencloud-eu

hadlock 8 hours ago|||
Thanks for sharing this, I've been wanting to look at private cloud stuff but it was all written in PHP. It looks like OpenCloud is majority Go with some PHP and Gherkin, which is a step in the right direction.
mynameisvlad 7 hours ago||||
OCIS does only a small part of why people deploy NextCloud. I have run it, it’s great, but it’s not a replacement for the full suite nor is it trying to be.
brendoelfrendo 7 hours ago|||
I have OpenCloud working on my home server, and it features integration with the Collabora suite of software for office apps. Draw.io is also already supported.
brnt 17 minutes ago||
They offer a Docker compose file that sets up Collabora for you, but I can't find any info on other apps, let alone integration. Where can I see what they support?
preya2k 2 minutes ago||
There are no "Apps". It's not a universal app platform like Nextcloud. It's just file sharing (and optionally a Radicale calendar server via environment variable, but without a UI). There are optional plugins to open vendor-specific files right in the browser.
cbondurant 18 hours ago||||
It makes perfect sense to me that nextcloud is a good fit for a small company.

My biggest gripe, having used it for far longer than I should have, was always that it expected far too much maintenance (a 4-month release cadence) to make sense for individual use.

Doing that kind of regular upkeep on a tool meant for a whole team of people is a far more reasonable cost-benefit analysis. Especially since it only needs one technically savvy person working behind the scenes, and is very intuitive and familiar on its front-end. Making for great savings overall.

TuningYourCode 15 hours ago||
Hetzner's Storage Share product line offers a managed Nextcloud instance. I'm using them as I didn't want to deal with updating it myself.

The only downside is that you can't use apps/plugins which require additional local tools (e.g. ocrmypdf), but others can be used just fine.

Calling remotely hosted services works (e.g. if you have Elasticsearch on a VPS and set up the Nextcloud full-text search app accordingly).

upboundspiral 16 hours ago|||
I think what you described is basically ownCloud Infinite Scale (ocis). I haven't tested it myself but it's something I've been considering. I run normal owncloud right now over nextcloud as it avoided a few hiccups that I had.
preya2k 12 hours ago||
OCIS seems to have lost most of their team. They now work on a fork called OpenCloud. https://github.com/opencloud-eu
seemaze 18 hours ago||||
I found copyparty to be too busy on the UI/UX side of things. I've settled on dufs[0]: quick to deploy, fast to use, and cross-platform.

[0] https://github.com/sigoden/dufs

davidcollantes 18 hours ago||
Do you have a systemd unit for it, do you run it with Docker, or simply run it manually as needed? I find its simplicity perfect!
seemaze 17 hours ago||
I run it manually as needed. It's already packaged for both Alpine Linux and Homebrew which suits my ad-hoc needs wonderfully!
Dylan16807 15 hours ago||||
> everything people tend to need

> NOTE: full bidirectional sync, like what nextcloud and syncthing does, will never be supported! Only single-direction sync (server-to-client, or client-to-server) is possible with copyparty

Is sync not the primary use of nextcloud?

hebelehubele 2 hours ago||||
It's an amazing piece of software. If only the code & the configuration were readable. It's overly reliant on 2-3 letter abbreviations, which I'm sure follow a system, but one I haven't yet been able to decipher.
scrollop 17 hours ago||||
Copyparty looks amazing, wow

https://www.youtube.com/watch?v=15_-hgsX2V0

ryandrake 10 hours ago||
I watched the video, too, and while amazing, it's the poster child for feature creep. It starts out as a file server, and at some point in the demo it's playing transcoded media and editing markdown??

Really impressive, but I think I'll stick to NFS.

peanut-walrus 4 hours ago|||
Personally, the only thing I need is stable clients on both desktop and mobile with bidirectional sync. Copyparty seems really cool, but it explicitly does not do that.
wltr 1 hour ago||
Have you considered syncthing? There's a shiny new and super cool Sushi Train (or Sync Train, by its other name) app for iOS (I wish the author would make it a paid app, that's how much I like it!): https://github.com/pixelspark/sushitrain

Not affiliated, but a very happy user.

I mention iOS, because that was what I needed personally, as there was syncthing for Android since forever.

Larrikin 18 hours ago|||
For your specific use case of photos, Immich is the front runner and a much better experience. Sadly for the general Dropbox replacement I haven't found anything either.
nucleardog 18 hours ago|||
> Sadly for the general Dropbox replacement I haven't found anything either.

I had really good luck with Seafile[0]. It's not a full groupware solution, just primarily a really good file syncing/Dropbox solution.

Upsides are everything worked reliably for me, it was much faster, does chunk-level deduplication and some other things, has native apps for everything, is supported by rclone, has a fuse mount option, supports mounting as a "virtual drive" on Windows, supports publicly sharing files, shared "drives", end-to-end encryption, and practically everything else I'd want out of "file syncing solution".

The only thing I didn't like about it is that it stores all of your data as, essentially, opaque chunks on disk that are pieced together using the data in the database. This is how it achieves the performance, deduplication, and other things I _liked_. However it made me a little nervous that I would have a tough time extracting my data if anything went horribly wrong. I took backups. Nothing ever went horribly wrong over 4 or 5 years of running it. I only stopped because I shelved a lot of my self-hosting for a bit.

[0]: https://www.seafile.com/en/home/

Semaphor 18 hours ago|||
Yeah, went with that as well. It’s blazingly fast compared to NC.
oompydoompy74 15 hours ago||
Pretty sure that NextCloud uses Seafile behind the scenes unless I’m mistaken.
Semaphor 15 hours ago||
You are mistaken.
raphman 12 hours ago||||
I can confirm this. We have been using it for 10 years now in our research lab. No data loss so far. Performance is great. Integration with OnlyOffice works quite well (there were sync problems a few years ago - I think upgrading OnlyOffice solved this issue).
justinparus 16 hours ago|||
thanks for sharing. been looking for something like this for awhile
thuttinger 18 hours ago||||
For a general file sharing / storage solution there is also OpenCloud: https://opencloud.eu/de

It's what I want to try next. Written in Go, it looks promising.

karamanolev 16 hours ago||
Too many Cloud things! OwnCloud, NextCloud, OpenCloud. There have* to be better names available...
63stack 18 hours ago||||
Look into syncthing for a dropbox replacement, have been using it for years, very satisfied.
troyvit 18 hours ago|||
Syncthing is under my "want to like" list but I gave up on it. I'm a one person show who just wants to sync a few dozen markdown files across a few laptops and a phone. Every time I'd run it I'd invariably end up with conflict files. It got to the point where I was spending more time merging diffs than writing. How it could do that with just one person running it I have no idea.
Oxodao 18 hours ago|||
That should not happen. I use it a lot and have never had this issue; there may be something wrong with your setup.

A good idea is to have it on an always-on server and add your share as an encrypted one (like you set the password on all your apps but not on the server); this pretty much results in a dropbox-like experience since you have a central place to sync even when your other devices are not online

the_pwner224 15 hours ago||||
My Syncthing experience matches Oxodao's. Over years with >10k files / 100 gb, I've only ever had conflicts when I actually made conflicting simultaneous changes.

I use it on my phone (configured to only sync on WiFi), laptop (connected 99% of the time), and server (up 100% of the time).

The always-up server/laptop as a "master node" are probably key.

Joeri 17 hours ago||||
I had this when I had a windows system in the mix. Windows handles case differently in filenames than linux and macOS, and it caused conflicts.
Brian_K_White 16 hours ago|||
Same. I don't know why so many people like syncthing.
Imustaskforhelp 11 hours ago||
I don't think there is a good open source alternative to Syncthing for the way Syncthing just does syncing, no?

Let me know if you know of any alternative that has worked for you. I haven't tried Syncthing myself, but I have heard good things about it overall, so I feel like I like it already even though I haven't tried it, I guess.

layer8 16 hours ago|||
If you just need a Dropbox replacement for file syncing, Nextcloud is fine if you use the native file system integrations and ignore the web and WebDAV interfaces.
guilamu 18 hours ago||||
I'd say Ente Photos is at least as good as, if not better than, Immich.

https://github.com/ente-io/ente

omnimus 17 hours ago|||
I would say the opposite. Ente has one huge advantage in that it is E2EE, so it's a must if you are hosting someone else's photos. But if you are planning to run something on your server/NAS for yourself, then Immich has many advantages (that often relate to the E2EE). For example... your files are still files on the disk, so there's less worry about something unrecoverably breaking. And you can add external locations. With Ente it is just about backing up your phone photos. Immich works pretty well as a camera photo organizer.
dangus 16 hours ago||
The Ente desktop app has a continuous export function that’ll just dump everything into plain file directories.

It makes a little more sense when you’re using their cloud version, because otherwise you’re storing the data twice.

palata 15 hours ago||||
Does it have a mobile app that backs up the photos while in the background and can essentially be "forgotten"? That's pretty much what I need for my family: their photos need to get to my server magically.
omnimus 9 hours ago||
Both Ente and Immich have that.
fauigerzigerk 16 hours ago|||
I'm a very happy Ente Photos user as well.
redrblackr 17 hours ago||||
There is also "memories for nextcloud", which basically matches Immich in feature set (it was ahead until last month); Nextcloud + Memories makes a very strong replacement for Google Drive or Dropbox.
palata 15 hours ago||
Yeah I guess my issue is that if I can't trust the mobile app not to lose my photos (or stop syncing, or not sync everything), then I just can't use it at all. There is no point in having Nextcloud AND iCloud just because I don't trust Nextcloud :D.
noname120 9 hours ago||
Nextcloud mobile app is crap but fortunately it’s just WebDAV so you can use any other WebDAV app for synchronization.
palata 9 hours ago||
That's a good point! Are there good WebDAV apps that synchronise, say, the photo gallery on iOS, transparently and always in the background?
noname120 5 minutes ago||
Unfortunately Apple puts extremely strict restrictions on background tasks so you will never have something as seamless as native iCloud or the amazing Android FolderSync app that I used for realtime synchronization for several years without a single issue.

I know people work around these iOS limitations by setting up springboard widgets that piggyback on background refresh tasks to do uploads. People also create automations in the Shortcuts app (e.g. run every day at a set time, or location-based).

I haven’t tried it but a popular option on iOS seems to be: https://apps.apple.com/app/photosync-transfer-photos/id41585...

treve 18 hours ago||||
I replaced all my Dropbox uses with SyncThing (and love it). I run an instance on my server at all times and on every client.
BLKNSLVR 10 hours ago||
+1 for SyncThing

I have it installed on my immediate family's devices to ensure all the photos are auto-backed-up to our NAS (which is then backed up to another NAS).

I need to check to make sure it's still working once in a while (every couple of months), but it's usually fine, and even if it's somehow stopped working, getting it running again catches itself up to where it should have been anyway.

palata 15 hours ago||||
Does its iOS/Android app automatically backup the photos in the background? When I looked into Immich (didn't try it) it sounded like it was more of a server thing. I need the automation so that my family can forget about it.
conradev 17 hours ago||||
I use Syncthing as a Dropbox replacement, and I like it. I have a machine at home running it that is accessible over the net. Not the prettiest, but it works!
cortesoft 17 hours ago||||
I love Immich, too, but I have also run into a lot of issues with syncing large libraries. The iPhone app will just hang sometimes.
eptcyka 2 hours ago|||
Since the last major update to 2.0, it has gotten immensely better. Whereas before the app would hang for 30 seconds on startup and would only reliably sync in the foreground for my partner, it now just works: it opens right away and syncs in the background. I never had such issues on my phone; probably the size of your collection matters here.
palata 15 hours ago|||
Does it recover though, or do you end up in situations where your setup is essentially broken?

Like if I backup photos from iOS, then remove a subset of those from iOS to make space on the phone (but obviously I want to keep them on the cloud), and later the mobile app gets out of sync, I don't want to end up in a situation where some photos are on iOS, some on the cloud, but none of the devices has everything, and I have no easy way to resync them.

cortesoft 14 hours ago|||
It won't recover unless I do something... sometimes just quitting the iPhone app and then toggling enabling backups works, but not always. I had to completely delete and reinstall the app once to get it to work, and had to resync all 45000 images/videos I had.

I have had the server itself fail in strange ways where I had to restart it. I had to do a full fresh install once when it got hopelessly confused and I was getting database errors saying records either existed when they shouldn't or didn't exist when they should.

I think I am a pretty skilled sysadmin for these types of things, having both designed and administered very large distributed systems for two decades now. Maybe I am doing things wrong, but I think there are just some gotchas still with the project.

palata 14 hours ago||
Right, that's the kind of issues I am concerned about.

iCloud / Google Photos just don't have that, they really never lose a photo. It's very difficult for me to convince my family to move to something that may lose their data, when iCloud / Google Photos works and is really not that expensive.

cortesoft 14 hours ago||
It has gotten more stable as I have used it for a while. I think if you want to do it, just wait until it is stable and you have a good backup routine before relying on it.
localtoast 13 hours ago|||
I have found that adding the following four lines to the Immich proxy host in nginx proxy manager (advanced tab) solved my Immich syncing issues:

    client_max_body_size 50000M;
    proxy_read_timeout 600s;
    proxy_send_timeout 600s;
    send_timeout 600s;

FWIW, my library is about 22000 items large. Hope this helps someone.

jaden 17 hours ago||||
I too have found Syncthing + Filebrowser to be a sufficient substitute for Dropbox.
Handy-Man 18 hours ago||||
Have you looked into https://filebrowser.org/? While it's not drop-in replacement for Google Drive/Dropbox, it has been serving me well for similar quick usecase.
jjav 5 hours ago|||
Nextcloud is great, but I don't use it for backup (didn't realize it would even do that) so maybe that's why.

I use it for a family cloud service for chat, shared todo lists, shared calendar and shared editing docs (don't want to put anything private on e.g. google docs).

For all that, it's full of awesome.

stavros 15 hours ago|||
For photos, you can't beat Immich.
pjs_ 18 hours ago|||
I’ve tried every scheme under the sun and Immich is the only thing I’ve ever seen that actually works for this use case
jacomoRodriguez 14 hours ago|||
I switch to FolderSync for the upload from mobile. Works like a charm!

I know, it sucks that the official apps are buggy as hell, but the server side is real solid

nolan879 15 hours ago|||
This also happened to me with my nextcloud, thankfully I did not lose any photos. I transitioned to Immich for my photos and have not looked back.
exe34 18 hours ago|||
I use syncthing, I've got a folder shared between my phone, laptop and media center, and it just syncs everything easily.
dns_snek 14 hours ago|||
It works well for smaller folders but it slows down to a crawl with folders that contain thousands of files. If I add a file to an empty shared folder it will sync almost instantly but if I take a photo both sides become aware of the change rather quickly but then they just sit around for 5 minutes doing nothing before starting the transfer.
exe34 14 hours ago||
how many thousands? I have a folder with a total of 12760 files spread within several folders, but the largest I think is the one with 3827 files.

I've noticed the sync isn't instantaneous, but if I ping one device from the other, it starts immediately. I think Android has some kind of network related sleep somewhere, since the two nixos ones just sync immediately.

dns_snek 11 hours ago||
I have around 4000 photos and videos in this folder. I don't know what it is but I know that it's not a network issue.

I think it takes a long time because the phone CPU is much slower than the desktop but I couldn't tell you what it's doing, the status doesn't say anything useful except noting that files are out of sync and that the other device is connected.

exe34 2 hours ago||
yes I do wish it would say a bit more of what's going on and have a big button that says "try it now".
kelvinjps10 18 hours ago|||
I do the same it's so convenient
pdntspa 16 hours ago|||
SyncThing
dade_ 19 hours ago||
The Nextcloud Android app is particularly bad if you use it to back up your camera's DCIM directory and then delete the photos on your phone. It overwrites the files on Nextcloud as new photos are taken. I get why this happens, but it is terrible.
branon 7 hours ago|||
Will this also happen if you let the Nextcloud app rename the files as it uploads them? I usually take that option and haven't had an issue with this although I don't have it set to delete from my phone after uploading.
Yie1cho 19 hours ago|||
It's bad for everything.

I have lots of txt files on my phone which are just not synced up to my server (the files on the server are 0 bytes long).

I'm using txt files to take notes because the Notes app never worked for me (I get sync errors on any Android phone while it works on iPhone).

PaulKeeble 18 hours ago||
I don't doubt that large amounts of JavaScript can often cause issues, but even when cached, Nextcloud feels sluggish. When I look at just the network tab on a refresh of the calendar page, it does 124 network calls, 31 of which aren't cached. It seems to be making a call per calendar, each of which takes over 30ms, so that stacks up the more calendars you have (and you have a number by default, like contact birthdays).

The JavaScript performance trace shows over 50% of the work is in making the asynchronous calls that pull those calendars and other network calls one by one, and then in all the refresh updates it causes when putting them onto the page.

Supporting all these N calendar calls, it also pulls calendar rooms, calendar resources and "principals" for the user individually. All separate network calls, some of which must be gating the later per-calendar calls.

It's not just that; it also makes calls for notifications, groups, user status and multiple heartbeats to complete the page, all before it tries to get the calendar details.

This is why I think it feels slow: it pulls down the page, and then the JavaScript pulls down all the bits of data for everything on the screen with individual calls, waiting for the responses before it can progress to the further calls, of which there can be N many depending on what the user is doing.

So across the local network (2.5Gbps) that is a second, most of it spent waiting on the network. If I use the regular 4G level of throttling it takes 33.10 seconds! Really goes to show how badly this design copes with extra latency.
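
To make the waterfall concrete, here's a rough sketch of the pattern and the obvious parallel alternative (hypothetical /api/calendars endpoint, not Nextcloud's actual routes):

    // Hypothetical /api/calendars/<id> endpoint, purely for illustration.
    // Waterfall: each call waits for the previous response, so the total
    // time is roughly N round trips.
    async function loadCalendarsSerially(ids: string[]): Promise<unknown[]> {
      const calendars: unknown[] = [];
      for (const id of ids) {
        const res = await fetch(`/api/calendars/${id}`);
        calendars.push(await res.json());
      }
      return calendars;
    }

    // Parallel: all calls go out at once, so the total time is roughly
    // one round trip plus the slowest response.
    async function loadCalendarsInParallel(ids: string[]): Promise<unknown[]> {
      const responses = await Promise.all(ids.map((id) => fetch(`/api/calendars/${id}`)));
      return Promise.all(responses.map((res) => res.json()));
    }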

riskable 18 hours ago||
I was going to say... The size of the JS only matters the first time you download it, unless there are a lot of tiny files instead of a bundle or two. What the article is complaining about doesn't seem like the root cause of the slowness.

When it comes to JS optimization in the browser there's usually a few great big smoking guns:

    1. Tons of tiny files: Bundle them! Big bundle > zillions of lazy-loaded files.
    2. Lots of AJAX requests: We have WebSockets for a reason!
    3. Race conditions: Fix your bugs :shrug:
    4. Too many JS-driven animations: Use CSS or JS that just manipulates CSS.
Nextcloud appears to be slow because of #2. Both #1 and #2 are dependent on round-trip times (HTTP request to server -> HTTP response to client) which are the biggest cause of slowness on mobile networks (e.g. 5G).

Modern mobile network connections have plenty of bandwidth to deliver great big files/streams but they're still super slow when it comes to round-trip times. Knowing this, it makes perfect sense that Nextcloud would be slow AF on mobile networks because it follows the REST philosophy.

My controversial take: GIVE REST A REST already! WebSockets are vastly superior and they've been around for FIFTEEN YEARS now. Do I understand why they're so much lower latency than REST calls on mobile networks? Not really: in theory it's still a round trip, but for some reason an open connection can pass data through at an order of magnitude (or more) lower latency on something like a 5G connection.
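
If you want to see the gap for yourself, a quick-and-dirty measurement from the browser console looks something like this (it assumes a /ping endpoint and a /ws route that echoes messages back, which your server may not have):

    // Time n HTTP pings, one after another, and log each round trip.
    async function timeHttpPing(n = 10): Promise<void> {
      for (let i = 0; i < n; i++) {
        const t0 = performance.now();
        await fetch("/ping"); // hypothetical endpoint
        console.log("http", (performance.now() - t0).toFixed(1), "ms");
      }
    }

    // Time n pings over a single open WebSocket (server must echo messages).
    function timeWsPing(n = 10): void {
      const ws = new WebSocket(`wss://${location.host}/ws`); // hypothetical route
      ws.onopen = () => {
        let i = 0;
        let t0 = performance.now();
        ws.onmessage = () => {
          console.log("ws", (performance.now() - t0).toFixed(1), "ms");
          if (++i < n) {
            t0 = performance.now();
            ws.send("ping");
          }
        };
        ws.send("ping");
      };
    }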

amluto 30 minutes ago|||
Why WebSockets? If you need to fetch 30 things, you can build an elaborate protocol to stream them in without them interfering with each other, or you can ask for all thirty at once. Plain HTTP(S) can do the latter just fine, although the API might not be quite RESTful.
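
A minimal sketch of the "ask for all thirty at once" option, with a made-up endpoint and response shape:

    // One POST carries every id; one response carries every calendar.
    // The client pays a single round trip instead of thirty.
    async function loadCalendarsBatched(ids: string[]): Promise<unknown[]> {
      const res = await fetch("/api/calendars/batch", { // made-up endpoint
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ ids }),
      });
      if (!res.ok) throw new Error(`batch fetch failed: ${res.status}`);
      const body = await res.json(); // assumed shape: { calendars: [...] }
      return body.calendars;
    }
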
fwlr 17 hours ago||||
15MB of JavaScript is 15MB of code that your browser is trying to execute. It’s the same principle as “compiling a million lines of code takes a lot longer than compiling a thousand lines”.
riskable 17 hours ago||
It's a lot more complicated than that. If I have a 15MB .js file and it's just a collection of functions that get called on-demand (later), that's going to have a very, very low overhead because modern JS engines JIT compile on-the-fly (as functions get used) with optimization happening for "hot" stuff (even later).

If there's 15MB of JS that gets run immediately after page load, that's a different story. Especially if there's lots of nested calls. Ever drill down deep into a series of function calls inside the performance report for the JS on a web page? The more layers of nesting you have, the greater the overhead.

DRY as a concept is great from a code readability standpoint, but it's not ideal for performance when it comes to things like JS execution (haha). I'm actually disappointed that modern bundlers don't normally inline calls at the JS layer. IMHO, they rely too much on the JIT to optimize hot call sites when that could've been done by the bundler. Instead, bundlers tend to optimize for file size, which is becoming less and less of a concern as bandwidth has far outpaced JS bundle sizes.

The entire JS ecosystem is a giant mess of "tiny package does one thing well" that is dependent on n layers of "other tiny package does one thing well." This results in LOADS of unnecessary nesting when the "tiny package that does one thing well" could've just written their own implementation of that simple thing it relies on.

Don't think of it from the perspective of, "tree shaking is supposed to take care of that." Think of it from the perspective of, "tree shaking is only going to remove dead/duplicated code to save file size." It's not going to take that 10-line function that handles <whatever> and put that logic right where it's used (in order to shorten the call tree).

Joeri 16 hours ago||
That 15mb still needs to be parsed on every page load, even if it runs in interpreted mode. And on low end devices there’s very little cache, so the working set is likely to be far bigger than available cache, which causes performance to crater.
riskable 16 hours ago||
Ah, that's the thing: "on page load". A one-time expense! If you're using modern page routing, "loading a new URL" isn't actually loading a new page... The client is just simulating it via your router/framework by updating the page URL and adding an entry to the history.
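
The whole trick is a couple of History API calls; here's a bare-bones sketch (renderRoute stands in for whatever your framework actually does):

    // Stand-in for the framework's view-swapping logic.
    function renderRoute(path: string): void {
      document.querySelector("#app")!.textContent = `rendered ${path}`;
    }

    // "Navigate" without a full page load: push a history entry and re-render.
    function navigate(path: string): void {
      history.pushState({}, "", path);
      renderRoute(path);
    }

    // Back/forward buttons fire popstate; re-render from the current URL.
    window.addEventListener("popstate", () => renderRoute(location.pathname));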

Also, 15MB of JS is nothing on modern "low end devices". Even an old, $5 Raspberry Pi 2 won't flinch at that and anything slower than that... isn't my problem! Haha =)

There comes a point where supporting 10yo devices isn't worth it when what you're offering/"selling" is the latest & greatest technology.

It shouldn't be, "this is why we can't have nice things!" It should be, "this is why YOU can't have nice things!"

snovv_crash 16 hours ago|||
When you write code with this mentality it makes my modern CPU with 16 cores at 4GHz and 64GB of RAM feel like a Pentium 3 running at 900MHz with 512MB of RAM.

Please don't.

binary132 15 hours ago||
THANK YOU
fluoridation 14 hours ago|||
>There comes a point where supporting 10yo devices isn't worth it

Ten years isn't what it used to be in terms of hardware performance. Hell, even back in 2015 you could probably still make do with a computer from 2005 (although it might have been on its last legs). If your software doesn't run properly (or at all) on ten-year-old hardware, it's likely people on five-year-old hardware, or with a lower budget, are getting a pretty shitty experience.

I'll agree that resources are finite and there's a point beyond which further optimizations are not worthwhile from a business sense, but where that point lies should be considered carefully, not picked arbitrarily and the consequences casually handwaved with an "eh, not my problem".

fluoridation 17 hours ago||||
>Do I understand why they're so much lower latency than REST calls on mobile networks? Not really: In theory, it's still a round-trip but for some reason an open connection can pass data through an order of magnitude (or more) lower latency on something like a 5G connection.

It's because a TLS handshake takes more than one roundtrip to complete. Keeping the connection open means the handshake needs to be done only once, instead of over and over again.

binary132 15 hours ago|||
doesn’t HTTP keep connections open?
fluoridation 14 hours ago||
It's up to the client to do that. I'm merely explaining why someone would see a latency improvement switching from HTTPS to websockets. If there's no latency improvement then yes, the client is keeping the connection alive between requests.
riskable 17 hours ago|||
Yes and no: There's still a rather large latency improvement even when you're using plain HTTP (not that you should go without encryption).

I was very curious so I asked AI to explain why websockets would have such lower latency than regular HTTP and it gave some (uncited, but logical) reasons:

Once a WebSocket is open, each message avoids several sources of delay that an HTTP request can hit—especially on mobile. The big wins are skipping connection setup and radio wakeups, not shaving a few header bytes.

Why WebSocket “ping/pong” often beats HTTP GET /ping on mobile

    No connection setup on the hot path
        HTTP (worst case): DNS + TCP 3‑way handshake + TLS handshake (HTTPS) before you can send the request. On mobile RTTs (60–200+ ms), that’s 1–3 extra RTTs, i.e., 100–500+ ms just to get started.
        HTTP with keep‑alive/H2/H3: Better (no new TCP/TLS), but pools can be empty or closed by OS/radios/idle timers, so you still pay setup sometimes.
        WebSocket: You pay the TCP+TLS+Upgrade once. After that, a ping is just one round trip on an already‑open connection.


    Mobile radio state promotions
        Cellular modems drop to low‑power states when idle. A fresh HTTP request can force an RRC “promotion” from idle to connected, adding tens to hundreds of ms.
        A long‑lived WebSocket with periodic keepalives tends to keep the radio in a faster state or makes promotion more likely to already be done, so your message departs immediately.
        Trade‑off: keeping the radio “warm” costs battery; most realtime apps tune keepalive intervals to balance latency vs power.


    Fewer app/stack layers per message
        HTTP request path: request line + headers (often cookies, auth), routing/middleware, logging, etc. Even with HTTP/2 header compression, the server still parses and runs more machinery.
        WebSocket after upgrade: tiny frame parsing (client→server frames are 2‑byte header + 4‑byte mask + payload), often handled in a lightweight event loop. Much less per‑message work.
         

    No extra round trips from CORS preflight
        A simple GET usually avoids preflight, but if you add non‑safelisted headers (e.g., Authorization) the browser will first send an OPTIONS request. That’s an extra RTT before your GET.
        WebSocket doesn’t use CORS preflights; the Upgrade carries an Origin header that servers can validate.


    Warm path effects
        Persistent connections retain congestion window and NAT/firewall state, reducing first‑packet delays and occasional SYN drops that new HTTP connections can encounter on mobile networks.

What about encryption (HTTPS/WSS)?

    Handshake cost: TLS adds 1–2 RTTs (TLS 1.3 is 1‑RTT; 0‑RTT is possible but niche). If you open and close HTTP connections frequently, you keep paying this. A WebSocket pays it once, then amortizes it over many messages.
    After the connection is up, the per‑message crypto cost is small compared to network RTT; the latency advantage mainly comes from avoiding repeated handshakes.
     
How much do headers/bytes matter?

    For tiny messages, both HTTP and WS fit in one MTU. The few hundred extra bytes of HTTP headers rarely change latency meaningfully on mobile; the dominant factor is extra round trips (connection setup, preflight) and radio state.
     
When the gap narrows

    If your HTTP requests reuse an existing HTTP/2 or HTTP/3 connection, have no preflight, and the radio is already in a connected state, a minimal GET /ping and a WS ping/pong both take roughly one network RTT. In that best case, latencies can be similar.
    In real mobile conditions, the chances of hitting at least one of the slow paths above are high, so WebSocket usually looks faster and more consistent.
fluoridation 16 hours ago||
Wow. Talk about inefficiency. It just said the same thing I did, but using twenty times as many characters.

>Yes and no: There's still a rather large latency improvement even when you're using plain HTTP (not that you should go without encryption).

Of course. An unencrypted HTTP request takes a single roundtrip to complete. The client sends the request and receives the response. The only additional cost is to set up the connection, which is also saved when the connection is kept open with a websocket.

cloudfudge 15 hours ago||
Yes and no. Have you considered that the problem is that a TLS handshake takes more than one round trip to complete?

/s

Yokolos 17 hours ago||||
I've never seen anybody recommend WebSockets instead of REST. I take it this isn't a widely recommended solution? Do you mean specifically for mobile clients only?
DecoPerson 16 hours ago|||
WebSockets are the secret ingredient to amazing low- to medium-user-count software. If you practice using them enough and build a few abstractions over them, you can produce incredible “live” features that REST-designs struggle with.

Having used WebSockets a lot, I’ve realised that it’s not the simple fact that WebSockets are duplex or that it’s more efficient than using HTTP long-polling or SSEs or something else… No, the real benefit is that once you have a “socket” object in your hands, and this object lives beyond the normal “request->response” lifecycle, you realise that your users DESERVE a persistent presence on your server.

You start letting your route handlers run longer, so that you can send the result of an action, rather than telling the user to “refresh the page” with a 5-second refresh timer.

You start connecting events/pubsub messages to your users and forwarding relevant updates over the socket you already hold. (Trying to build a delta update system for polling is complicated enough that the developers of most bespoke business software I’ve seen do not go to the effort of building such things… But with WebSockets it’s easy, as you just subscribe before starting the initial DB query and send all broadcasted updates events for your set of objects on the fly.)

You start wanting to output the progress of a route handler to the user as it happens (“Fetching payroll details…”, “Fetching timesheets…”, “Correlating timesheets and clock in/out data…”, “Making payments…”).

Suddenly, as a developer, you can get live debug log output IN THE UI as it happens. This is amazing.

AND THEN YOU WANT TO CANCEL SOMETHING because you realise you accidentally put in the actual payroll system API key. And that gets you thinking… can I add a cancel button in the UI?

Yes, you can! Just make a ‘ctx.progress()’ method. When called, if the user has cancelled the current RPC, then throw a RPCCancelled error that’s caught by the route handling system. There’s an optional first argument for a progress message to the end user. Maybe add a “no-cancel” flag too for critical sections.

And then you think about live collaboration for a bit… that’s a fun rabbit hole to dive down. I usually just do “this is locked for editing” or check the per-document incrementing version number and say “someone else edited this before you started editing, your changes will be lost — please reload”. Figma cracked live collaboration, but it was very difficult based on what they’ve shared on their blog.

And then… one day… the big one hits… where you have a multistep process and you want Y/N confirmation from the user or some other kind of selection. The sockets are duplex! You can send a message BACK to the RPC client, and have it handled by the initiating code! You just need to make it so devs can add event listeners on the RPC call handle on the client! Then, your server-side route handler can just “await” a response! No need to break up the handler into multiple functions. No need to pack state into the DB for resumability. Just await (and make sure the Promise is rejected if the RPC is cancelled).

If you have a very complex UI page with live-updating pieces, and you want parts of it to be filterable or searchable… This is when you add “nested RPCs”. And if the parent RPC is cancelled (because the user closes that tab, or navigates away, or such) then that RPC and all of its children RPCs are cancelled. The server-side route handler is a function closure, that holds a bunch of state that can be used by any of the sub-RPC handlers (they can be added with ‘ctx.addSubMethod’ or such).

The end result is: while building out any feature of any “non-web-scale” app, you can easily add levels of polish that are simply too annoying to obtain when stuck in a REST point of view. Sure, it’s possible to do the same thing there, but you’ll get frustrated (and so development of such features will not be prioritised). Also, perf-wise, REST is good for “web scale” / high-user-counts, but you will hit weird latency issues if you try to use for live, duplex comms.

WebSockets (and soon HTTP3 transport API) are game-changing. I highly recommend trying some of these things.
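
To make the progress/cancel idea concrete, here's a rough server-side sketch using Node's "ws" package and a made-up message format; fetchTimesheets/makePayments are just stand-ins, and real code would need auth, validation, and so on:

    import { WebSocketServer } from "ws";

    // Made-up wire format: every message is JSON with { id, type, ... }.
    const cancelled = new Set<number>();
    const wss = new WebSocketServer({ port: 8080 });

    wss.on("connection", (socket) => {
      socket.on("message", async (raw) => {
        const msg = JSON.parse(raw.toString());

        if (msg.type === "cancel") { cancelled.add(msg.id); return; }
        if (msg.type !== "payroll:run") return;

        // Progress doubles as the cancellation checkpoint.
        const progress = (text: string) => {
          if (cancelled.has(msg.id)) throw new Error("RPCCancelled");
          socket.send(JSON.stringify({ id: msg.id, type: "progress", text }));
        };

        try {
          progress("Fetching timesheets…");
          const sheets = await fetchTimesheets();
          progress("Making payments…");
          await makePayments(sheets);
          socket.send(JSON.stringify({ id: msg.id, type: "ok" }));
        } catch (err) {
          socket.send(JSON.stringify({ id: msg.id, type: "error", error: String(err) }));
        } finally {
          cancelled.delete(msg.id);
        }
      });
    });

    // Stand-ins so the sketch is self-contained.
    async function fetchTimesheets(): Promise<unknown[]> { return []; }
    async function makePayments(_sheets: unknown[]): Promise<void> {}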

tyre 14 hours ago||
Find someone to love you the way DecoPerson loves websockets.
riskable 16 hours ago|||
After all my years of web development, my rules are thus:

    * If the browser has an optimal path for it, use HTTP (e.g. images where it caches them automatically or file uploads where you get a "free" progress API).
    * If I know my end users will be behind some shitty firewall that can't handle WebSockets (like we're still living in the early 2010s), use HTTP.
    * Requests will be rare (per client):  Use HTTP.
    * For all else, use WebSockets.
WebSockets are just too awesome! You can use a simple event dispatcher for both the frontend and the backend to handle any given request/response and it makes the code sooooo much simpler than REST. Example:

    WSDispatcher.on("pong", pongFunc);
...and `WSDispatcher` would be the (singleton) object that holds the WebSocket connection and has `on()`, `off()`, and `dispatch()` functions. When the server sends a message like `{"type": "pong", "payload": "<some timestamp>"}`, the client calls `WSDispatcher.dispatch("pong", "<some timestamp>")` which results in `pongFunc("<some timestamp>")` being called.

It makes reasoning about your API so simple and human-readable! It's also highly performant and fully async. With a bit of Promise wrapping, you can even make it behave like a synchronous call in your code which keeps the logic nice and concise.

In my latest pet project (collaborative editor) I've got the WebSocket API using a strict "call"/"call:ok" structure. Here's an example from my WEBSOCKET_API.md:

    ### Create Resource
    ```javascript
    // Create story
    send('resources:create', {
      resource_type: 'story',
      title: 'My New Story',
      content: '',
      tags: {},
      policy: {}
    });
    
    // Create chapter (child of story)
    send('resources:create', {
      resource_type: 'chapter',
      parent_id: 'story_abc123', // This would actually be a UUID
      title: 'Chapter 1'
    });
    
    // Response:
    {
      type: 'resources:create:ok', // <- Note the ":ok"
      resource: { id: '...', resource_type: '...', ... }
    }
    ```
I've got a `request()` helper that makes the async nature of the WebSocket feel more like a synchronous call. Here's what that looks like in action:

    const wsPromise = getWsService(); // Returns the WebSocket singleton
    
    // Create resource (story, chapter, or file)
    async function createResource(data: ResourcesCreateRequest) {
      loading.value = true;
      error.value = null;
      try {
        const ws = await wsPromise;
        const response = await ws.request<ResourcesCreateResponse>(
          "resources:create",
          data // <- The payload
        );
        // resources.value because it's a Vue 3 `ref()`:
        resources.value.push(response.resource); 
        return response.resource;
      } catch (err: any) {
        error.value = err?.message || "Failed to create resource";
        throw err;
      } finally {
        loading.value = false;
      }
    }
For reference, errors are returned in a different, more verbose format where "type" is "error" in the object that the `request()` function knows how to deal with. It used to be ":err" instead of ":ok" but I made it different for a good reason I can't remember right now (LOL).
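
For the curious, a stripped-down sketch of what a request() helper like that can look like; the real thing correlates responses more carefully, here it's by message type only and the wire shape is assumed:

    type Pending = { resolve: (value: any) => void; reject: (reason: any) => void };

    class WsService {
      private pending = new Map<string, Pending[]>();

      constructor(private socket: WebSocket) {
        socket.addEventListener("message", (ev) => {
          const msg = JSON.parse(ev.data);
          if (typeof msg.type === "string" && msg.type.endsWith(":ok")) {
            // Resolve the oldest request waiting on this message type.
            this.pending.get(msg.type.slice(0, -3))?.shift()?.resolve(msg);
          } else if (msg.type === "error") {
            // Naive error handling: an error rejects every pending request.
            for (const queue of this.pending.values()) {
              while (queue.length) queue.shift()!.reject(msg);
            }
          }
        });
      }

      // Send a typed message and get a Promise for the matching ":ok" reply.
      request<T = any>(type: string, payload: object): Promise<T> {
        return new Promise<T>((resolve, reject) => {
          const queue = this.pending.get(type) ?? [];
          queue.push({ resolve, reject });
          this.pending.set(type, queue);
          this.socket.send(JSON.stringify({ type, ...payload })); // assumed wire shape
        });
      }
    }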

Aside: there are still THREE firewalls that suck so badly they can't handle WebSockets: Sophos XG Firewall, WatchGuard, and McAfee Web Gateway.

jadbox 8 hours ago|||
How do you feel about SSE then?
bityard 14 hours ago|||
The thing that kills me is that Nextcloud had an _amazing_ calendar a few years ago. It was way better than anything else I have used. (And I tried a lot, even the calendar add-on for Thunderbird. Which may or may not be built in these days, I can't keep track.)

Then at some point the Nextcloud calendar was "redesigned" and now it's completely terrible. Aesthetically, it looks like it was designed for toddlers. Functionally, adding and editing events is flat out painful. Trying to specify a time range for an event is weird and frustrating. It's better than not having a calendar, but only just.

There are plenty of open source calendar _servers_, but no good open source web-based calendars that I have been able to find.

jauntywundrkind 17 hours ago||
Sync Conf is next week, and this sort of issue is exactly the kind of thing I hope can just go away. https://syncconf.dev/

Efforts like Electric SQL to have APIs/protocols for bulk fetching all changes (to a "table") are where it's at. https://electric-sql.com/docs/api/http

It's so rare for teams to do data loading well, rarer still that we get effective caching, and often a product's footing here only degrades with time. The various sync ideas out there offer such an alluring potential of having a consistent way to get the client the updated live data it needs.

Side note, I'm also hoping the JS / TC39 source phase imports proposal (aka import source) can help large apps like Nextcloud defer loading more of their JS until needed. But the waterfall you call out here seems like the really bad part (of Nextcloud's architecture)! https://github.com/tc39/proposal-source-phase-imports

dingdingdang 18 hours ago||
Having at some point maintained a soft fork / patch set for Nextcloud... yes, there is so much performance left on the table. With a few basic patches the file manager, for example, sped up by orders of magnitude in terms of render speed.

The issue remains that the core itself feels like layers upon layers of encrusted code that, instead of being fixed, have just had another layer added... "Something fundamentally wrong? Just add Redis as a dependency. Does it help? Unsure. Let's add something else. Don't like having the config in a db? Let's move some of it to ini files (or vice versa)... etc. etc." It feels like that's the cycle, and it ain't pretty, and I don't trust the result at all. Eventually I abandoned the project.

Edit: at some point I reckon some part of the ecosystem recognised some of these issues and hence Owncloud remade a large part of the fundamentals in Golang. It remains unknown to me whether this sorted things or not. All of these projects feel like they suffer badly from "overbuild".

Edit-edit: another layer to add to the mix is that the "overbuild" situation is probably largely what allows the hosting economy around these open source solutions to thrive since Nextcloud and co. are so over-engineered and badly documented that they -require- a dedicated sys-admin team to run well.

INTPenis 17 hours ago||
This is my theory as well. NC has grown gradually, almost in silos; every piece of it is some plugin they've imported from contributions at some point.

For example, the reason there's no cohesive common WebSocket bus for all those AJAX calls is that they all started out as separate plugins.

NC has gone full modularity and lost performance for it. What we need is a more focused and cohesive tool for document sharing.

Honestly I think today with IaC and containers, a better approach for selfhosting is to use many tools connected by SSO instead of one monstrosity. The old Unix philosophy, do one thing but do it well.

eYrKEC2 6 hours ago|||
Why do you need a common websocket bus when h2 interleaves all the HTTP requests over the same SSL tunnel?
rahkiin 15 hours ago|||
This still needs cohesive authorization and central file sharing and access rules across apps. And some central concept of projects to move all content away from people and into the org and roles
redrblackr 17 hours ago||
Two things:

1. Did you open a pull request back upstream with these basic patches? If you have order-of-magnitude speed improvements it would be awesome to share!

2. You definitely don't need an entire sysadmin team to run Nextcloud. At my work (a large organisation) there are three instances running for different parts/purposes, of which only one is run by more than one person, and I myself run both my personal instance and one for a nonprofit with ~100 people. It's really not much work after setup (and there are plenty of far more complicated systems to set up, trust me).

dingdingdang 11 hours ago||
1. There was no point, having thought about it a bit; a lot of the patches (in essence it was at most a handful) revolved around disabling features, which in turn could never have been upstreamed. An example was, as mentioned elsewhere in this comment section, the abysmal performance of the thumbnail generation feature: it never cached right, it never worked right, and even when it did it would absolutely kill listings of larger folders of media. This was basically hacked out and partially replaced with much simpler generation for images alone, and suddenly the file manager worked again for clients.

2. Guess that's debatable, or maybe even skill dependent (mea culpa), and also largely a question of how comfortable one is with systems that cannot be reasoned about cleanly (similar to TFA I just could not stand the bloat, it made me feel more than mildly unwell working with it). Eventually it was GDPR reqs that drove us towards the big G across multiple domains.

On another note, it strikes me how the attempts at re-generating folder listings online really are Sisyphean work; there should be a clean way to fold multiuser/access tokens into the filesystems of phones/PCs/etc. The closest pseudo-example at the moment I guess is classic Google Drive, but of course it would need gating and security on the OS side of things that works to a standard across multiple ecosystems (Apple, MS, Android, iPhone, Linux etc.)... yeeeeah, better keep polishing that HTML ball of spaghetti I guess ;)

madeofpalk 18 hours ago||
I don't think this article actually does a great job of explaining why Nextcloud feels slow. It shows lots of big numbers for MBs of JavaScript being downloaded, but how does that actually impact the user experience? Is the "slow" Nextcloud just sitting around waiting for these JS assets to load and parse?

From my experience, this doesn't meaningfully impact performance. Performance problems come from "accidentally quadratic" logic in the frontend, poorly optimised UI updates, and too many API calls.

hamburglar 18 hours ago||
It downloads a lot of JavaScript, it decompresses a lot of JavaScript, it parses a lot of JavaScript, it runs a lot of JavaScript, it creates a gazillion onFoundMyNavel event callbacks which all run JavaScript, it does all manner of uncontrolled DOM-touching while its millions of script fragments do their thing, it xhr’s in response to xhrs in response to DOM content ready events, it throws and swallows untold exceptions, has several dozen slightly unoptimized (but not too terrible) page traversals, … the list goes on and on. The point is this all adds up, and having 15MB of code gives a LOT of opportunity for all this to happen. I used to work on a large site where we would break out the stopwatch and paring knife if the homepage got to more than 200KB of code, because it meant we were getting sloppy.
bob1029 17 hours ago|||
15+ megabytes of executable code begins to look quite insane when you start to take a gander at many AAA games. You can produce a non-trivial Unity WebGL build that fits in <10 megabytes.
hamburglar 17 hours ago|||
It’s the kind of code size where you analyze it and find 13 different versions of jquery and a hundred different bespoke console.log wrappers.
72deluxe 17 hours ago|||
Yes and Windows 3.11 came on 6 1.44MB floppy disks. Modern software is so offensive.
hamburglar 17 hours ago||
Windows 3.11 also wasn’t shipped to you over a cellular connection when you clicked on it. If it were, 6x1.44MB would have been considered quite unacceptable.
nikanj 12 hours ago|||
But at least they’re not prematurely optimizing
shermantanktop 18 hours ago||
Agreed. Plus if it truly downloads all of that every time, something has gone wrong with caching.

Overeager warming/precomputation of resources on page load (rather than on use) can be a culprit as well.

hamburglar 17 hours ago||
Relying on cache to cover up a 15MB JavaScript load is a serious crutch.
shermantanktop 11 hours ago||
Oh totally, but - normal caching behavior would lead to different results than reported in the article. It would impact cold-start scenarios, not every page load. So something else is up.
xandrius 12 hours ago||
I know people here don't like it when someone answers complaints about OSS projects with "go fix it then", but seeing the comment section here, it's hard not to at least think it.

About 50-100 people saying that they know exactly why NC is slow, bloated and bad, but failing to a) point out a valid alternative, or b) act and do something about it.

I'm going to say that I love NC despite its slow performance. I own my storage, I can do Google Drive stuff without selling my soul (aka data) to the devil and I can go patch up stuff, since the code is open.

Is downloading lots of JS and waiting a few seconds bad? Yes. But did I pay for any of it? No. Am I the product as a result of choosing NC? Also no.

Having a basic file system as a Dropbox alternative, and being able to go "shop" for extensions and extra tools, feels so COOL and fun. Do I want to own my password manager? Bam, covered. Do I want to centralise calendar, mail and kanban into one? Bam, covered.

The codebase is AGPL, it installs easily, and you don't need to do surgery with every new update.

I've been running it without hiccups for over 6 years now.

Would I love it to be as fast and smooth as a platform developed by an evil tech behemoth that wants to swallow everyone's data? Of course. Am I happy NC exists? Yes!

And if you got this far, dear reader, give it a try. It's free and you can delete it in a second but if you find something to improve and know how, go help, it helps us all :)

aeldidi 3 hours ago|
Yep, this sums it up perfectly for me. I tend to stay away from the extra stuff since the quality is hit or miss (more often hit than miss to be fair), but really there’s something special about having something like it available. I think as a freely available package Nextcloud is immensely valuable to me. I never say anything bad about it without mentioning that in the same breath nowadays.
RiverCrochet 19 hours ago||
I've played around with many self-hosted file manager apps. My first one was Ajaxplorer which then became Pydio. I really liked Pydio but didn't stick with it because it was too slow. I briefly played with Nextcloud but didn't stick with it either.

Eventually I ran into FileRun and loved it, even though it wasn't completely open source. FileRun is fast, worked on both desktop and mobile via browser nicely, and I never had an issue with it. It was free for personal use a few years ago, and unfortunately is not anymore. But it's worth the license if you have the money for it.

I tried setting up SeaFile but I had issues getting it working via a reverse proxy and gave up on it.

I like copyparty (https://github.com/9001/copyparty) - really dead simple to use and quick like FileRun - but the web interface is not geared towards casual users. I also miss FileRun's "Request a file" feature, which worked very nicely if you just wanted someone to upload a file to you and then be done.

accrual 17 hours ago||
On the topic of self-hosted file manager apps, I've really liked "filebrowser". Pair it with Syncthing or another sync daemon and you've got a minimal self-hosted Dropbox clone.

* https://github.com/filebrowser/filebrowser

* https://github.com/hurlenko/filebrowser-docker

iN7h33nD 4 hours ago||
Same. Just recently switched over to filebrowser-quantum. Can't quite endorse it yet, but it's promising so far (setup in Docker Compose was a bit like whack-a-mole, but so was the original's). https://github.com/gtsteffaniak/filebrowser
tripflag 15 hours ago|||
> I also miss Filerun's "Request a file" feature which worked very nicely if you just wanted someone to upload a file to you and then be done.

With the disclaimer that I've never used Filerun, I think this can be replicated with copyparty by means of the "shares" feature (--shr). That way, you can create a temporary link for other people to upload to, without granting access to browse or download existing files. It works like this: https://a.ocv.me/pub/demo/#gf-bb96d8ba&t=13:44

t_mann 17 hours ago||
Copyparty can't (and doesn't want to) replace Nextcloud for many use cases because it supports one-way sync only. The readme is pretty clear about that. I'm toying with the idea of combining it with Syncthing (for all those devices where I don't want to do a full sync); does anybody have experience with that? I've seen some posts that it can lead to extreme CPU usage when combined with other tools that read/write/index the same folders, but nothing specifically about Syncthing.
tripflag 14 hours ago||
Combining copyparty with Syncthing is not something I have tested extensively, but I know people are doing this, and I have yet to hear about any related issues. It's also a use case I want to support, so if you /do/ hit any issues, please give word! I've briefly checked how Syncthing handles the symlink-based file deduplication, and it seemed to work just fine.

The only precaution I can think of is that copyparty's .hist folder should probably not be synced between devices. So if you intend to share an entire copyparty volume, or a folder which contains a copyparty volume, then you could use the `--hist` global-option or `hist` volflag to put it somewhere else.

As for high CPU usage, this would arise from copyparty deciding to reindex a file when it detects that the file has been modified. This shouldn't be a concern unless you point it at a folder which has continuously modifying files, such as a file that is currently being downloaded or otherwise slowly written to.

aeldidi 14 hours ago||
Nextcloud is something I have a somewhat love-hate relationship with. On one hand, I've used Nextcloud for ~7 years to back up and provide access to all of my family's photos. We can look at our family pictures and memories from any computer, and it's all private and runs mostly without any headaches.

On the other hand, Nextcloud is so far from being something like Google Docs, and I would never recommend it as a general replacement to someone who can't tolerate "jank", for lack of a better word. There are so many small papercuts you'll notice when using it as a power user. Right off the top of my head, uploading large files is finicky, and no amount of web server config tinkering gets it to always work; thumbnail loading is always spotty, and it's significantly slower than it needs to be (I'm talking orders of magnitude).

With all that said, I'm so grateful for Nextcloud since I don't have a replacement, and I would prefer not having all our baby and vacation pictures feeding some big corporation's AI. We really ought to have a safe, private place to store files in 2025 that the average person can wrap their head around. I only wish my family took better advantage of it, since I'm essentially providing them with unlimited storage.

realaaa 10 hours ago|
Is Immich that thing? I've played with it, but didn't really dig deeper.

They claim it can do it all when it comes to pictures, videos, etc.

aeldidi 3 hours ago|||
I use my Nextcloud as a general file storage thing, I just emphasized the photo aspect because that’s my family’s main use case.

I have heard of Immich though, perhaps I owe it an honest try someday.

nyadesu 9 hours ago||||
Immich is actually usable: thumbnail previews work without any prior setup, and the mobile app is pretty responsive.

Unlike with Nextcloud, I feel I can rely on it and upgrade without issues.

aeldidi 3 hours ago||
That sounds really promising, maybe my family would be better suited to something like that.

I will say though, Nextcloud is almost painless when it comes to management. I’ve had one or two issues in the past, but their “all in one” docker setup is pretty solid, I think. It’s what I’ve been using for the last year or so.

jacooper 9 hours ago|||
Immich is way better if all you need is photo storage. It's Google photos level.
poisonborz 2 hours ago||
I agree with the criticism, but I wonder why there are no alternatives. Nextcloud, for what most people use it for, is a rather simple, straightforward collection of apps, yet not even the individual apps have alternatives. E.g. show me a good self-hostable web calendar; it doesn't exist.

Why does Nextcloud, or even just parts of it, not have dozens of alternatives?

bogwog 18 hours ago||
Nextcloud is bloated and slow, but it works and is reliable. I've been running a small instance in a business setting with around 8 daily users for many years. It is rock solid and requires zero maintenance.

But people rarely use the web apps. Instead, it's used more like a NAS with the desktop sync client being the primary interface. Nobody likes the web apps because they're slow. The Windows desktop sync client has a really annoying update process, but other than that is excellent.

I could replace it with a traditional NAS, but the main feature keeping me there is an IMAP authentication plugin. This allows users to sign in with their business email/password. It works so well and makes it so much easier to manage user accounts, revoke access, do password resets, etc.
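
For anyone wondering what that boils down to: the idea is simply to accept a login only if the mail server accepts the same credentials. A rough sketch (not the plugin's actual code; this uses the imapflow library, with host and port as placeholders):

    import { ImapFlow } from "imapflow";

    // Accept a login if (and only if) the mail server accepts the same credentials.
    async function imapLoginOk(user: string, pass: string): Promise<boolean> {
      const client = new ImapFlow({
        host: "mail.example.com", // placeholder: the business mail server
        port: 993,
        secure: true,
        auth: { user, pass },
        logger: false,
      });
      try {
        await client.connect(); // throws if the credentials are rejected
        await client.logout();
        return true;
      } catch {
        return false;
      }
    }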

imcritic 18 hours ago|
> Nobody likes the web apps because they're slow.

Web apps don't have to be slow. I prefer web apps over system apps, as I don't have to install extra programs into my system and I have more control over those apps:

- a service decides it's a good idea to load some tracking stuff from 3rd-party? I just uMatrix block it;

- a page has an unwanted element? I just uBlock block it;

- a page could have a better look? I just userstyle style it;

- a page is missing something that could be added on the client side? I just userscript script it (toy sketch below)
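
As a toy example of that last one (site, selector and button are all invented, just to show the shape):

    // ==UserScript==
    // @name     Add a missing button
    // @match    https://cloud.example.com/*
    // @grant    none
    // ==/UserScript==
    (() => {
      // Everything below is made up for the example: pick your own selector and logic.
      const toolbar = document.querySelector(".files-toolbar");
      if (!toolbar) return;
      const btn = document.createElement("button");
      btn.textContent = "Download all";
      btn.addEventListener("click", () => alert("wire up whatever the page is missing"));
      toolbar.appendChild(btn);
    })();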

Jaxan 16 hours ago||
Do you also prefer a web-based file browser? My main use for Nextcloud is files, and a desktop sync client that integrates with the OS is crucial.
tripplyons 19 hours ago|
I once discovered and reported a vulnerability in Nextcloud's web client that was due to them including an outdated version of a JavaScript-based PDF viewer. I always wondered why they couldn't just use the browser's PDF viewer. I made $100, which was a large amount to me as a 16 year old at the time.

Here is a blog post I wrote at the time about the vulnerability (CVE-2020-8155): https://tripplyons.com/blog/nextcloud-bug-bounty

rahkiin 18 hours ago|
I recently needed to show a PDF file inside a div in my app. All I wanted was to show it and make it scrollable. The file comes from a fetch() with authorization headers.

I could not find a way to do this without pdf.js.

silverwind 10 hours ago|||
https://www.npmjs.com/package/pdfobject works well as a wrapper around the <object> tag. No mobile support though.
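
Usage is about as small as it gets; a minimal sketch (the container id is whatever element you have, and I haven't tried it against auth-protected URLs):

    import PDFObject from "pdfobject";

    // Embeds the PDF via an <object> tag inside the target element;
    // shows a fallback link where inline PDFs aren't supported.
    PDFObject.embed("/files/report.pdf", "#pdf-container");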
rahkiin 17 hours ago||||
This made me try it once more, and I got something to work with some Blobs, resource URLs, sanitization and iframes.

So I guess it is possible
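
Roughly this shape, in case it helps anyone (endpoint and token are placeholders; the blob URL should be revoked once the viewer goes away):

    // Fetch the PDF with the auth header, then hand the browser's own viewer a blob URL.
    async function showPdf(container: HTMLElement, url: string, token: string): Promise<string> {
      const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
      if (!res.ok) throw new Error(`PDF fetch failed: ${res.status}`);

      // Force the PDF MIME type so the built-in viewer kicks in regardless of response headers.
      const blob = new Blob([await res.arrayBuffer()], { type: "application/pdf" });
      const blobUrl = URL.createObjectURL(blob);

      const frame = document.createElement("iframe");
      frame.src = blobUrl;
      frame.style.width = "100%";
      frame.style.height = "100%";
      container.appendChild(frame);

      return blobUrl; // call URL.revokeObjectURL(blobUrl) when the viewer is closed
    }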

tripplyons 17 hours ago||
Yeah, blobs seem like the right way to do it.
rahkiin 15 hours ago||
There does not seem to be a way to configure anything though. It looks quite bad with the default zoom level and the toolbar…
moi2388 18 hours ago|||
The HTML <object> tag can show a PDF file by default. Just fetch it and pass the source there.

What is the problem with that exactly in your case?

jrochkind1 16 hours ago||
I think it can't do that on iOS? Don't know if that is the relevant thing in the choice being discussed though. Not sure about Android.