Posted by rpgbr 1 day ago

Why Nextcloud feels slow to use (ounapuu.ee)
450 points | 341 comments
rpgbr 1 day ago|
I wonder how bewCloud [1] stacks up against NextCloud, since it's meant to be a “modern and simpler alternative” to it. Has anyone tested it?

[1] https://bewcloud.com/

skeptrune 1 day ago||
I know that this is supposed to be targeted at NextCloud in particular, but I think it's a good standalone "you should care about how much JavaScript you ship" post as well.

What frustrates me about modern web development is that everyone is focused on making it work much more than on making sure it works fast. Then when you push back, the response is always something like "we need to not spend time over-optimizing."

Sent this straight to the team slack haha.

cbondurant 1 day ago||
I've used nextcloud for close to I think 8 years now as a replacement for google drive.

However my need for something like google drive has reduced massively, and nextcloud continues to be a massive maintenance pain due to its frustratingly fast release cadence.

I don't want to have to log into my admin account and baby it through a new release and migration every four months! Why aren't there any LTS branches? The amount of admin work that nextcloud requires only makes sense for when you legitimately have a whole group of people with accounts that are all utilizing it regularly.

This is honestly the kick in the pants I need to find a solution that actually fits my current use-case. (I just need to sync my fuckin keepass vault to my phone, man.) Syncthing looks promising with significantly less hassle...

caspar 21 hours ago||
Been using syncthing with keepass(X/XC) for probably half a decade now and it works great, especially since KeepassXC has a great built-in merge feature for the rare cases that you get conflicts from modifying your vault on different clients before they sync.

The only major point of friction with syncthing is that you should designate one almost-always-on device as "introducer" for every single one of your devices, so that it will tell all your devices whenever it learns about a new device. Otherwise whenever you gain a device (or reinstall etc) then you have to go to N devices to add your new device there.
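
For reference, the introducer role is a per-device setting in Syncthing: the "Introducer" checkbox when editing a device in the GUI, or an attribute on the device element in config.xml. A sketch (the device ID and name below are placeholders):

```xml
<!-- Syncthing config.xml fragment; id and name are made up for illustration -->
<device id="AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG-HHHHHHH"
        name="home-server" introducer="true">
    <address>dynamic</address>
</device>
```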

Oh, and you can't use syncthing to replicate things between two dirs on the same computer - which isn't a big deal for the keepass usecase and arguably is more of a rsync+cron task anyway but good to be aware of.

jw_cook 1 day ago|||
The linuxserver.io image for Nextcloud requires considerably less babysitting for upgrades: https://docs.linuxserver.io/images/docker-nextcloud

As long as you only upgrade one major version at a time, it doesn't require putting the server in maintenance mode or using the occ cli.
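
For reference, a minimal compose service for that image looks something like this (a sketch based on the linked docs; the PUID/PGID values, paths, and tag are placeholders to adapt):

```yaml
services:
  nextcloud:
    image: lscr.io/linuxserver/nextcloud:latest
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - ./config:/config   # app config and web server settings
      - ./data:/data       # user files
    ports:
      - 443:443
    restart: unless-stopped
```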

xandrius 1 day ago|||
Been running NC on my home server and basically maybe update it once a year or so? Even less probably, so definitely not a must to update every time. Plus via snap it's pretty simple.
tracker1 1 day ago||
Might also consider Vaultwarden/Bitwarden as a self-host alternative. Yeah it's client-server... that said, been pretty happy as a user.
zeppelin101 1 day ago||
The major shortcoming of NextCloud, in my opinion, is that it's not able to do sync over LAN. Imagine wanting to synchronize 1TB+ of data and not being able to do so over a 1 Gbps+ local connection, when another local device has all the necessary data. There is some workaround involving "split DNS", but I haven't gotten around to it. Other than that, I thought NC was absolutely fantastic.
jw_cook 1 day ago||
Check if your router has an option to add custom DNS entries. If you're using OpenWRT, for example, it's already running dnsmasq, which can do split DNS relatively easily: https://blog.entek.org.uk/notes/2021/01/05/split-dns-with-dn...

If not, and you don't want to set up dnsmasq just for Nextcloud over LAN, then DNS-based adblock software like AdGuard Home would be a good option (as in, it would give you more benefit for the amount of time/effort required). With AdGuard, you just add a line under Filters -> DNS rewrites. PiHole can do this as well (it's been a while since I've used it, but I believe there's a Local DNS settings page).

Otherwise, if you only have a small handful of devices, you could add an entry to /etc/hosts (or equivalent) on each device. Not pretty, but it works.
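
To make the dnsmasq option concrete: a split-DNS override is a single line (the hostname and address below are placeholders for your Nextcloud domain and its LAN IP):

```
# dnsmasq: answer with the LAN address instead of the public one
address=/cloud.example.com/192.168.1.10
```

The /etc/hosts equivalent on each client would be `192.168.1.10 cloud.example.com`.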

zeppelin101 1 day ago||
That's a good tip. I had my local self-hosting phase during covid, but if I ever come back to it, I'll try this.
accrual 1 day ago|||
I had a similar issue with a public game server that required connecting through the WAN even if clients were local on the LAN. I considered split DNS (resolving the name differently depending on the source) but it was complicated for my setup. Instead I found a one-line solution on my OpenBSD router:

    pass in on $lan_if inet proto tcp to (egress) port 12345 rdr-to 192.168.1.10
It basically says "pass packets from the LAN interface towards the WAN (egress) on the game port and redirect the traffic to the local game server". The local client doesn't know anything happened, it just worked.
DrammBA 1 day ago|||
> The major shortcoming of NextCloud, in my opinion, is that it's not able to do sync over LAN.

That’s an interesting way to describe a lack of configuration on your part.

Imagine me saying: "The major shortcoming of Google Drive, in my opinion, is that it's not able to sync files from my phone. There is some workaround involving an app called 'Google Drive' that I have to install on my phone, but I haven't gotten around to it. Other than that, Google Drive is absolutely fantastic."

zeppelin101 1 day ago||
I don't know why the sarcasm is so necessary. I very much enjoyed Nextcloud and I proudly ran it for the better part of a year. I even ran various NC-ecosystem apps, such as the Office ones. However, my objective was to try it out from the standpoint of regular self-hosting. I wanted to contrast the 'out-of-the-box' experience to Dropbox, which I had been using for many years up to that point. Yes, one was centrally hosted, while the other was self-hosted, but still, that was the experiment I was running. So I'm sorry if I didn't live up to your standards of what a user should be doing to their software, but I sure had lots of fun self-hosting tons of software at that time.
DrammBA 1 day ago||
Not sure why you took it so personally. I was simply pointing out that if you don't configure a feature, then that feature obviously won't work: phone sync for Google Drive won't work if you don't install the Google Drive app, and LAN access for Nextcloud won't work if you don't set up LAN access.
immibis 1 day ago||
Except your phone comes with Google Drive and syncs things you don't want it to, so Google can scan your life better.
DrammBA 18 hours ago||
Last time I checked my iPhone didn't come with Google drive
redrblackr 1 day ago|||
Or just use IPv6!

You could also upload directly to the filesystem and then run occ files:scan, or if the storage is mounted as external it just works.

Another method is to point your machine's /etc/hosts (or equivalent) at the local IP of the instance (if the device is only on LAN you can keep the entry, otherwise remove it after the large transfer).

Also, your router shouldn't send traffic addressed to itself out over your ISP's connection; it just loops it back internally, so it never actually leaves your network. Running over LAN only helps if your switch is faster than your router.

zeppelin101 1 day ago||
Good to know!
tfvlrue 1 day ago|||
> it's not able to do sync over LAN

I'm curious what you mean by this. I've never had trouble syncing files with the Nextcloud client, inside or outside of my LAN. I didn't do anything special to make it work internally. It's definitely not the fastest thing ever, but it works pretty seamlessly in my experience.

Jaxan 1 day ago||
I use it on LAN without a problem (using mDNS). Sure it runs with self signed certificates, but that’s ok with me.
dugite-code 1 day ago||
In my experience the bottleneck for any Nextcloud install is typically the database.

Unlike many other projects, it's surprisingly easy to get into a situation where the DB is throttled by IO issues on a single-box machine. Having the DB on a separate drive from the storage and logging really speeds things up.

That, and properly setting up background tasks like image preview generation, Redis, etc.
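
The caching/locking side of that is a few lines in config.php; a sketch assuming a local Redis and APCu (host and port are the usual defaults, adjust to your setup):

```php
// config/config.php additions (sketch)
'memcache.local' => '\OC\Memcache\APCu',     // per-process cache
'memcache.locking' => '\OC\Memcache\Redis',  // transactional file locking
'redis' => [
  'host' => 'localhost',
  'port' => 6379,
],
```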

andai 1 day ago||
For reference, 20 MB is three hundred and thirteen Commodores.
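
That figure works out if you take decimal megabytes against the C64's nominal 64K of RAM; a quick back-of-envelope check:

```javascript
// 20 MB (decimal) of JavaScript vs. a Commodore 64's 64,000 bytes of RAM
const payload = 20_000_000;
const c64Ram = 64_000;
console.log(Math.ceil(payload / c64Ram)); // 313
```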
robin_reala 1 day ago||
The complete Doom 2, including all graphics, maps, music and sound effects, shipped on 4 floppies, totalling 5.76MB.
zdragnar 1 day ago||
The original Doom 2 rendered 64,000 pixels (320x200). 4K UHD monitors now show 8.3 million pixels.

YMMV.

Of course, Doom 2 is full of Carmack shenanigans to squeeze every possible ounce of performance out of every byte, written in hand optimized C and assembly. Nextcloud is delivered in UTF-8 text, in a high level scripting language, entirely unoptimized with lots of low hanging fruit for improvement.

Yie1cho 1 day ago|||
Yes, but why isn't it optimised? Not as extreme as Doom had to be, but a bit better? Especially the low-hanging fruit.

This is why I think there's another version for customers who are paying for it, with tuning, optimization, whatever.

trashb 1 day ago||||
Sure, but I doubt there is more image data in the delivered Nextcloud payload than in Doom 2; games famously need textures, where a website usually needs mostly vector and CSS-based graphics.

Actually, Carmack did squeeze every possible ounce of performance out of DOOM, but that does not always mean optimizing for size. If you want to see a project optimized for size, check out ".kkrieger" by ".theprodukkt", which fits a 3D shooter in 97,280 bytes.

You know how many characters 20MB of UTF-8 text is, right? If we're talking about JavaScript, it's probably mostly ASCII, so quite close to 20 million characters. At a wild estimate of 80 characters per line, that would be 250,000 lines of code.
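
A sanity check on that estimate (assuming roughly one byte per character for mostly-ASCII JavaScript and 80-character lines):

```javascript
// 20 million bytes of mostly-ASCII JS ≈ 20 million characters
const chars = 20_000_000;
const lines = chars / 80; // wild estimate: 80 characters per line
console.log(lines); // 250000
```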

I personally think 20MB is outrageous for any website, webapp or similar. Especially if you want to offer a product to a wide range of devices on a lot of different networks. Reloading a huge chunk of that on every page load feels like bad design.

Developers usually take the modern convenience of a good network connection for granted; imagine using this on a slow connection, it would be horrid. Even in western "first world" countries there are still quite a few people connecting with outdated hardware or slow connections, and we often forget them.

If you are making any sort of webapp you ideally have to think about every byte you send to your customer.

hamburglar 1 day ago||||
I mean, if you’re going to include carmack’s relentless optimizer mindset in the description, I feel like your description of the NextCloud situation should probably end with “and written by people who think shipping 15MB of JavaScript per page is reasonable.”
ekjhgkejhgk 1 day ago|||
You know apps don't store pixels, right? So why are you counting pixels?
zdragnar 1 day ago||
A single picture that looks decent on a modern screen, taken from a modern camera, can easily be larger than the original Doom 2 binary.
ekjhgkejhgk 1 day ago||
You don't need pictures for a CRUD app. Should all be vectorial in any case.
mrweasel 1 day ago|||
The article suggests that it takes 14MB of JavaScript to render just the calendar. I doubt that all of my calendar events for 2025 combined are 14MB.
magicalhippo 1 day ago|||
Or the same number of 64k intros[1][2][3]...

[1]: https://www.youtube.com/watch?v=iXgseVYvhek

[2]: https://www.youtube.com/watch?v=ZWCQfg2IuUE

[3]: https://www.youtube.com/watch?v=4lWbKcPEy_w

chaostheory 1 day ago||
Sure, but what people leave out is that it's mostly C and assembly. That just isn't realistic anymore if you want the better developer experience that leads to faster feature rollout, better security, and better stability.

This is like when people reminisce about the performance of windows 95 and its apps while forgetting about getting a blue screen of death every other hour.

trashb 1 day ago|||
Exactly. JavaScript is a higher-level language with a lot of the required functionality built in. Compared to C, you would need to write far less actual code in JavaScript to achieve the same result, for example graphics or math routines. Therefore it's crazy that it's that big.
tracker1 1 day ago||||
I think it's a double-edged sword of Open-Source/FLOSS... some problems are hard and take a lot of effort. One example I consistently point to is core component libraries... React has MUI and Mantine, and I'm not familiar with any open-source alternatives that come close. As a developer, if there was one for Leptos/Yew/Dioxus, I'd have likely jumped ship to Rust+WASM. They're all fast enough, with different advantages and disadvantages.

All said... I actually like TypeScript and React fine for teams of developers... I think NextCloud likely has coordination issues that go beyond the language or even libraries used.

magicalhippo 1 day ago|||
Windows 2000 was quite snappy on my Pentium 150, and pretty rock solid. It was when I stopped being good at fixing computers because it just worked, so I didn't get much practice.
tracker1 1 day ago|||
I did get a BSOD from a few software packages in Win2k, but it was fewer and much farther between than Win9x/me... I didn't bump to XP until after SP3 came out... I also liked Win7 a lot. I haven't liked much of Windows since 7 though.

Currently using Pop + Cosmic.

chaostheory 1 day ago|||
Win2000 is in the same class as Win95 despite being slightly more stable. It still locked up and crashed more frequently than modern software.
magicalhippo 1 day ago||
Then you did something special. For me Win2k was at least three orders of magnitude more stable, and based on my buddies that was not exceptional.
esafak 1 day ago||
Does anyone know what they are doing wrong to create such large bundles? What is the lesson here?
bastawhiz 1 day ago||
Not paying attention.

1. Indiscriminate use of packages when a few lines of code would do.

2. Loading everything on every page.

3. Poor bundling strategy, if any.

4. No minification step.

5. Polyfilling for long-dead, obsolete browsers.

6. Having multiple libraries that accomplish the same thing.

7. Using tools and then not doing any optimization at all (like using React and not enabling React Runtime).

Arguably things like an email client and file storage are apps and not pages so a SPA isn't unreasonable. The thing is, you don't end up with this much code by being diligent and following best practices. You get here by being lazy or uninformed.

nullgeo 1 day ago||
What is React runtime? I looked it up and the closest thing I came across is the newly announced React compiler. I have a vested interest in this because currently working on a micro-SaaS that uses React heavily and still suffering bundle bloat even after performing all the usual optimizations.
bastawhiz 1 day ago|||
When you compile JSX to JavaScript, it produces a series of function calls representing the structure of the JSX. In a recent major version, React added a new set of functions which are more efficient at both runtime and during transport, and don't require an explicit import (which helps cut down on unnecessary dependencies).
silverwind 1 day ago||
You mean the automatic runtime introduced in 2020. It does not have any impact on performance; it's purely a developer-UX improvement.
bastawhiz 1 day ago||
It improves the bundle size for most apps because the imported functions can be minified better. Depending on your bundler, it can avoid function calls at runtime.
adzm 1 day ago|||
React compiler is awesome for minimizing unnecessary renders but doesn't help with bundle size; might even make it worse. But in my experience it really helps with runtime performance if your code was not already highly optimized.
eMerzh 1 day ago||
I think part of the issue is that Nextcloud tries to stay compatible with any managed/shared hosting.

They also treat every "module"/"app", whatever you call it, as a completely distinct SPA without providing much of an SDK/framework, which means each app adds its own deps, manages its own build, etc.

Also don't forget that an app can even be part of a screen, not the whole thing.

janikvonrotz 1 day ago||
Most people here seem to have experienced a Nextcloud version from 3 years ago.

In version 31 the frontend has been rewritten in Vue and with Nextcloud Office aka Collabora Online you get much more than a shitty GDocs.

Of course some apps like the calendar have not been rewritten.

Most readers do not understand what it takes to rewrite the frontend for an entire ecosystem.

estimator7292 1 day ago||
Like most of us I think, I really, really wanted to like nextcloud. I put it on an admittedly somewhat slow dual Xeon server, gave it all 32 threads and many, many gigabytes of ram.

Even on a modern browser on a brand new leading-edge computer, it was completely unusably slow.

Horrendous optimization aside, NC is also chasing the current fad of stripping out useful features and replacing them with oceans of padding. The stock photos app doesn't even have the ability to sort by date! That's been table stakes for a photo viewer since the 20th goddamn century.

When Windows Explorer offers a more performant and featureful experience, you've fucked up real bad.

I would feel incredibly bad and ashamed to publish software in the condition that NextCloud is in. It is IMO completely unacceptable.

buibuibui 1 day ago|
I find the Nextcloud client really buggy on the Mac, especially the VFS integration. The file syncing is also really slow. I switched back to P2P file syncing via Syncthing and Resilio Sync out of frustration.