What frustrates me about modern web development is that everyone is focused on making it work much more than on making sure it works fast. Then when you push back, the response is always something like "we need to not spend time over-optimizing."
Sent this straight to the team slack haha.
However, my need for something like Google Drive has dropped off massively, and Nextcloud continues to be a massive maintenance pain due to its frustratingly fast release cadence.
I don't want to have to log into my admin account and baby it through a new release and migration every four months! Why aren't there any LTS branches? The amount of admin work that Nextcloud requires only makes sense when you legitimately have a whole group of people with accounts who are all using it regularly.
This is honestly the kick in the pants I need to find a solution that actually fits my current use-case. (I just need to sync my fuckin keepass vault to my phone, man.) Syncthing looks promising with significantly less hassle...
The only major point of friction with Syncthing is that you should designate one almost-always-on device as the "introducer" for every one of your devices, so that it tells the rest whenever it learns about a new device. Otherwise, whenever you gain a device (or reinstall, etc.), you have to go to N devices to add the new one there.
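For reference, this is roughly what that looks like in Syncthing's config.xml (the device ID, name, and file location here are made up; you can also just tick "Introducer" when adding the device in the web GUI):

```xml
<!-- config.xml — location varies by platform, e.g. ~/.config/syncthing/ -->
<!-- Hypothetical device ID and name; introducer="true" is the relevant bit -->
<device id="AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG-HHHHHHH"
        name="home-server" introducer="true">
    <address>dynamic</address>
</device>
```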
Oh, and you can't use Syncthing to replicate things between two dirs on the same computer. That isn't a big deal for the KeePass use case, and it's arguably more of an rsync+cron task anyway, but it's good to be aware of.
As long as you only upgrade one major version at a time, it doesn't require putting the server in maintenance mode or using the occ cli.
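For the multi-version case, the manual dance looks roughly like this (a sketch, assuming a typical install: the paths and the www-data user will vary, and the upgrade step has to be repeated once per major version):

```
# Run from the Nextcloud root as the web server user (hypothetical paths/user)
sudo -u www-data php occ maintenance:mode --on    # stop clients mid-upgrade
sudo -u www-data php occ upgrade                  # run DB/app migrations
sudo -u www-data php occ maintenance:mode --off
```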
If not, and you don't want to set up dnsmasq just for Nextcloud over LAN, then DNS-based adblock software like AdGuard Home would be a good option (as in, it would give you more benefit for the amount of time/effort required). With AdGuard, you just add a line under Filters -> DNS rewrites. Pi-hole can do this as well (it's been a while since I've used it, but I believe there's a Local DNS settings page).
Otherwise, if you only have a small handful of devices, you could add an entry to /etc/hosts (or equivalent) on each device. Not pretty, but it works.
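Something like this, with a made-up hostname and IP standing in for whatever your instance actually uses:

```
# /etc/hosts — resolve the Nextcloud hostname to its LAN address
192.168.1.10    cloud.example.com
```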
pass in on $lan_if inet proto tcp to (egress) port 12345 rdr-to 192.168.1.10
It basically says "pass packets from the LAN interface towards the WAN (egress) on the game port and redirect the traffic to the local game server". The local client doesn't know anything happened, it just worked.

That's an interesting way to describe a lack of configuration on your part.
Imagine me saying: "The major shortcoming of Google Drive, in my opinion, is that it's not able to sync files from my phone. There is some workaround involving an app called 'Google Drive' that I have to install on my phone, but I haven't gotten around to it. Other than that, Google Drive is absolutely fantastic."
You could also upload directly to the filesystem and then run occ files:scan, or if the storage is mounted as external it just works.
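If it helps, the scan command looks like this (the paths, the www-data user, and the "alice" username are assumptions for illustration):

```
# Run from the Nextcloud root as the web server user (hypothetical paths/user)
sudo -u www-data php occ files:scan --all     # rescan all users
sudo -u www-data php occ files:scan alice     # or just one user
```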
Another method is to point your machine's /etc/hosts (or equivalent) at the local IP of the instance (if the device is LAN-only you can keep the entry; otherwise remove it after the large transfer).
Now, your router shouldn't send traffic addressed to itself away; it just loops it back internally so it never has to go over your ISP's connection - so running over LAN only helps if your switch is faster than your router.
I'm curious what you mean by this. I've never had trouble syncing files with the Nextcloud client, inside or outside of my LAN. I didn't do anything special to make it work internally. It's definitely not the fastest thing ever, but it works pretty seamlessly in my experience.
Unlike many other projects, it's surprisingly easy to get into a situation where the DB is throttled by IO issues on a single-box machine. Having the DB on a separate drive from the storage and logging really speeds things up.
That, and properly setting up a lot of the background tasks like image preview generation, Redis, etc.
YMMV.
Of course, Doom 2 is full of Carmack shenanigans to squeeze every possible ounce of performance out of every byte, written in hand optimized C and assembly. Nextcloud is delivered in UTF-8 text, in a high level scripting language, entirely unoptimized with lots of low hanging fruit for improvement.
this is why i think there's another version for customers who are paying for it, with tuning, optimization, whatever.
Actually, Carmack did squeeze every possible ounce of performance out of DOOM; however, that does not always mean he was optimizing for size. If you want to see a project optimized for size, you might check out ".kkrieger" from ".theprodukkt", which accomplishes a 3D shooter in 97,280 bytes.
You know how many characters 20MB of UTF-8 text is, right? If we're talking about JavaScript, it's probably mostly ASCII, so quite close to 20 million characters. If we take a wild estimate of 80 characters per line, that would be 250,000 lines of code.
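The back-of-the-envelope math checks out (assuming 1 byte per character for mostly-ASCII text):

```shell
# 20 MB of mostly-ASCII text at ~80 characters per line
bytes=$((20 * 1000 * 1000))   # 20 MB
lines=$((bytes / 80))         # 1 byte per ASCII char, 80 chars per line
echo "$lines lines"           # 250000 lines
```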
I personally think 20MB is outrageous for any website, webapp or similar. Especially if you want to offer a product to a wide range of devices on a lot of different networks. Reloading a huge chunk of that on every page load feels like bad design.
Developers usually take the modern convenience of a good network connection for granted; imagine using this on a slow connection - it would be horrid. Even in Western "first world" countries there are still quite a few people connecting with outdated hardware or slow connections, and we often forget them.
If you are making any sort of webapp you ideally have to think about every byte you send to your customer.
This is like when people reminisce about the performance of windows 95 and its apps while forgetting about getting a blue screen of death every other hour.
All said... I actually like TypeScript and React fine for teams of developers... I think NextCloud likely has coordination issues that go beyond the language or even libraries used.
1. Indiscriminate use of packages when a few lines of code would do.
2. Loading everything on every page.
3. Poor bundling strategy, if any.
4. No minification step.
5. Polyfilling for long-dead, obsolete browsers.
6. Having multiple libraries that accomplish the same thing.
7. Using tools and then not doing any optimization at all (like using React and not enabling React Runtime).
Arguably things like an email client and file storage are apps and not pages so a SPA isn't unreasonable. The thing is, you don't end up with this much code by being diligent and following best practices. You get here by being lazy or uninformed.
They also treat every "module"/"app", whatever you call it, as a completely distinct SPA without providing much of an SDK/framework. Which means each app adds its own deps, manages its own build, etc...
Also, don't forget that an app can even be part of a screen, not the whole thing.
In version 31 the frontend has been rewritten in Vue, and with Nextcloud Office (aka Collabora Online) you get much more than a shitty GDocs.
Of course some apps like the calendar have not been rewritten.
Most readers do not understand what it takes to rewrite the frontend for an entire ecosystem.
Even on a modern browser on a brand new leading-edge computer, it was completely unusably slow.
Horrendous optimization aside, NC is also chasing the current fad of stripping out useful features and replacing them with oceans of padding. The stock photos app doesn't even have the ability to sort by date! That's been table stakes for a photo viewer since the 20th goddamn century.
When Windows Explorer offers a more performant and featureful experience, you've fucked up real bad.
I would feel incredibly bad and ashamed to publish software in the condition that NextCloud is in. It is IMO completely unacceptable.