Posted by roywashere 1 day ago
In my experience, solutions like Mailcow that involve multiple services and containers (SMTP, IMAP, Redis, SSO, webmail, Rspamd, and so on) work very well. I have extensive experience running these systems, performing backups, restoring data, and keeping the whole ecosystem updated.
Additionally, I had a positive experience setting up and running a self-hosted Sentry instance with Docker for a project that spanned several years, though that was a few years ago, so my impression might be somewhat outdated.
This lines up with my experience self-hosting a headless BI service. In "developer mode" it takes maybe 1 GB of RAM, but as soon as you want to go to production you need multiple servers with 4+ cores and 16 GB+ RAM, plus a strongly consistent shared file store. Add confusing docs to the mix, mysterious breakages, incomplete logging and admin APIs, and a reliance on community templates for things like the k8s deployment... it was very painful. I too gave up on self-hosting.
This is caused by short-sighted management who need to deliver and move on. "Long term" contradicts their business model; in this case, "long term" means "after product launch".
Maybe they should put in a system to monitor the Docker containers.
It might not do everything Sentry does, but it has definitely helped with tracking down some issues, even production ones, and it runs as a fairly manageable setup (compared to how long even Sentry's self-hosted Docker Compose file is).
What’s more, if you want, you can even use regular PostgreSQL as the backing data store (it might not be quite as efficient as Elasticsearch for a metrics use case, but it also doesn’t eat your RAM like crazy).
1: yes, I'm a huge k8s fanboi and yes I long every day for them to allow me to swap out etcd for something sane
Personally, no hate towards their BanyanDB, but after getting burnt by OrientDB in Sonatype Nexus, I very much prefer more widespread options.
- error happens, and can be attributed to release 1.2.3 (see the rough SDK sketch below)
- every subsequent time that error happens to a different user, it can track who was affected by it, without opening a new error report
- your project can opt in to accepting end-user feedback on errors: "please tell us what you were doing when this exploded, or feel free to rant and rave, we read them all"
- it knows from the stack trace that the error is in src/kaboom/onoz.py line 55
- onoz.py:55 was last changed by claude@example.com last week, in PR #666
- sentry can comment upon said PR to advise the reviewers of the bad outcome
- sentry can create a Jira with the relevant details
- claude.manager@example.com can mark the bug as "fixed in the next release", which will cause sentry to suppress chirping about it until it sees release 1.2.4
- if it happens again it will re-open the prior error report, marking it as a regression
Unless you know something I don't, Grafana does *ABSOLUTELY NONE* of that
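For concreteness, here is a minimal sketch of the application-side half of that workflow using Sentry's Python SDK (sentry_sdk). The DSN, release string, user details, and the kaboom() function are placeholders I made up; the PR comments, Jira tickets, and "fixed in next release"/regression handling described above are driven by Sentry's server-side integrations and UI, not by anything in this snippet.

    # Minimal sketch: placeholder DSN, release, and user; not a full setup.
    import sentry_sdk

    sentry_sdk.init(
        dsn="https://publicKey@o0.ingest.sentry.io/0",  # placeholder DSN
        release="my-app@1.2.3",    # lets Sentry attribute events to release 1.2.3
        environment="production",
    )

    # Attach the current user so repeat occurrences of the same error are counted
    # against affected users instead of opening a new error report each time.
    sentry_sdk.set_user({"id": "42", "email": "user@example.com"})

    def kaboom():
        # Hypothetical stand-in for the code at src/kaboom/onoz.py:55.
        raise RuntimeError("onoz")

    try:
        kaboom()
    except Exception as exc:
        # Grouped with earlier occurrences by stack trace; release-based
        # resolution and regression tracking then happen on the Sentry side.
        sentry_sdk.capture_exception(exc)
        raise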
The argument that you have to read a sh script doesn't make sense to me. Are you gonna read the source code of every piece of software referenced in that script, or of anything else you download, too? No? Then what's the difference with a bash script? At the end of the day, both can do damage.
Helm is a huge pain in the butt if you have mitigation obligations because the overall supply chain for a 1-command install can involve several different parties, who all update things at different frequencies :/
So chart A includes subchart B, which consumes an image from party C, who haven't updated to foobar X yet. You either need to wait for three different parties to update their stuff to get mainline fixed, or you roll up your sleeves and start rebuilding things, hosting your own images and forking charts. At first you build one image and set one value, but the problem grows over time.
If you update independently you end up running version combinations of software that the OG vendor has never tested.
This is not Helm's fault, of course; it's just the reality of deploying software with a lot of moving parts.