Posted by plesiv 2 days ago
I must be getting old, but building a gigantic house of cards of interlinked components only to arrive at a more limited solution is truly bizarre to me.
The maintenance burden for a VPS: periodically run `apt update && apt upgrade`. Use filesystem snapshots to create periodic backups. If something happens to your provider, spin up a new VM elsewhere from your last snapshot.
The maintenance burden for your solution: periodically merge upstream libgit2 into your custom fork, maintain your custom git server code and audit it for vulnerabilities, make sure everything still compiles with emscripten, and deploy it. Rotate API keys so your database service can talk to your storage service and your worker service. Then I don't even know how you'd back all this up to get it back online quickly if something happened to Cloudflare. And all that only to end up with worse latency than a VPS, and tighter size constraints on the repo and objects.
But hey, at least it scales infinitely!
And make sure it reboots for kernel upgrades (or set up live-patching), and make sure that service updates don't go wrong[0], and make sure that your backups work consistently, and make sure that you're able to vertically or horizontally scale, and make sure it's all automated and repeatable, and make sure the automation is following best practices, and make sure you're not accidentally configuring any services to be vulnerable[1], and ...
Making this stuff someone else's problem by using managed services is a lot easier, especially with a smaller team, because then you can focus on what you're building instead of on making sure your SPOF VPS is still running correctly.
[0] I self-host some stuff for a side project right now, and package updates are miserable because they're not simply `apt-get update && apt-get upgrade`. Instead, the documented upgrade process for some services is more or less "dump the entire DB, stop the service, rm -rf the old DB, upgrade the service package, start the service, load the dump back in, hope it works."
[1] Because it's so easy to configure something insecurely when that makes things more convenient, even if the vulnerability is unintentional.
There's only a difference here because there exist off-the-shelf git packages for traditional VPS environments but there do not yet exist off-the-shelf git packages for serverless stacks. The OP is a pioneer here. The work they are doing is what will eventually make this an off-the-shelf thing for everyone else.
> Rotate API keys to make sure your database service can talk to your storage service and your worker service.
Huh? With Durable Objects the storage is local to each object. There is no API key involved in accessing it.
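To make that concrete, here's a minimal sketch of what storage access looks like inside a Durable Object. The `GitRepo` class, keys, and request shape are invented for illustration and aren't taken from the OP's project:

```ts
// Hypothetical per-repo Durable Object; its storage is local to this object.
export class GitRepo {
  state: DurableObjectState;

  constructor(state: DurableObjectState) {
    this.state = state;
  }

  async fetch(request: Request): Promise<Response> {
    // No API key, connection string, or separate database service involved:
    // storage calls go directly to this object's own durable storage.
    const refs =
      (await this.state.storage.get<Record<string, string>>("refs")) ?? {};

    if (request.method === "PUT") {
      const { ref, oid } = (await request.json()) as { ref: string; oid: string };
      refs[ref] = oid;
      await this.state.storage.put("refs", refs);
    }

    return Response.json(refs);
  }
}
```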
> Then I don't even know how you'd backup all this
Durable Object storage (under the new beta storage engine) automatically gives you point-in-time recovery to any point in the last 30 days.
https://developers.cloudflare.com/durable-objects/api/storag...
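As a rough sketch of how you'd use it from inside a Durable Object on the SQLite-backed storage engine (simplified; the one-hour window and method body here are just an example, see the docs above for the full PITR API):

```ts
// Inside a Durable Object class: restore this object's storage to its state
// as of one hour ago.
async restoreToOneHourAgo(): Promise<void> {
  // Get the bookmark corresponding to a timestamp within the last 30 days...
  const bookmark = await this.state.storage.getBookmarkForTime(
    Date.now() - 60 * 60 * 1000,
  );
  // ...ask the runtime to restore to that bookmark when the object next starts...
  await this.state.storage.onNextSessionRestoreBookmark(bookmark);
  // ...and abort the object so it restarts and the restore takes effect.
  this.state.abort();
}
```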
> And all that only to end up with worse latency than a VPS
Why would it be worse? It should be better, because Cloudflare can locate each DO (git repo) close to whoever is accessing it, whereas your VPS is going to sit in one single central location that's probably further away.
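Roughly, the routing looks like this from the Worker's side (binding name and URL scheme invented for the example). The object is instantiated close to where it's first used, and later requests from anywhere are routed to that same instance:

```ts
// Hypothetical Worker entry point routing each repo to its own Durable Object.
interface Env {
  GIT_REPO: DurableObjectNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // e.g. https://example.com/alice/project.git/info/refs -> "alice/project.git"
    const repoName = new URL(request.url).pathname.split("/").slice(1, 3).join("/");

    // The same name always maps to the same object; Cloudflare places it
    // near the traffic rather than in one fixed datacenter.
    const id = env.GIT_REPO.idFromName(repoName);
    return env.GIT_REPO.get(id).fetch(request);
  },
};
```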
> and more size constraints on the repo and objects.
While each individual repo may be more constrained, this solution can scale to far more total repos than a single-server VPS could.
(I'm the tech lead for Cloudflare Workers.)
I am ending up with AWS Lambdas. Not only does that solve the Wasm issue, but you can have up to 10 GB of memory on a single instance. That is close to enough for most use cases. 100 MB? Not really.
I would love to use this as a live, working automatic backup of my GitHub repos on CF infrastructure.
Between libgit2 on emscripten, the number of file writes to DO, etc., how is the performance?
It provides client and server APIs. The latter is used by Gerrit for its server. https://www.gerritcodereview.com
Not sure what the Java-to-WASM story is, if that's a requirement for what they need.
Unfortunately, the entrepreneur in me continues that thought with "work that could have gone into finding customers instead". Now you have a system that could store "infinite" git repos, but how many customers?
But I can't figure out what makes this an AI company. Seems like a collaboration tool?