Posted by max_lt 1/1/2026

Show HN: OpenWorkers – Self-hosted Cloudflare Workers in Rust (openworkers.com)
I've been working on this for some time now, starting with vm2, then deno-core for 2 years, and recently rewrote it on top of rusty_v8 with Claude's help.

OpenWorkers lets you run untrusted JS in V8 isolates on your own infrastructure. Same DX as Cloudflare Workers, no vendor lock-in.

What works today: fetch, KV, Postgres bindings, S3/R2, cron scheduling, crypto.subtle.
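
A minimal worker looks roughly like this (Cloudflare-style fetch handler; the binding name and the expirationTtl option are illustrative and follow Cloudflare's KV conventions):

  export default {
    async fetch(request, env) {
      // Check the KV binding first (binding name comes from your own config)
      const cached = await env.CACHE.get('greeting');
      if (cached) return new Response(cached);

      // Fall back to an upstream fetch and cache the result for an hour
      const upstream = await fetch('https://example.com/greeting');
      const body = await upstream.text();
      await env.CACHE.put('greeting', body, { expirationTtl: 3600 });
      return new Response(body);
    }
  };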

Self-hosting is a single docker-compose file + Postgres.

Would love feedback on the architecture and what feature you'd want next.

500 points | 158 comments
bob1029 1/1/2026|
> It brings the power of edge computing to your own infrastructure.

I like the idea of self-hosting, but it seems fairly strongly opposed to the concept of edge computing. The edge is only made possible by big ass vendors like Cloudflare. Your own infrastructure is very unlikely to have 300+ points of presence on the global web. You can replicate this with a heterogeneous fleet of smaller and more "ethical" vendors, but also with a lot more effort and downside risk.

patmorgan23 1/1/2026||
But do you need 300 pops to benefit from the edge model? Or would 10 pops in your primary territory be enough?
nrhrjrjrjtntbt 1/1/2026|||
For most applications 1 location is probably good enough. I assume HN is single location and I am a long way from CA but have no speed issues.

Caveat for high scale sites and game servers. Maybe for image-heavy sites too (but self hosting then adding a CDN seems like a low lock-in and low cost option)

locknitpicker 1/1/2026||
> For most applications 1 location is probably good enough.

If your use case doesn't require redundancy or high availability, why would you be using something like Cloudflare to start with?

robertcope 1/1/2026|||
Security. I host personal sites on Linodes and other external servers. There are no inbound ports open to the world. Everything is accessed via Cloudflare Tunnels and locked down via their Zero Trust services. I find this useful and good, as I don't really want to have to develop my personal services to the point where I'd consider them hardened for public internet access.
h33t-l4x0r 1/2/2026||
Not even ssh? What happens if cloudflare goes down?
c0balt 1/2/2026|||
Not oc, but services like Linode often offer "console" access via a virtualized tty for VPS systems.

Having a local backup user is a viable backup path then. If you wire up PAM enough you can even use MFA for local login.

robertcope 1/3/2026||||
Then I log in to Linode or whatever and open a hole up in the firewall. That's easy. But Cloudflare rarely goes down, not really something I worry about.
nwellinghoff 1/2/2026|||
You could restrict the ssh port by ip as well.
RandomDistort 1/1/2026||||
When you have a simple tool you've written for yourself that needs to be reliable and accessible, but that you don't use frequently enough to be worth the bother of running on your own server with all of that setup and ongoing maintenance.
max_lt 1/2/2026||||
The DX is great: simple deployment, no containers, no infra to manage. I build a lot of small weekend projects that I don't want to maintain once shipped. OpenWorkers gives you the same model when you need compliance or data residency.
gpm 1/1/2026||||
Free bandwidth. (Also the very good sibling-answer about tunnels).
Hamuko 1/2/2026||||
Cloudflare gives me free resources. If they tomorrow reduced my blog to be available on a single region only, I'd shrug and move on with my day.
nrhrjrjrjtntbt 1/2/2026||||
It takes a minute to set up for the CDN use case.
NicoJuicy 1/1/2026|||
Price
andrewaylett 1/1/2026||||
Honestly, for my own stuff I only need one PoP to be close to my users. And I've avoided using Cloudflare because they're too far away.

More seriously, I think there's a distinction between "edge-style" and actual edge that's important here. Most of the services I've been involved in wouldn't benefit from any kind of edge placement: that's not the lowest hanging fruit for performance improvements. But that doesn't mean that the "workers" model wouldn't fit, and indeed I suspect that using a workers model would help folk architect their stuff in a form that is not only more performant, but also more amenable to edge placement.

locknitpicker 1/1/2026||||
> But do you need 300 pops to benefit from the edge model? Or would 10 pops in your primary territory be enough?

I don't think that the number of PoPs is the key factor. The key factor is being able to route requests based on edge-friendly criteria (latency, geographical proximity, etc.) and automatically deploy changes in a way where the system ensures consistency.

This sort of project does not and cannot address those concerns.

Targeting the SDK and interface is a good hackathon exercise, but unless you want to put together a toy runtime to do some local testing, this sort of project completely misses the whole reason why this sort of technology is used.

trevor-e 1/1/2026||||
I agree, latency is very important and 300 PoPs is great, but it seems more like marketing and would show diminishing returns for the majority of applications.
st3fan 1/1/2026|||
many apps are fine on a single server
closingreunion 1/1/2026||
Is some sort of decentralised network of hosts somehow working together to challenge the Cloudflare hegemony even plausible? Would it be too difficult to coordinate in a safe and reliable way?
geysersam 1/1/2026||
If you have a central database, what benefits are you getting from edge compute? This is a serious question. As far as I understand edge computing is good for reducing latency. If you have to communicate with a non-edge database anyway, is there any advantage from being on the edge?
csomar 1/2/2026|||
Databases in Cloudflare are not edge. That is, they are tied to a central location. Where workers help is async stateless tasks. There are a lot of these (authentication, email, notifications, etc.)
h33t-l4x0r 1/2/2026||
It has edge replicas though. You're talking about d1, right?
martinald 1/1/2026|||
Well you can cache stuff and also use read replicas. But yes, you are correct. For 'write' it doesn't help as much to say the least. But for some (most?) sites they are 99.9% read...
simonw 1/1/2026||
The problem with sandboxing solutions is that they have to provide very solid guarantees that code can't escape the sandbox, which is really difficult to do.

Any time I'm evaluating a sandbox that's what I want to see: evidence that it's been robustly tested against all manner of potential attacks, accompanied by detailed documentation to help me understand how it protects against them.

This level of documentation is rare! I'm not sure I can point to an example that feels good to me.

So the next thing I look for is evidence that the solution is being used in production by a company large enough to have a dedicated security team maintaining it, and with real money on the line for if the system breaks.

max_lt 1/1/2026||
Fair point. The V8 isolate provides memory isolation, and we enforce CPU limits (100ms) and memory caps (128MB). Workers run in separate isolates, not separate processes, so it's similar to Cloudflare's model. That said, for truly untrusted third-party code, I'd recommend running the whole thing in a container/VM as an extra layer. The sandboxing is more about resource isolation than security-grade multi-tenancy.
gpm 1/1/2026||
I think you should consider adjusting the marketing to reflect this. "untrusted JavaScript" -> "JavaScript", "Secure sandboxing with CPU (100ms) and memory (128MB) limits per worker" -> "Sandboxing with CPU (100ms) and memory (128MB) limits per worker", overhauling https://openworkers.com/docs/architecture/security.

Over promising on security hurts the credibility of the entire project - and the main use case for this project is probably executing trusted code in a self hosted environment not "execut[ing] untrusted code in a multi-tenant environment".

max_lt 1/1/2026||
Great point, thanks. Just updated the site – removed "untrusted" and "secure", added a note clarifying the threat model
m11a 1/1/2026|||
I agree, and as much as I think AI helps productivity, for a high security solution,

> Recently, with Claude's help, I rewrote everything on top of rusty_v8 directly.

worries me

CuriouslyC 1/1/2026||
Have you tried Opus 4.5?
samwillis 1/1/2026|||
Yes, exactly. The other reason Cloudflare workers runtime is secure is that they are incredibly active at keeping it patched and up to date with V8 main. It's often ahead of Chrome in adopting V8 releases.
oldmanhorton 1/1/2026|||
I didn’t know this, but there are also security downsides to being ahead of chrome — namely, all chrome releases take dependencies on “known good” v8 release versions which have at least passed normal tests and minimal fuzzing, but also v8 releases go through much more public review and fuzzing by the time they reach chrome stable channel. I expect if you want to be as secure as possible, you’d want to stay aligned with “whatever v8 is in chrome stable.”
kentonv 1/1/2026||
Cloudflare Workers often rolls out V8 security patches to production before Chrome itself does. That's different from beta vs. stable channel. When there is a security patch, generally all branches receive the patch at about the same time.

As for beta vs. stable, Cloudflare Workers is generally somewhere in between. Every 6 weeks, Chrome and V8's dev branch is promoted to beta, beta branch to stable, and stable becomes obsolete. Somewhere during the six weeks between versions, Cloudflare Workers moves from stable to beta. This has to happen before the stable version becomes obsolete, otherwise Workers would stop receiving security updates. Generally there is some work involved in doing the upgrade, so it's not good to leave it to the last moment. Typically Workers will update from stable to beta somewhere mid-to-late in the cycle, and then that beta version subsequently becomes stable shortly thereafter.

(I'm the lead engineer for Cloudflare Workers.)

max_lt 1/2/2026||
Thanks for the clarification on CF's V8 patching strategy, that 24h turnaround is impressive and exactly why I point people to Cloudflare when they need production-grade multi-tenant security.

OpenWorkers is really aimed at a different use case: running your own code on your own infra, where the threat model is simpler. Think internal tools, compliance-constrained environments, or developers who just want the Workers DX without the vendor dependency.

Appreciate the work you and the team have done on Workers, it's been the inspiration for this project for years.

ForHackernews 1/1/2026|||
Not if you're self-hosting and running your own trusted code, you don't. I care about resource isolation, not security isolation, between my own services.
twosdai 1/1/2026||
Completely agree. There are some apps that unfortunately need to care about some level of security isolation, but with OpenWorkers they could just put those specific workers on their own isolated instance.
imcritic 1/1/2026|||
I don't think what you want is even possible. What would such guarantees even look like? "Hello, we are a serious cybersec firm and we have evaluated the code and it's pretty sound, trust us!"?

"Hello, we are a serious cybersec firm and we have evaluated the code and here are our test with results that proof that we didn't find anything, the code is sound; Have we been through? We have, trust us!"

gpm 1/1/2026|||
In terms of a one off product without active support - the only thing I can really imagine is a significant use of formal methods to prove correctness of the entire runtime. Which is of course entirely impractical given the state of the technology today.

Realistically security these days is an ongoing process, not a one off, compare to cloudflare's security page: https://developers.cloudflare.com/workers/reference/security... (to be clear when I use the pronoun "we" I'm paraphrasing and not personally employed by cloudflare/part of this at all)

- Implicit/from other pieces of marketing: We're a reputable company, and these other big reputable companies (who care about security and are juicy targets for attacks) use this product.

- We update V8 within 24 hours of a security update, compared to weeks for the big juicy target of Google Chrome.

- We use various additional sandboxing techniques on top of V8, including the complete lack of high precision timers, and various OS level sandboxing techniques.

- We detect code doing strange things and move it out of the multi-tenant environment into an isolated one just in case.

- We detect code using APIs that increase the attack surface (like debuggers) and move it out of the multi-tenant environment into an isolated one just in case.

- We will keep investing in security going forwards.

Running secure multi-tenant environments is not an easy problem. It seems unlikely that it's possible for a typical open source project (typical in terms of limited staffing, usually including a complete lack of on-call staff) to release software to do so today.

max_lt 1/1/2026||
Agreed. Cloudflare has dedicated security teams, 24h V8 patches, and years of hardening – I can't compete with that. The realistic use case for OpenWorkers is running your own code on your own infra, not multi-tenant SaaS. I will update the docs to reflect this.
AgentME 1/1/2026||||
Something like "all code is run with no permissions to the filesystem or external IO by default, you have to do this to add fine-grained permissions for IO, the code is run within an unprivileged process that's sandboxed using standard APIs to defend in depth against possible v8 vulnerabilities, here's how this system protects against obvious possible attacks..." would be pretty good. Obviously it's not proof it's all implemented perfectly, but it would be a quick sign that the project is miles ahead of a naive implementation, and it would give someone interested some good pointers on what parts to start reviewing.
max_lt 1/2/2026||
This is exactly where we see things heading. The trust model is shifting - code isn't written by humans you trust anymore, it's generated by models that can be poisoned, confused, or just pick the wrong library.

We're thinking about OpenWorkers less as "self-hosted Cloudflare Workers" and more as a containment layer for code you don't fully control. V8 isolates, CPU/memory limits, no filesystem access, network via controlled bindings only.

We're also exploring execution recording - capture all I/O so you can replay and audit exactly what the code did.

Production bug -> replay -> AI fix -> verified -> deployed.

simonw 1/1/2026||||
That's the problem! It's really hard to find trustworthy sandboxing solutions, I've been looking for a long time. It's kind of my white whale.
laurencerowe 1/1/2026|||
As I understand it, separate isolates in a single process are inherently less secure than separate processes (e.g. Chrome's site isolation), which are again less secure than virtualization-based solutions.

As a TinyKVM / KVM Server contributor I'm obviously hopeful our approach will work out, but we still have some way to go to get to a level of polish that makes it easy to get going with and have the confidence of production level experience.

TinyKVM has the advantage of a much smaller surface area to secure as a KVM based solution and the ability to offer fast per-request isolation as we can reset the VM state a couple of orders of magnitude faster than v8 can create a new isolate from a snapshot.

https://github.com/libriscv/kvmserver

indigodaddy 1/1/2026|||
I imagine you messed about with Sandstorm back in the day?
d4mi3n 1/1/2026|||
Other response address how you could go about this, but I'd just like to note that you touch on the core problem of security as a domain: At the end of the day, it's a problem of figuring out who to trust, how much to trust them, and when those assessments need to change.

To use your example: Any cybersecurity firm or practitioner worth their salt should be *very* explicit about the scope of their assessment.

- That scope should exhaustively detail what was and wasn't tested.

- There should be proof of the work product, and an intelligible summary of why, how, and when an assessment was done.

- They should give you what you need to have confidence in *your understanding of* your security posture, as well as evidence that you *have* a security posture you can prove with facts and data.

Anybody who tells you not to worry and take their word for something should be viewed with extreme skepticism. It is a completely unacceptable frame of mind when you're legally and ethically responsible for things you're stewarding for other people.

vlovich123 1/1/2026|||
Since it’s self hosted the sandboxing aspect at the language/runtime level probably matters just a little bit less.
ZiiS 1/1/2026|||
I think this is sandboxed so your debugging doesn't need to consider interactions, not sandboxed so you can run untrusted code.
andrewaylett 1/1/2026||
Cloudflare needs to worry about their sandbox, because they are running your code and you might be malicious. You have less reason to worry: if you want to do something malicious to the box your worker code is running on, you already have access (because you're self-hosting) and don't need a sandbox escape.
AgentME 1/1/2026||
Automatically running LLM-written code (where the LLM might be naively picking a malicious library to use, is poisoned by malicious context from the internet, or wrongly thinks it should reconfigure the host system it's executing code on) is an increasingly popular use-case where sandboxing is important.
andrewaylett 1/2/2026||
That scenario is harder to distinguish from the adversarial case that public hosts like Cloudflare serve. I don't think it's unreasonable to say that a project like OpenWorkers can be useful without meeting the needs of that particular use-case.
tbrockman 1/1/2026||
Cool project, great work!

Forgive the uninformed questions, but given that `workerd` (https://github.com/cloudflare/workerd) is "open-source" (in terms of the runtime itself, less so the deployment model), is the main distinction here that OpenWorkers provides a complete environment? Any notable differences between the respective runtimes themselves? Is the intention to ever provide a managed offering for scalability/enterprise features, or primarily focus on enabling self-hosting for DIYers?

max_lt 1/1/2026|
Thanks! Main differences: 1. Complete stack: workerd is just the runtime. OpenWorkers includes the full platform – dashboard, API, scheduler, logs, and self-hostable bindings (KV, S3/R2, Postgres). 2. Runtime: workerd uses Cloudflare's C++ codebase, OpenWorkers is Rust + rusty_v8. Simpler, easier to hack on. 3. Managed offering: Yes, there's already one at dash.openworkers.com – free tier available. But self-hosting is a first-class citizen.
csomar 1/2/2026||
Question: Do you support WASM workers? How does the deployment experience compare to Wrangler? If I have a WASM worker and only use KV, how identical will the deployed worker be to one on Cloudflare?
max_lt 1/2/2026||
WASM is supported, V8 handles it natively. Tested it briefly, works, but not user-friendly at all yet.
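
To give an idea, the only path right now is the standard WebAssembly JS API from inside a worker, roughly this (sketch; the .wasm URL and the exported add function are placeholders, error handling omitted):

  export default {
    async fetch(request, env) {
      // Pull the module bytes from wherever you host them (no wasm bundling step yet)
      const resp = await fetch('https://example.com/add.wasm');
      const bytes = await resp.arrayBuffer();

      // Standard WebAssembly JS API - V8 compiles and runs it natively
      const { instance } = await WebAssembly.instantiate(bytes, {});
      const sum = instance.exports.add(2, 3);

      return new Response(`2 + 3 = ${sum}`);
    }
  };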

OpenWorkers CLI is in development. We're at the pre-wrangler stage honestly. Dashboard or API for now; wrangler-style DX with GitHub/GitLab integration is the goal.

indigodaddy 1/1/2026||
Perhaps it might be helpful to some to also lay out the things that don't work today (or, e.g., a roadmap of what's being worked on that doesn't currently work?). Anyway, looks very cool!
max_lt 1/1/2026||
Good idea! Main things not yet implemented: Durable Objects, WebSockets, HTMLRewriter, and cache API. Next priority is execution recording/replay for debugging. I'll add a roadmap section to the docs.
yuhhgka 1/1/2026||
[flagged]
kachapopopow 1/1/2026||
Anything I see that reduces the reliance on vendor lock-in, I upvote. Hopefully cloud services see a mass exodus so they have to have reasonable pricing that actually reflects their costs instead of charging more than free for basic services like NAT.

Cloud services are actually really nice and convenient if you were to ignore the eye watering cost versus DIY.

rozenmd 1/1/2026||
Probably worth pointing out that the Cloudflare Workers runtime is already open source: https://github.com/cloudflare/workerd
max_lt 1/1/2026||
True, workerd is open source. But the bindings (KV, R2, D1, Queues, etc.) aren't – they're Cloudflare's proprietary services. OpenWorkers includes open source bindings you can self-host.
buremba 1/1/2026||
I tried to run it locally some time ago, but it's buggy as hell when self-hosted. It's not even worth trying out given that CF itself doesn't suggest it.
ketanhwr 1/1/2026||
I'm curious what bugs you encountered. workerd does power the local runtime when you test CF workers in dev via wrangler, so we don't really expect/want it to be buggy..
buremba 1/4/2026||
There is a big "WARNING: This is a beta. Work in progress" message in https://github.com/cloudflare/workerd

Specifically, half of the services operate locally, and the other half require CF services. I mainly use Claude Code to develop, and it often struggles to replicate the local environment, so I had to create another worker in CF for my local development.

Initially, the idea was to use CF for my side projects as it's much easier than K8S, but after wrestling with it for a month, decided that it's not really worth investing that much, and I moved back to using K8S with FluxCD instead, even though it's overkill as well.

kentonv 1/4/2026||
> There is a big "WARNING: This is a beta. Work in progress"

Ughhhh that is because nobody ever looks at the readme so it hasn't been updated basically since workerd was originally released. Sorry. I should really fix that.

> Specifically, half of the services operate locally, and the other half require CF services.

workerd itself is a runtime for Workers and Durable Objects, but is not intended to provide implementations of other services like KV, D1, etc. Wrangler / miniflare provides implementations of most of these for local testing purposes, but these aren't really meant for production.

But workers + DO alone is enough to do a whole lot of things...

buremba 1/4/2026||
Thanks a ton for the quick response! I totally get that workerd is not intended to be an emulator of all CF services, but the fact that I still need an external dependency for local development, and that the code I developed can't be used outside the CF environment, makes me feel locked in to the environment.

I'm mostly using terminal agents to write and deploy code. I made a silly mistake, not reviewing the code before merging it into main (side project, zero users), and my durable object alarms got into an infinite loop, and I got a $400 bill in an hour. There was no way to set rate limits for the AI binding in workers, and I didn't get any notification, so I created a support ticket 2 months ago, which hasn't been answered to this date.

That was enough for me to move out of CF as a long-time user (>10 years) and believer (CF is still one of my biggest stocks). In a world where AI writes most of the code, it's scary to have the requirement to deploy to a cloud that doesn't have any way to set rate limits.

I learned the hard way that I must use AI Gateway in this situation, but authentication is harder with it, and agents prefer embedded auth, which makes them pick the AI binding over AI Gateway. K8S is not easy to maintain, but at least I can fully control the costs without worrying about the cost of experimentation.

geek_at 1/1/2026|||
I'm worried that increasing RAM prices will drive more people away from local hosting and toward cloud services, because if the big companies are buying up all the resources it might not be feasible to self-host in a few years.
kachapopopow 1/1/2026||
the pricing is so insane it will always be cheaper to self host by 100x, that's how bad it is.
dijit 1/1/2026|||
not 100x.

10% is the number I ordinarily see, accounting for staff and adequate DR systems.

If we had paid our IT teams half of what we pay a cloud provider, we would have had better internal processes.

Instead we starved them and the cloud providers successfully weaponised extremely short term thinking against us, now barely anyone has the competence to actually manifest those cost benefits without serious instability.

kachapopopow 1/1/2026||
I genuinely mean that. fly.io (as unreliable as it might seem) is already around ~5x to 10x cheaper depending on use case, and for some services it's actually infinitely cheaper, because it's completely free when you self host!

GCP pricing is absolutely wicked: they charge $120/month for 4 vCPUs and 16 GB RAM, when you can get around 23 times more performance and 192 GB RAM for $350/month, with X-Tbps-ish DDoS protection.

I have 2 dual-7742 machines with 1 TB RAM each, 3 9950X3Ds with 192 GB ECC, and 2 7950X3Ds, all at <$600/month. Obviously the original hardware cost was in the realm of $60k - the EPYC CPUs were bought used for around $1k each, so not a bad deal, same with the RAM; overall the true cost is <$20k. This is entirely for personal use and will most likely last me more than a decade, unless there are major gains in efficiency and power costs continue to grow due to AI demand. This also includes 100 TB+ of HDD storage and 40 TB of NVMe storage, all connected with a pair of 100 Gbps switches for redundancy, at a cheap cheap price of $500 per switch.

I guess I owe some links: (Ignore minecraft focused branding)

https://pufferfish.host/ (also offers colocation)

telegram: @Erikb_9gigsofram direct colocation at datacenter (no middlemen / sales) + good low cost bundle deal

anti-ddos: https://cosmicguard.com/ (might still offer colocation?)

anti-ddos: https://tcpshield.com/

Imustaskforhelp 1/1/2026|||
Wait, what? Can you show me some sources to back this up? I assume you are exaggerating, but still, what the definition of "cheap" would be is interesting to know.

I don't think, after RAM prices spiked 4-5x, that it's gonna be cheaper to self-host by 100x. Hetzner's or OVH's cloud offerings are cheap.

Plus you have to put in a lot of money up front, and then still pay for something like colocation if you are competing with them.

Even if you aren't, I think the models are different. Cloud is a monthly subscription, whereas hardware you have to purchase up front.

It would be interesting tho to compare hardware-as-a-service or similar as well but I don't know if I see them for individual stuff.

andruby 1/1/2026||
100x is probably hyperbole. 37signals saved between 50 and 66% in hosting costs when moving from cloud to self-hosted.

https://basecamp.com/cloud-exit

victorbjorklund 1/1/2026|||
But they have scale. A small company will save less because it’s not that much more work to handle say a 100 node kubernetes cluster vs a 10 node kubernetes cluster.
shimman 1/1/2026|||
Self hosting nowadays is way way way way easier than you're thinking. I work with various political campaigns, and the first thing I help every team do is provision a 10 year old laptop, flash Linux, and set up DDNS. A $100 investment is more than enough for a campaign of 10-20ish dedicated workers that will only be hitting this system one or two users at a time. If I can teach a random 70 year old retiree or 16 year old how to type a dozen different commands, I'm sure a paid professional can learn too.

People need to realize that when you self-host you can choose to follow physical business constraints. If no one is in the office to turn on a computer, you're good. Also, consumer hardware is so powerful (even 10 year old hardware) that it can easily handle 100k monthly active users, which is barely 3k daily users, and I doubt most SMBs actually need to handle anything beyond 500 concurrent users hardware-wise. So if that's the choice, it comes down to writing better and more performant software, which is what is lacking nowadays.

People don't realize how good modern tooling + hardware has come. You can get by with very little if you want.

I'd bet my year's salary that a good 40% of AWS customers could probably be fine with a single self-hosted server using basic plug-and-play FOSS software on consumer hardware.

People in our industry have been selling massive lies about the need for scalability; the number of companies that actually require such scalability is quite small in reality. You don't need a rocket ship to walk 2 blocks, and it often feels like this is the case in our industry.

If self hosting is "too scary" for your business, you can buy a $10 VPS, but after one single year you can probably find decade-old hardware that is faster than what you pay for.

oldandboring 1/1/2026|||
I'm in your camp but I go for the cheap VPS. Lightsail and DigitalOcean are amazing -- for $10/mo or less you get a cheap little box that's essentially everything you describe, but with all the peace of mind that comes from not worrying about physical security, physical backups, dynamic IPs/DDNS, and running out of storage space. You're right that almost nobody needs most of the stuff that AWS/GCP/Azure can do, but some things in the cloud are worth paying for.
Imustaskforhelp 1/1/2026||
Yea absolutely this. This is what I was saying: having a VPS for starting out definitely makes sense. I think building your own cloud starts to make sense around the $500-1000/month mark.

I searched Hetzner and honestly, at just around the $500 mark ($506.04) in their refurbished server auction, I can get around 1024 GB of RAM, an AMD EPYC 7502, and 2 x 1.92 TB datacenter SSDs.

In this ramflation, getting that much RAM would otherwise break the bank.

I love homelabbing too, and I think an old laptop can sometimes be enough for basic things or even more modern ones, but I did some calculations, and ColoCrossing, the professional renting model, or even buying new hardware would probably not work out in this ramflation.

It's sad, but the only place it might make sense is if you can get a good firewall and have an old laptop or server and are willing to do something like this. Even then I have heard it described as not worth it by many, but I think it's an interesting experiment.

Also, regarding the 1024 GB of RAM: holy... I wonder how many programs need that much. I will still do homelabbing and such, but I'm kinda hard pressed on how much we can recommend it when ramflation is this bad. That's why, when I saw someone originally saying 100x, I really wondered how much is enough, at what scale, and what others think.

victorbjorklund 1/1/2026|||
Yea, but admit that I am right that it is not that much harder to manage 100 nodes vs 10 nodes. (At least you can agree you don’t need 10x more staff to manage 100 nodes instead of 10)

That’s the key. If you need one person or 3 persons doesn’t matter. The point is the salaries are fixed costs.

mystifyingpoi 1/1/2026|||
You are right, but it's a feature of Kubernetes actually. If you treat nodes as cattle, then it doesn't matter if there is 10 or 100 or 1000, as long as the apiserver can survive the load and upgrades don't take too long (though upgrades/maintenance can be done slowly for even days without any problems).

But all the stateful crap (like databases) gets trickier and harder the more machines you have.

shimman 1/1/2026|||
Ah sorry, I completely misread. You are right, and to add another dimension: even when you choose to go to the cloud you still have to hire nearly the same amount of personnel to deal with those tools. I've never worked at a software company that didn't have devs specifically to deal with cloud issues and integrations.
kachapopopow 1/1/2026|||
A small company benefits more than anyone, since it's not rocket science to learn these things; you can just put on your system administrator hat once every few weeks. It would not be ideal to lose that employee, which is why I always suggest a couple of people picking up this very useful skill.

But I don't know much about how it is in the real world with a normal 9 to 5. I have taken up jobs from system administration to reverse engineering, and even making plugins and infrastructure for Minecraft. These days I generally only work when people don't have any other choice and need someone who is pretty good at everything, so I am completely out of the loop.

victorbjorklund 1/2/2026||
It takes me almost equal time to manage a Kubernetes cluster with 10 nodes as with 100 nodes. If I have to spend say 5 hours per month at a cost of say 100 usd/hour, it costs 500 usd/month to manage. If leaving the cloud saves say 100 usd/node (from 200 usd/node), it means for a small company its cost would be (10 x 100) + 500 = 1500 usd/month, which is a cost reduction of 25%. For a large company it would be (100 x 100) + 500 = 10500, which means 47.5% savings. Do you see why the savings are greater with scale?
kachapopopow 1/2/2026||
well bigger clusters have weird complexities and require specialized knowledge if you don't want your production to blow up every couple of months.

small clusters can be run with minimal knowledge which means the added cost is $0.

Imustaskforhelp 1/1/2026|||
Considering that ramflation happened, and assuming the cost of hardware is spread over 5 years, someone please run the numbers again.

It would be interesting to see the scale of Basecamp. I just saw that Hetzner offers 1024 GB of RAM for around $500.

Um, 37signals spent around $700k on servers I think, so if someone has that much money floating around, perhaps.

Yea, I looked at their numbers and they mentioned $1300/month just for hardware for 1.3 TB, so Hetzner might still make more sense economically somehow.

I think the problem for some of these is that they go too hard on the managed services. Those are good sometimes as well, but there are cheaper managed clouds than AWS (UpCloud, OVH, etc.), and at the end of the day it's good to remember that if it bothers you financially, you can migrate.

Honestly, do whatever you want. Start however you want, because these things definitely interest me (which is why I am here), but I think most compute providers have really gone down the race-to-the-bottom path.

The problem, to me, usually comes when you are worried you might break the terms of service or something similar once you are at scale. Not that this exactly stops being a problem with colo, but colo still brings more freedom.

I think if one wants freedom, they can always contact some compute providers and find what can support their use case the best while still being economical. And then choose the best option from the multitude of available options.

Also vertical scaling is a beast.

I've really gotten into learning about cloud prices recently, so I want to ask: can you tell me more about the servers that 37signals bought, or any other company you know of? I could probably create a list of when it makes sense and when it doesn't, and of the best options available on the market.

andruby 1/6/2026||
They went for Dell servers: https://world.hey.com/dhh/the-hardware-we-need-for-our-cloud...

Hardware with service contracts makes sense. You can probably get the hardware even cheaper if you build Supermicro servers, but then you'll spend more time on hardware support.

Dell makes a ton of sense.

re-thc 1/1/2026||
> so they have to have reasonable pricing that actually reflects their costs instead of charging more than free for basic services like NAT

How is the cost of NAT free?

> Cloud services are actually really nice and convenient if you were to ignore the eye watering cost versus DIY.

I don't doubt clouds are expensive, but in many countries it'd cost more to DIY for a proper business. Running a service isn't just running the install command. Having a team to maintain and monitor services is already expensive.

nijave 1/2/2026|||
Presumably they're talking about the egregious price of NAT on AWS.

It's next to free when self-hosting, considering even the crappiest consumer router has hardware-accelerated NAT and draws a tiny amount of power. You likely already have the hardware and power, since you need routing and potentially other network services anyway.

re-thc 1/2/2026||
> Presumably they're talking about the egregious price of NAT on AWS.

Maybe. I agree AWS is over-priced. However it shouldn't be "free".

> It's next to free self hosting considering even the crappiest consumer router

That's not the same product / service is it? We're discussing networking products and this "crappiest" consumer router wouldn't even push real world 100m of packets.

kachapopopow 1/1/2026||||
Salesforce had their hosting bill jump orders of magnitude after ditching their colocation; it did not save anything, and colocation staff were replaced with AWS engineers.

NAT is nearly free to provide because the infrastructure for it is already there, and nothing ever maxes out a switch cluster (most switches sit at ~1% usage since they're overspecced $1,000,000 switches). The only real cost is host CPU time handling interrupts, and even that is unlikely since most network cards offload it.

Sure, you could argue that regional NAT should maybe be priced, but these companies have so much fiber between their datacenters that all of their NAT usage is probably a rounding error.

pyvpx 1/2/2026||
NAT is a stateful network function and incredibly complex to implement efficiently. NAT is never free.
kachapopopow 1/2/2026||
It's already there and fully supported and accelerated by switches and connected hardware. Switches like Juniper's do have licensing fees to use such features, but a company like AWS can surely work around those licensing costs and build an in-house solution.
re-thc 1/2/2026||
> it's already there

So it should be free? The bank already has "money". It's already there so you can take it?

That's not how it works.

Do you not get a managed service where someone upgrades it, deals with outages etc? Are those people that work 24/7 free or is it another "already there"?

kachapopopow 1/2/2026||
Fair point, but the cost of NAT is so low that it would actually take more effort to create billing for it than to just have it be free. It's clearly a choice to maximize profits on every single resource regardless of complexity or cost - that is my problem.

And there are things that come for free when you have infrastructure this big and expansive - it's a one-time configuration, and you either monetize it or pass down the savings. Since every cloud service is in agreement that profits should be maximized, you end up with cloud providers that run massive datacenters at very cheap cost thanks to economies of scale, yet charge far above normal hosting practices, through their ability to monopolize and spend vast amounts of money onboarding businesses with false promises. That erodes the infrastructure for non-cloud solutions and makes cloud providers the only choice for any business, as the talent and software ends up going into maintenance mode and/or turns toward higher profitability to stay afloat.

otterley 1/1/2026|||
They said “charging more than free” - i.e., more than $0, i.e., they’re not free. It was awkwardly worded.
re-thc 1/1/2026||
They said "instead of charging more than free", which means should be free.

Please read it again.

otterley 1/1/2026||
I think we’re in violent agreement, but you were ambiguous about what “cost” meant. It seems you meant “cost of providing NAT” but I interpreted it as “cost to the customer.”

> Please read it again.

There’s no need to be rude.

mmastrac 1/1/2026||
I did a huge chunk of work to split deno_core from deno a few years back and TBH I don't blame you for moving to raw rusty_v8. There was a _lot_ of legacy code in deno_core that was challenging to remove because touching a lot of the code would break random downstream tests in deno constantly.
max_lt 1/1/2026|
Thanks for that work! deno_core is a beautiful piece of work and is still an option for OpenWorkers: https://github.com/openworkers/openworkers-runtime-deno

We maintained it until we introduced bindings — at that point, we wanted more fine-grained control over the runtime internals, so we moved to raw rusty_v8 to iterate faster. We'll probably circle back and add the missing pieces to the deno runtime at some point.
j1elo 1/1/2026||
To the author: The ASCII-art Architecture diagram is very broken, at least on my Pixel phone with Firefox.

These kinds of text-based diagrams are appealing for us techies, but in the end I learned that they are less practical. My suggestion is to use an image, and think of the text-based version as the "source code" which you keep, meanwhile what gets published is the output of "compiling" it into something that is guaranteed to always be viewable without mistakes (that's where we tend to miss the mark with ASCII art).

max_lt 1/1/2026||
Thanks for the heads up! Fixed – added a simplified ASCII version for mobile.
j1elo 1/1/2026||
Thanks! Now I can make more sense of it! Very cool project by the way, thanks for posting it
vishnugupta 1/1/2026||
Rendered perfectly on my iPhone 11 Safari.
simlevesque 1/1/2026||
That's why we need to test websites on multiple browsers.
willtemperley 1/2/2026||
I've decided to ditch CF because Wrangler is deployed via NPM and I cannot bear NodeJS and Microsoft NPM anymore.

I get the impression this can't be run without NodeJS right now?

max_lt 1/2/2026|
Same conclusion here. No Node required — the runtime is pure Rust + V8. The only transformation we do is transpilation for TS code.
keepamovin 1/1/2026||
Technically and architecturally, this is excellent. It's also an excellent product idea. And I'm particularly a fan of the big-ass-vendor-inversion-model where instead of the big ass vendor ripping off an open source project and monetizing it, you look at one of their projects and you rip it off inversely and open source it — this is the way.
valdair3d 1/2/2026|
Self-hosted workers are becoming critical infrastructure for AI agent workloads. When you're running agents that need to interact with external services - web scraping, API orchestration, browser automation - you hit Cloudflare's execution limits fast. The 30s CPU time on the free tier and even the 15min on paid plans don't work for long-running agent tasks.

The isolation model here is interesting. For agents that need to handle untrusted input (processing user URLs, parsing arbitrary documents), V8 isolates give you a security boundary that's much lighter than full container isolation. But you trade off the ability to do things like spawn subprocesses or access the filesystem.

Curious about the persistence story. Most agent workflows need some form of state between invocations - conversation history, task progress, cached auth tokens. Is there a built-in KV store or does this expect external storage?

max_lt 1/2/2026|
Good use case. For state between invocations, we have KV (key-value with TTL), Storage (S3) and DB bindings (Postgres). Durable Objects not yet but it's on the roadmap.

Wall-clock timeout is configurable (default 30s), CPU limits too. We haven't prioritized long-running tasks or WebSockets yet, but shouldn't be hard to add.
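
For the cached-auth-token case specifically, the pattern is roughly this (sketch; the KV binding name, the expirationTtl option, and the token endpoint are illustrative, mirroring Cloudflare's KV API):

  // Reuse a token across invocations; let the KV TTL handle expiry/cleanup
  async function getToken(env) {
    const existing = await env.KV.get('agent:auth-token');
    if (existing) return existing;

    // Hypothetical token endpoint - replace with your provider
    const resp = await fetch('https://auth.example.com/token', { method: 'POST' });
    const { token, expires_in } = await resp.json();

    await env.KV.put('agent:auth-token', token, { expirationTtl: expires_in });
    return token;
  }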

valdair3d 1/6/2026||
nice, KV + Postgres covers most of our use cases. the TTL on KV is useful for caching auth tokens between invocations without worrying about cleanup.

for long-running tasks we've been using a queue pattern anyway - worker picks up task, does a chunk, writes state to KV, exits. next invocation picks up where it left off. works around timeout limits and handles retries gracefully. websockets would be nice for real-time feedback but polling works fine for now.
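
roughly what one invocation looks like for us (sketch; binding names simplified, and processChunk is a stand-in for the actual work):

  export default {
    async fetch(request, env) {
      const taskId = new URL(request.url).searchParams.get('task');

      // resume from whatever the previous invocation left in KV
      const saved = await env.KV.get(`task:${taskId}`);
      const state = saved ? JSON.parse(saved) : { cursor: 0, done: false };

      // do one bounded chunk of work, well under the wall-clock timeout
      const next = await processChunk(state);
      await env.KV.put(`task:${taskId}`, JSON.stringify(next));

      return new Response(next.done ? 'done' : 'continue');
    }
  };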

will keep an eye on the durable objects progress. that's the main thing missing for stateful agent workflows where you need guaranteed delivery.
