Posted by max_lt 1/1/2026

Show HN: OpenWorkers – Self-hosted Cloudflare Workers in Rust (openworkers.com)
I've been working on this for some time now, starting with vm2, then deno-core for 2 years, and recently rewrote it on rusty_v8 with Claude's help.

OpenWorkers lets you run untrusted JS in V8 isolates on your own infrastructure. Same DX as Cloudflare Workers, no vendor lock-in.

What works today: fetch, KV, Postgres bindings, S3/R2, cron scheduling, crypto.subtle.
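Since the pitch is "same DX as Cloudflare Workers", a minimal handler sketch gives the flavor. This assumes the standard Workers module shape (`fetch(request, env)`) and a KV binding named `KV`; the binding name and setup are illustrative, not OpenWorkers specifics. The in-memory stub below just shows the handler is a plain function you can exercise in Node 18+:

```javascript
// Worker sketch in the Cloudflare Workers module shape.
// The binding name "KV" is illustrative; configure it per your setup.
const worker = {
  async fetch(request, env) {
    const url = new URL(request.url);
    const hits = Number((await env.KV.get("hits")) ?? 0) + 1;
    await env.KV.put("hits", String(hits));
    return new Response(JSON.stringify({ path: url.pathname, hits }), {
      headers: { "content-type": "application/json" },
    });
  },
};

// Local smoke test with an in-memory KV stub
// (Node 18+ provides Request/Response globals).
const store = new Map();
const env = {
  KV: {
    get: async (k) => store.get(k) ?? null,
    put: async (k, v) => { store.set(k, v); },
  },
};
const res = await worker.fetch(new Request("http://localhost/hello"), env);
const body = await res.json();
console.log(body); // { path: '/hello', hits: 1 }
```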

Self-hosting is a single docker-compose file + Postgres.

Would love feedback on the architecture and what feature you'd want next.

500 points | 158 comments
abalashov 1/1/2026|
What if we hosted the cloud... on our own computers?

I see we have entered that phase in the ebb and flow of cloud vs. self-hosting. I'm seeing lots of echoes of this everywhere, epitomised by talks like this:

https://youtu.be/tWz4Eqh9USc

locknitpicker 1/1/2026||
> What if we hosted the cloud... on our own computers?

The value proposition of function-as-a-service offerings is not "cloud" buzzwords, but providing an event-handling framework where developers can focus on implementing event handlers that are triggered by specific events.

FaaS frameworks are the high-level counterpart of low-level message brokers plus web services/background tasks.

Once you include queues in the list of primitives, durable executions are another step in that direction.

If you have any experience developing and maintaining web services, you'll understand that API work largely consists of writing boilerplate code, controller actions, and background tasks. FaaS frameworks abstract away the boilerplate work.

nine_k 1/1/2026||
It won't be a... cloud?

To me, the principal differentiator is the elasticity. I start and retire instances according to my needs, and only pay for the resources I've actually consumed. This is only possible on a very large shared pool of resources, where spikes of use even out somehow.

If I host everything myself, the cloud-like deployment tools simplify my life, but I still pay the full price for my rented / colocated server. This makes sense when my load is reasonably even and predictable. This also makes sense when it's my home NAS or media server anyway.

(It is similar to using a bus vs owning a van.)

rcarmo 1/2/2026||
It will be a very small cloud.
byyll 1/1/2026||
Isn't the whole point of Cloudflare's Workers to pay per function? If it is self-hosted, you must dedicate hardware in advance, even if it's rented in the cloud.
shimman 1/1/2026|
Many companies that run self-hosted servers in data centers still need to run software on top of them. Not every company needs to pay others for things it's capable of doing itself.

Having options that mimic paid services is a good thing and helps with adoption.

victorbjorklund 1/1/2026||
Cool. I always liked CF Workers but haven't shipped anything serious with them because I didn't want vendor lock-in. This is perfect for knowing you've always got an escape hatch.
orliesaurus 1/1/2026||
Good to see this! Cloudflare's cool, but those locked-in things (KV, D1, etc.) always made it hard to switch. Offering open-source alternatives is always good, but maintaining them falls on the community. Even without super-secure multi-tenancy, being able to run the same code on your own stuff or a small VPS without changing the storage is a huge dev experience boost.
mariopt 1/2/2026||
Amazing work!

I have been thinking exactly about this. CF Workers are nice but the vendor lock-in is a massive issue mid to long term. Bringing D1 makes a lot of sense for web apps via libSql (SQLite with read/write replicas).

Do you intend to support the current wrangler file format? Does this currently work with Hono.js and the Cloudflare connector?

max_lt 1/2/2026|
Wrangler file format: not planned. We're taking a different approach for config, but we intend to be compatible with Cloudflare adapters (SvelteKit, Astro, etc). The Assets binding already has the same API. We just need to support _routes.json and add static file routing on top of workers; the data model is ready for it.

For D1: our DB binding is Postgres-based, so the API differs slightly. Same idea, different backend.

Hono should just work; it just needs a manual build step and copy-paste for now. We will soon host the OpenWorkers dashboard and API (Hono) directly on the runner (just some plumbing to do at this point).

mariopt 1/2/2026||
I think it would be worth keeping D1 compatibility, since SQLite and Postgres have different SQL dialects. Cloudflare has Hyperdrive to keep connections alive to Postgres and other DBs, but what D1/libSQL/Turso bring to the table is the ability to run a read/write replica on the machine itself, which can dramatically reduce latency.
strangescript 1/1/2026||
Cool project, but I never found the Cloudflare DX desirable compared to self-hosted alternatives. A plain old Node server in a Docker container was much easier to manage, use, and scale. Cloudflare's system was just a hoop you needed to jump through to get to the other nice-to-haves in their cloud.
skybrian 1/1/2026|
Would it be useful for testing apps that you're going to deploy on Cloudflare anyway?
nextaccountic 1/1/2026||
Any reason to abandon Deno?

edit: if the idea was to have compatibility with cloudflare workers, workers can run deno https://docs.deno.com/examples/cloudflare_workers_tutorial/

max_lt 1/1/2026|
Deno core is great and I didn't really abandon Deno – we support 5 runtimes actually, and Deno is the second most advanced one (https://github.com/openworkers/openworkers-runtime-deno). It broke a few weeks ago when I added the new bindings system and I haven't had time to fix it yet. Focused on shipping bindings fast with the V8 runtime. Will get back to Deno support soon.
vmg12 1/1/2026||
Does this actually use the cloudflare worker runtime or is this just a way to run code in v8 isolates?
max_lt 1/1/2026|
It's a custom V8 runtime built with rusty_v8, not the actual Cloudflare runtime (github.com/openworkers/openworkers-runtime-v8). The goal is API compatibility – same Worker syntax (fetch handler, Request/Response, etc.) so you can migrate code easily. Under the hood it's completely independent.
brainless 1/2/2026||
I am always interested in seeing alternatives in the edge compute space but self hosting does not make sense to me.

The benefit of edge is the availability close to customers. Unless I run many servers, it is simply easier to run one server instead of "edge".

IntelliAvatar 1/2/2026|
Nice project.

One thing Cloudflare Workers gets right is strong execution isolation. When self-hosting, what’s the failure model if user code misbehaves? Is there any runtime-level guardrail or tracing for side-effects?

Asking because execution is usually where things go sideways.

max_lt 1/2/2026|
Workers that hit limits (CPU, memory, wall-clock) get terminated cleanly with a clear reason. Exceptions are caught with stack traces (at least they should be, lol), and logs stream in real time.

What's next: execution recording. Every invocation captures a trace: request, binding calls, timing. Replay locally or hand it to an AI debugger. No more "works on my machine".

I think the CLI will look like:

  # Replay a recorded execution:
  openworkers replay --execution-id abc123

  # Replay with updated code, compare behavior:
  openworkers replay --execution-id abc123 --worker ./dist/my-fix.js

Production bug -> replay -> AI fix -> verified -> deployed. That's what I have in mind.
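The record/replay idea described above can be sketched in a few lines: wrap each binding so live calls are appended to a trace, then replay the trace without touching the real binding. Everything here (`recording`, `replaying`, the trace shape) is a hypothetical illustration, not the actual OpenWorkers API:

```javascript
// Live mode: forward each binding call and record { method, args, result }.
function recording(binding, trace) {
  return new Proxy(binding, {
    get: (target, method) => async (...args) => {
      const result = await target[method](...args);
      trace.push({ method, args, result });
      return result;
    },
  });
}

// Replay mode: serve recorded results back in call order,
// without ever touching a real binding.
function replaying(trace) {
  let i = 0;
  return new Proxy({}, {
    get: () => async () => trace[i++].result,
  });
}

// Live run: the real binding answers, and the call is recorded.
const trace = [];
const realKV = { get: async (key) => (key === "hits" ? "41" : null) };
const live = await recording(realKV, trace).get("hits"); // "41"

// Replay: the recorded result is returned; realKV is never called.
const again = await replaying(trace).get("hits"); // "41"
```

The tricky part, as noted below in the thread, is capturing the trace entry atomically with the side-effect so a replay can't diverge from what actually happened.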

IntelliAvatar 1/2/2026|||
This makes a lot of sense. Recording execution + replay is exactly what’s missing once you move past simple logging.

One thing I’ve found tricky in similar setups is making sure the trace is captured before side-effects happen, otherwise replay can lie to you. If you get that boundary right, the prod → replay → fix → verify loop becomes much more reliable.

Really like the direction.
