As I was reading this I was a bit confused by the issues they mention, but at work I use Claude SSH’d into a persistent dev server, and I’d be annoyed if I didn’t have, e.g., my git repos there all the time, or if any part of that workflow were ephemeral. I’m not really aware of what everyone else is doing with sandboxes etc.
But the bit at the end with the MDM server made it click for me. I’ve started generating tiny iOS apps for personal software, because they solve data storage better than the web does (at least on iOS). A database on some other server seems like a bad fit/overkill for this stuff, and client-side storage is too flaky because of Safari. But iOS apps are limiting in their own annoying ways compared to web apps.
This looks like a really interesting solution: I can just store the data on a sprite with SQLite or whatever. Visit its URL to use my app, and then it goes away on its own after a short time? I could have done that before with a server with storage, but this seems easier and probably cheaper.
If this works well/the way I’m hoping, it might be the sweet spot for simple personal software that needs persistent data and that you want to run from anywhere.
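Just to make it concrete, this is roughly the shape of thing I mean; a minimal sketch, assuming Go and the pure-Go modernc.org/sqlite driver (the file name and routes are invented), with the SQLite file living on the sprite’s disk so the data survives between visits:

    // Sketch of a tiny "personal software" web app: one process, one SQLite
    // file on the sprite's persistent disk. "notes.db" and the routes are
    // made up for illustration.
    package main

    import (
        "database/sql"
        "fmt"
        "log"
        "net/http"

        _ "modernc.org/sqlite" // registers the "sqlite" database/sql driver
    )

    func main() {
        db, err := sql.Open("sqlite", "notes.db")
        if err != nil {
            log.Fatal(err)
        }
        if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS notes (body TEXT)`); err != nil {
            log.Fatal(err)
        }

        // GET /add?body=... appends a note; GET / lists everything.
        http.HandleFunc("/add", func(w http.ResponseWriter, r *http.Request) {
            if _, err := db.Exec(`INSERT INTO notes (body) VALUES (?)`, r.URL.Query().Get("body")); err != nil {
                http.Error(w, err.Error(), 500)
                return
            }
            fmt.Fprintln(w, "ok")
        })
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            rows, err := db.Query(`SELECT body FROM notes`)
            if err != nil {
                http.Error(w, err.Error(), 500)
                return
            }
            defer rows.Close()
            for rows.Next() {
                var body string
                rows.Scan(&body)
                fmt.Fprintln(w, body)
            }
        })

        log.Fatal(http.ListenAndServe(":8080", nil))
    }

The whole “backend” is one process and one file, which is exactly the kind of thing I don’t want to provision a real database for.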
One feature that would make this really nice is something like Vercel preview environments, where I'd need to auth with my Fly account to view the URL. That'd solve the public-URL problem without me needing to do my own auth thing in every app.
In terms of actually making the app: I don't know Swift or iOS at all, so it's all generated. Usual caveats apply, and I'm only running these apps on my own phone. I ask Claude (not Claude Code) to help me with the spec: I give it some bullet points, it asks a bunch of clarifying questions, and then it gives me a spec. I put that in a new directory, fire up Claude Code, and use the ralph-loop plugin (https://github.com/anthropics/claude-code/tree/main/plugins/...):
> /ralph-loop:ralph-loop "Implement the iOS app described in app-spec.md. You have access to xcode CLI tools. You should write tests and use them to verify your work. The task will be complete when the app is fully implemented, with all tests passing. Output <promise>COMPLETE</promise> when finished." --max-iterations 50 --completion-promise "COMPLETE"
Once it's done you can open the app in Xcode, test it in a simulator, play with it and iterate a bit, and then send it to your phone!
Editing to add because I can't edit the original post: I think the limiting factor here might be the concurrent sprites limit. It seems like if you're on pay-as-you-go then you can only have 3 running concurrently, and have to subscribe to get 10.
In particular, I'm really excited about the extremely fast startup time and checkpointing. I'm curious whether anyone knows of alternatives in this space?
I had a few issues:
1. manpath: can't set the locale; make sure $LC_* and $LANG are correct
I suspect this is due to the sprite inheriting the locale from my local machine? It's easy to get around with some updates to .bashrc (e.g. exporting LANG/LC_ALL to a locale the sprite actually has).
2. The $SHELL environment variable in my sprite is `/opt/homebrew/bin/fish`. I use fish on my local (Mac + Homebrew) machine, so it seems to have been inherited from there. It's nice to be using fish in the sprite, but it seems weird that $SHELL in the sprite points to a non-existent path. It's also slightly concerning that a local env var gets transferred to a remote machine without my explicit permission; I have some sensitive env vars locally.
    // shell-related env vars the client looks for in the local environment
    var envVars []string
    shellEnvVars := []string{
        "BASH_VERSION",
        "ZSH_VERSION",
        "FISH_VERSION",
        "KSH_VERSION",
        "tcsh",
        "SHELL",
    }
It's also reading terminfo. It's not handling absolute paths to shells properly, though. If you want to skip this, running `sprite exec -tty /bin/bash --login` or similar avoids the magic.
I've been thinking a lot about how to run agents (and skills) securely while giving them a lot of powerful capabilities.
I recently used their macaroons library to turn arbitrary API keys (e.g. for Stripe's API) into macaroons. I route requests for an upstream host (like Stripe) through Envoy as a MITM proxy that injects the real creds after verifying the macaroon.
It is such a powerful pattern. I'm always worried about leaking sensitive keys through prompt-injection attacks (or just sending them to Anthropic), but in this model you can attenuate the keys (both capabilities and validity window) client-side. The Envoy proxy lives inside my flycast network, so it can't be accessed externally.
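For anyone who hasn't played with macaroons, here's a rough sketch of the shape of the pattern. It uses the general-purpose gopkg.in/macaroon.v2 library rather than Fly's, and the caveat strings, root key, and checker are made up for illustration; the point is that the agent only ever holds an attenuated token, while the proxy holds the root key plus the real upstream credential:

    // Sketch of client-side attenuation with gopkg.in/macaroon.v2 (not Fly's
    // macaroon library; the caveat formats here are invented). The proxy side
    // holds rootKey and the real API key; the agent only gets the narrowed token.
    package main

    import (
        "fmt"
        "log"
        "strings"
        "time"

        macaroon "gopkg.in/macaroon.v2"
    )

    func main() {
        rootKey := []byte("proxy-only-secret-example-key") // never leaves the proxy

        // Proxy mints a broad macaroon standing in for "may use the Stripe key".
        m, err := macaroon.New(rootKey, []byte("stripe-key-v1"), "envoy-proxy", macaroon.V2)
        if err != nil {
            log.Fatal(err)
        }

        // Client-side attenuation: narrow capabilities and validity window before
        // handing the token to the agent. Caveats can only make it stricter.
        m.AddFirstPartyCaveat([]byte("host = api.stripe.com"))
        m.AddFirstPartyCaveat([]byte("method = GET"))
        m.AddFirstPartyCaveat([]byte("expires = " + time.Now().Add(10*time.Minute).Format(time.RFC3339)))

        // Proxy side: verify the signature and every caveat, then (if it passes)
        // inject the real upstream credential into the forwarded request.
        err = m.Verify(rootKey, func(caveat string) error {
            switch {
            case caveat == "host = api.stripe.com", caveat == "method = GET":
                return nil
            case strings.HasPrefix(caveat, "expires = "):
                t, perr := time.Parse(time.RFC3339, strings.TrimPrefix(caveat, "expires = "))
                if perr != nil || time.Now().After(t) {
                    return fmt.Errorf("token expired")
                }
                return nil
            }
            return fmt.Errorf("unknown caveat %q", caveat)
        }, nil)
        if err != nil {
            log.Fatal("reject request: ", err)
        }
        fmt.Println("macaroon ok; proxy would now add the real Stripe key upstream")
    }

The real version obviously uses Fly's library and real caveat semantics, but the key property is the same: the root key and real creds never leave the proxy, and anything the agent could leak is already narrowed and short-lived.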
It would be so cool if fly built something like this into sprites.dev (though I can see how it would be spooky to have fly install their own certs for stripe, etc...)
Tokenizer is an explicit proxy though right?
My use case is very similar, but I wanted a transparent proxy so I could run unmodified scripts. It is a tricky design decision though.
I also mount a little FUSE filesystem that mints a macaroon on read (with a shorter lifetime; probably inspired by y'all, but I forget from where).
I work on realtime collaboration on markdown files (currently in Obsidian), which has become a shared-context substrate for agents, skills, etc. Our own company workspace has skills with scoped access to Fly, Stripe, Gmail, etc. We're definitely drinking the file-over-app, personal-software-for-teams Kool-Aid, so the problem space for us includes access control and auditing.
Love your work :)
We can also attach Macaroons to Fly Machines and Sprites for configurable ambient privileges, something I've wanted us to expose as a feature for a very long time.
What is the contract with sprites? Is it just "built with Linux" but not promising Linux? Or is it more like a Machine, but y'all control the container image?
I don't want to get too far into the rest of the details only because I'm writing this up for next week. They're not that interesting technically, but they're a really big deal for us in other ways.
Given their principled take on only trusting full-VM boundaries, I doubt they moved any of the storage stack into the untrusted VM.
So maybe a virtio-block device passing discard through to some underlying CoW storage stack, or maybe virtio-fs if it's running on Cloud Hypervisor (ch) instead of Firecracker (fc)? It would be interesting to hear more about the underlying design choices and trade-offs.
Edit: from their website, "Since it's just ext4, you won't run into weird edge cases like you might with NFS or FUSE mounts. You can happily use shared memory files, for example, so you can run SQLite in all its modes." So it's a virtio block device supporting discard that's exposed to the VM. Interesting; fc doesn't support virtio discard passthrough, and support in ch is still in progress...
But maybe you have parts of the stack that don't need to be trusted inside the VM somehow? Looking forward to the article.
All the cool technical stuff aside - this, for me, was the standout line of the article
… yes? We have a few wrapper scripts around worktree operations that copy some Docker volumes (pg data, bundle cache, etc.) from the base and spin up an entirely new stack on different ports with a host alias. We don’t have to install any deps beyond that, because we copied over the Ruby gems bundle cache and we’re using Yarn PnP + “zero installs” for client-side deps.
Maybe I’ve been isolated from The World for too long, but this sounds … unhealthy.
API downtime is a semi-frequent occurrence, as are transient API errors and slowness.
I've also had a ticket open with support for weeks due to rampant billing issues. For instance, a destroyed instance still shows up in my usage report as actively accruing billed time, and at a rate faster than is even possible (something like 2 hours billed for every 1 actual hour that has passed).
They've released two new products in the AI space, this and Phoenix.new, and my worry is that they're prioritizing new products over making what they already have good and reliable.