Posted by twapi 19 hours ago
Got the latest v0.3.8 version from the list here: https://api.darkbloom.dev/v1/releases/latest
Three binaries and a Python file:
- darkbloom (Rust)
- eigeninference-enclave (Swift)
- ffmpeg (from Homebrew, lol)
- stt_server.py (a simple FastAPI speech-to-text server using mlx_audio)
The good parts: all three binaries are signed with a valid Apple Developer ID and have the Hardened Runtime enabled.
Bad parts:
- Binaries aren't notarized.
- Enrolls the device for remote MDM using micromdm.
- Downloads and installs a complete Python runtime from Cloudflare R2 (supply-chain risk).
- Calls PT_DENY_ATTACH to make debugging harder.
- Collects device serial numbers.
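For context on the PT_DENY_ATTACH point: it's a classic macOS anti-debugging measure where a process asks the kernel to refuse any future debugger attachment. The binaries presumably do this natively via ptrace(2), but the same call can be sketched from Python with ctypes (the function name here is mine; only the `PT_DENY_ATTACH = 31` request value is macOS's):

```python
import ctypes
import ctypes.util
import sys

PT_DENY_ATTACH = 31  # macOS-only ptrace(2) request; not part of POSIX


def deny_attach() -> bool:
    """Ask the kernel to refuse debugger attachment (macOS only).

    Returns True if the call succeeded; False (no-op) on other platforms,
    where this request number doesn't exist.
    """
    if sys.platform != "darwin":
        return False
    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    # int ptrace(int request, pid_t pid, caddr_t addr, int data);
    return libc.ptrace(PT_DENY_ATTACH, 0, 0, 0) == 0
```

After this call, attaching lldb to the process fails, which is exactly why its presence in a closed binary you're asked to run is a red flag.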
TL;DR: No, not touching that.
I believe the idea was that people could submit big workloads, the server would slice them up and then have the clients download and run a small slice. You as the computer owner would then get some payout.
Interesting to see this coming back again.
I added Python execution support via Pyodide (CPython compiled to WASM) and worked on a bunch of other random stuff like WebLLM inference during my time there.
Apart from Distributive, there's also the "Golem network", "Salad", "Koii" and various other similar projects.
---
I'm not sure I'm convinced by the "Uber for compute" use case with compute buyers and compute workers (sellers), but if you're a university with 1000 Windows machines across your computer labs, it'd be nice to leverage that compute for running research or something, idk - especially with the price of RAM / cloud offerings these days...
We’ve been building something similar for image/video models for the past few months, and it’s made me think distribution might be the real bottleneck.
It’s proving difficult to get enough early usage to reach the point where the system becomes more interesting on its own.
Curious how others have approached that bootstrap problem. Thanks in advance.
But trying it out, it still needs work: I couldn't download a model successfully (and their list of nodes at https://console.darkbloom.dev/providers suggests this is typical).
And as a casual user, it took me some digging to find out that to cash out you need a Solana address (providers > earnings).
What could possibly go wrong?
When your Mac is idle (no inference requests), it consumes minimal power — you don't lose significant money waiting for requests. The electricity costs shown only apply during active inference.
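The idle-vs-active claim is easy to sanity-check with back-of-the-envelope numbers. All figures below are my assumptions for illustration (roughly plausible for an Apple Silicon Mac mini), not measurements from the project:

```python
# Back-of-the-envelope electricity cost for idle vs. active inference.
# All constants are assumed values, not measured ones.
IDLE_WATTS = 7.0          # assumed near-idle draw of an M-series Mac mini
ACTIVE_WATTS = 60.0       # assumed draw under sustained inference load
RATE_USD_PER_KWH = 0.15   # assumed electricity price


def daily_cost(active_hours: float) -> float:
    """USD per day, given how many of the 24 hours are spent on inference."""
    idle_hours = 24.0 - active_hours
    kwh = (idle_hours * IDLE_WATTS + active_hours * ACTIVE_WATTS) / 1000.0
    return kwh * RATE_USD_PER_KWH
```

Under these assumptions a fully idle day costs about $0.025 and a fully loaded one about $0.22, which is consistent with the claim that waiting for requests costs you very little.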
Text models typically see the highest and most consistent demand. Image generation and transcription requests are bursty — high volume during peaks, quiet otherwise.