Posted by a24venka 9 hours ago

Launch HN: Spine Swarm (YC S23) – AI agents that collaborate on a visual canvas (www.getspine.ai)
Hey HN! We're Ashwin and Akshay from Spine AI (https://www.getspine.ai). Spine Swarm is a multi-agent system that works on an infinite visual canvas to complete complex non-coding projects: competitive analysis, financial modeling, SEO audits, pitch decks, interactive prototypes, and more. Here's a video of it in action: https://www.youtube.com/watch?v=R_2-ggpZz0Q.

We've been friends for over 13 years. We took our first ML course together at NTU, in a part of campus called North Spine, which is where the name comes from. We went through YC in S23 and have spent about 3 years building Spine across many product iterations.

The core idea: chat is the wrong interface for complex AI work. It's a linear thread, and real projects aren't linear. Sure, you can ask a chatbot to reference the financial model from earlier in the thread, or run research and market sizing together, but you're trusting the model to juggle that context implicitly. There's no way to see how it's connecting the pieces, no way to correct one step without rerunning everything, and no way to branch off and explore two strategies side by side. ChatGPT was a demo that blew up, and chat stuck around as the default interface, not because it's the right abstraction. We thought humans and agents needed a real workspace where the structure of the work is explicit and user-controllable, not hidden inside a context window.

So we built an infinite visual canvas where you think in blocks instead of threads. Each block is our abstraction on top of AI models. There are dedicated block types for LLM calls, image generation, web browsing, apps, slides, spreadsheets, and more. Think of them as Lego bricks for AI workflows: each one does something specific, but they can be snapped together and composed in many different ways. You can connect any block to any other block, and that connection guarantees context is passed regardless of block type. The whole system is model-agnostic, so in a single workflow you can go from an OpenAI LLM call, to an image generation model like Nano Banana Pro, to Claude generating an interactive app, each block using whatever model fits best. Multiple blocks can fan out from the same input, analyzing it in different ways with different models, then feed their outputs into a downstream block that synthesizes the results.
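
To make the block-and-connection idea concrete, here's a rough sketch of how a canvas graph like this could be modeled. It's purely illustrative: the class names, block kinds, and fields are simplified stand-ins, not our actual API.

    from dataclasses import dataclass, field

    @dataclass
    class Block:
        """One canvas block: a typed unit of work bound to a specific model."""
        id: str
        kind: str            # e.g. "llm", "image", "browse", "app", "slides"
        model: str           # each block can use a different provider/model
        prompt: str
        output: str | None = None

    @dataclass
    class Canvas:
        """Blocks plus directed connections; a connection passes upstream output as context."""
        blocks: dict[str, Block] = field(default_factory=dict)
        edges: list[tuple[str, str]] = field(default_factory=list)  # (upstream_id, downstream_id)

        def context_for(self, block_id: str) -> list[str]:
            # Collect outputs of every upstream block, regardless of block type.
            return [self.blocks[src].output for src, dst in self.edges
                    if dst == block_id and self.blocks[src].output is not None]

    # Fan-out: two analyses of the same idea, synthesized into a deck downstream.
    canvas = Canvas()
    for b in [Block("idea", "llm", "gpt-4o", "Summarize the product idea"),
              Block("moat", "llm", "claude-sonnet", "Critique the competitive moat"),
              Block("sizing", "browse", "gpt-4o", "Estimate market size from web sources"),
              Block("deck", "slides", "claude-sonnet", "Build a pitch deck from the analyses")]:
        canvas.blocks[b.id] = b
    canvas.edges = [("idea", "moat"), ("idea", "sizing"), ("moat", "deck"), ("sizing", "deck")]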

The first version of the canvas was fully manual. Users entered prompts, chose models, ran blocks, and made connections themselves. It clicked with founders and product managers because they could branch in different directions from the same starting point: take a product idea and generate a prototype in one branch, a PRD in another, a competitive critique in a third, and a pitch deck in a fourth, all sharing the same upstream context. But new users didn't want to learn the interface. They kept asking us to build a chat layer that would generate and connect blocks on their behalf, to replicate the way we were using the tool. So we built that, and in doing so discovered something we didn't expect: the agents were capable of running autonomously for hours, producing complete deliverables. It turned out agents could run longer and keep their context windows clean by delegating work to blocks and storing intermediary context on the canvas, rather than holding everything in a single context window.

Here's how it works now. When you submit a task, a central orchestrator decomposes it into subtasks and delegates each to specialized persona agents. These agents operate on the canvas blocks and can override default settings, primarily the model and prompt, to fit each subtask. Agents pick the best model for each block and sometimes run the same block with multiple models to compare and synthesize outputs. Multiple agents work in parallel when their subtasks don't have dependencies, and downstream agents automatically receive context from upstream work. The user doesn't configure any of this. You can also dispatch multiple tasks at once and the system will queue dependent ones or start independent ones immediately.
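
As a rough sketch of that scheduling behavior (simplified and with illustrative names, not our actual orchestrator): dependency-free subtasks run concurrently, and dependent ones wait until their inputs exist.

    import asyncio

    async def run_subtask(name: str, context: dict) -> str:
        # Stand-in for a persona agent operating on its canvas blocks.
        await asyncio.sleep(0.1)
        return f"{name} done (saw {sorted(context)})"

    async def orchestrate(subtasks: dict[str, list[str]]) -> dict[str, str]:
        """subtasks maps a name to its dependency names; independent work runs in parallel."""
        results: dict[str, str] = {}
        pending = dict(subtasks)
        while pending:
            # Ready = every dependency already has a result.
            ready = [t for t, deps in pending.items() if all(d in results for d in deps)]
            if not ready:
                raise ValueError("dependency cycle")
            outs = await asyncio.gather(
                *(run_subtask(t, {d: results[d] for d in pending[t]}) for t in ready))
            for t, out in zip(ready, outs):
                results[t] = out
                del pending[t]
        return results

    # Research and market sizing run in parallel; the deck waits for both.
    print(asyncio.run(orchestrate({"research": [], "sizing": [], "deck": ["research", "sizing"]})))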

Agents aren't fully autonomous by default. Any agent can pause execution and ask the user for clarification or feedback before continuing, which keeps the human in the loop where it matters. And once agents have produced output, you can select a subset of blocks on the canvas and iterate on them through the chat without rerunning the entire workflow.
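
That pause-and-resume loop is conceptually simple; a minimal sketch (purely illustrative, not our implementation) is an agent that yields a question and continues once it gets the answer:

    def agent_step():
        """Yields a clarifying question, receives the user's answer, then continues."""
        answer = yield "Should the financial model assume a 12- or 24-month runway?"
        return f"Building the model with a {answer} runway"

    gen = agent_step()
    question = next(gen)        # the agent pauses and surfaces a question
    try:
        gen.send("24-month")    # the user replies; the agent resumes
    except StopIteration as done:
        print(done.value)       # "Building the model with a 24-month runway"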

The canvas gives agents something that filesystems and message-passing don't: a persistent, structured representation of the entire project that any agent can read and contribute to at any point. In typical multi-agent systems, context degrades as it passes between agents. The canvas addresses this because agents store intermediary results in blocks rather than trying to hold everything in memory, and they leave explicit structured handoffs designed to be consumed efficiently by the next agent in the chain. Every step is also fully auditable, so you can trace exactly how each agent arrived at its conclusions.
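
For example, a structured handoff can be as small as a record left next to the block outputs, so the next agent knows where to look instead of re-reading everything (the fields here are hypothetical):

    handoff = {
        "from_agent": "market-research",
        "blocks": ["sizing", "competitors"],   # where the full results live on the canvas
        "summary": "TAM ~$4B; three credible incumbents; pricing clusters at $20-50/seat.",
        "open_questions": ["Confirm EU regulatory constraints"],
        "for_agent": "deck-builder",
    }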

We ran benchmarks to validate what we were seeing. On Google DeepMind's DeepSearchQA, 900 questions spanning 17 fields, each structured as a causal chain in which every step depends on completing the previous one, Spine Swarm scored 87.6% on the full dataset with zero human intervention. For the benchmark we used a subset of block types relevant to the questions (LLM calls, web browsing, table) and removed irrelevant ones like document, spreadsheet, and slide generation. We also disabled human clarification so agents ran fully independently. The agents were not just auditable but also state of the art. The auditability also exposed actual errors in an older benchmark (GAIA Level 3): cases where the expected answer was wrong or ambiguous, which you'd never catch with a black-box pipeline. We detail the methodology, architecture, and benchmark errors in the full writeup: https://blog.getspine.ai/spine-swarm-hits-1-on-gaia-level-3-...

Benchmarks measure accuracy on closed-ended questions. Turns out the same architecture also leads to better open-ended outputs like decks, reports, and prototypes with minimal supervision. We've seen early users split into two camps: some watch the agents work and jump in to redirect mid-flow, others queue a task and come back to a finished deliverable. Both work because the canvas preserves the full chain of work, so you can audit or intervene whenever you want.

A good first task to try: give it your website URL and ask for a full SEO analysis, competitive landscape, and a prioritized growth roadmap with a slide deck. You'll see multiple agents spin up on the canvas simultaneously. People have also used it for fundraising pitch decks with financial models, prototyping features from screenshots and PRDs, competitive analysis reports, and deep-dive learning plans that research a topic from multiple angles and produce structured material you can explore further.

Pricing is usage-based credits tied to block usage and the underlying models used. Agents tend to use more credits than manual workflows because they're tuned to get you the best possible outcome, which means they pick the best blocks and do more work. Details here: https://www.getspine.ai/pricing. There's a free tier, and one honest caveat: we sized it to let you try a real task, but tasks vary in complexity. If you run out before you've had a proper chance to explore, email us at founders@getspine.ai and we'll work with you.

We'd love your feedback on the experience: what worked, what didn't, and where it fell short. We're also curious how others here approach complex, multi-step AI work beyond coding. What tools are you using, and what breaks first? We'll be in the comments all day.

75 points | 60 comments
jpbryan 9 hours ago|
Why do I need a canvas to visualize the work that the agents are doing? I don't want to see their thought process, I just want the end product like how ChatGPT or Claude currently work.
a24venka 8 hours ago|
That is definitely a valid way of using Spine as well. You can just work in the chat and consume the deliverables similar to how you would in other tools.

The canvas helps when you want to trace back why an output wasn't what you expected, or if you're curious to dig deeper.

Even beyond auditability, the canvas also helps agents do better work: they can generate in parallel, explore branches, and pass context to each other in a structured way (especially useful for longer-running tasks).

avree 5 hours ago||
I got dizzy from the star effect when scrolling the website.
gravity2060 8 hours ago||
Is it possible to build self-improving swarm loops? (ie swarm x builds a thing, swarm y critiques and improves x's work, repeat…)
a24venka 8 hours ago|
We've only partially explored this so far, but it's a great suggestion.

The canvas architecture naturally supports this kind of loop since agents can already read and build on each other's outputs — so the plumbing is there, it's more about building the right orchestration on top. Definitely something we're exploring.

visekr 7 hours ago||
whoa congrats on the launch. lol I launched my visual canvas for agents today too. I went more in a collaborative canvas IDE, agent orchestration direction. But very cool to see your take on it

https://getmesa.dev is mine

embedding-shape 7 hours ago||
Rather than just finding a way to link your own product, why don't you do the rest of us a favor and provide a comparison at least, so it becomes a tiny bit informative instead of just spammy?

Nothing wrong with sharing your own stuff, but at least contribute something back to the submission you're commenting on.

poly2it 7 hours ago||
It looks interesting, but is it really more efficient than a tiling window manager?
dude250711 8 hours ago||
Dark UI pattern: pretends that it is immediately usable, only to redirect you to sign-up.
a24venka 8 hours ago|
Fair point, we should be more upfront about the sign-up step. Given that tasks are long-running and token-intensive, we do need an auth barrier to protect against abuse, but we can definitely do a better job signaling that before you hit the canvas.
garciasn 8 hours ago||
Or, just show us in an animated GIF how the product works in practice. Then, should we somehow find benefit in a visual representation of a swarm's workflow, we could sign up rather than having to, unintuitively, scroll down to watch a YouTube video.

e: 'be' to 'we'; oops.

a24venka 8 hours ago||
Good call and noted. We're working on making the product experience more visible upfront.
esafak 8 hours ago||
Is the value prop that I can see what the agent is doing? This is not the way: https://youtu.be/R_2-ggpZz0Q?t=158

How am I supposed to get anything out of this? Consider that agents are going to get faster and run more and more tasks in parallel. This is not manageable for a human to follow in real time. I can barely keep up with one agent in real-time, let alone a swarm.

What I could see being useful is if you monitored the agents and notified me when one is in the middle of something that deserves my attention.

a24venka 8 hours ago|
This is a fair point; we are exploring progressive disclosure on the canvas to better utilize the space and make the key artifacts more readily visible. We do have other panels (chat, task, and deliverable) that offer alternate views of what the agents did and the key deliverables.

Beyond human auditability, the canvas helps the agents do a better job by generating in parallel, exploring branches and passing context to each other in a structured way.

socialinteldev 6 hours ago||
[dead]
bhekanik 9 hours ago||
[dead]
levelsofself 4 hours ago||
We run 13 AI agents in production on a $24/month VPS. Key things we learned: 1) File-based memory beats databases for LLM ops (portability, readability, no ORM). 2) Preflight checks on every edit prevent 90% of incidents. 3) Hash-chained audit logs are cheap insurance. 4) Ollama for chat responses, Claude for complex tasks - saves 95% on API costs. Open sourced the governance layer: https://github.com/levelsofself/mcp-nervous-system
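
To illustrate point 3: hash chaining just means each log entry embeds a hash of the previous one, so tampering anywhere breaks the chain. A stripped-down sketch of the idea (not the repo's actual code):

    import hashlib, json, time

    def append_entry(log: list, event: dict) -> None:
        # Each entry embeds the previous entry's hash, forming a tamper-evident chain.
        prev_hash = log[-1]["hash"] if log else "0" * 64
        body = {"ts": time.time(), "event": event, "prev": prev_hash}
        body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        log.append(body)

    def verify(log: list) -> bool:
        prev = "0" * 64
        for entry in log:
            body = {"ts": entry["ts"], "event": entry["event"], "prev": entry["prev"]}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True
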
agenticbtcio 7 hours ago|
[flagged]
a24venka 6 hours ago||
Spot on. The persistence layer is a huge part of what makes the canvas work.

For failures, we handle it at multiple levels: first, standard retries and fallbacks to alternate models/providers. If that fails, the agents look for alternate approaches to accomplish the same task (e.g. falling back to web search instead of browser use).

For completeness, you can also manually re-run or edit individual blocks if they fail (though the agents may or may not consider this depending on where they are in their flow).
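
Roughly, the layered fallback looks like this (all names here are made up for illustration, not our actual code):

    import random

    class TransientError(Exception):
        pass

    def call_model(model: str, task: str) -> str:
        # Stand-in for a real provider call that sometimes fails.
        if random.random() < 0.3:
            raise TransientError(model)
        return f"{task} via {model}"

    def run_block(task: str, models=("primary-model", "fallback-model"), retries=2) -> str:
        for model in models:                    # level 2: alternate providers/models
            for _ in range(retries):            # level 1: plain retries
                try:
                    return call_model(model, task)
                except TransientError:
                    continue
        # level 3: the agent switches strategy, e.g. web search instead of browser use
        return f"{task} via an alternate approach"

    print(run_block("Extract pricing from competitor sites"))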

myrak 5 hours ago||
[flagged]