
Posted by mtricot 4 days ago

Show HN: Airbyte Agents – context for agents across multiple data sources

I’m Michel, co-founder and CEO of Airbyte (https://airbyte.com/). We’ve spent the last six years building data connectors. Today we're launching Airbyte Agents (https://docs.airbyte.com/ai-agents/), a unified data layer for agents to discover information and take action across operational systems.

Here’s a quick walkthrough: https://www.youtube.com/watch?v=ZosDytyf1fg

As agents move into real workflows, they need access to more tools (e.g. Slack, Salesforce, Linear). That means a ton of API plumbing: authentication, pagination, filters, schema handling, and matching entities across systems.

Most MCPs don’t fix this. They’re thin wrappers over APIs, so agents inherit their weak primitives and still get it wrong most of the time, especially when working across tools.

An even deeper issue is that APIs assume you already know what to query (think endpoints, object IDs, fields), whereas agents usually start one step earlier: they first need to discover what matters before they can even start reasoning.

So we built Airbyte Agents to be a context layer between your Agents and all of your data. The core of this is something we call Context Store: a data index optimized for agentic search, populated by our replication connectors. All that work on data connectors the last six years comes in handy here!

This gives agents a structured way to discover data, while still allowing them to read and write directly to the upstream system when needed.
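Airbyte hasn't published the Context Store internals, but as a rough mental model (all names and records below are hypothetical, not Airbyte's actual API), think of it as a cross-system index that lets an agent discover related entities before issuing any upstream API calls:

```python
# Hypothetical sketch of a cross-system context index (NOT Airbyte's real API).
# Records replicated from different sources are indexed together so an agent
# can discover related entities (joined here on email) without API round trips.

salesforce = [{"system": "salesforce", "account": "Acme", "email": "jo@acme.io"}]
zendesk = [{"system": "zendesk", "ticket": 101, "email": "jo@acme.io", "status": "open"}]

def build_index(records):
    """Index records by a shared join key (email) across systems."""
    index = {}
    for r in records:
        index.setdefault(r["email"], []).append(r)
    return index

def discover(index, email):
    """Return everything known about an entity across all systems."""
    return index.get(email, [])

index = build_index(salesforce + zendesk)
print(discover(index, "jo@acme.io"))  # one Salesforce record + one Zendesk record
```

The point of the sketch: discovery happens against the pre-built index, and only then does the agent need to touch a specific upstream API for reads or writes.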

What got us working on this was an insane trace from an agent we were migrating to our new SDK. It was supposed to answer "which customers are at risk of leaving this quarter?" The trace had 47 steps. Most were API calls. The agent first had to find a bunch of accounts, then map them to the right customers, then look for tickets, bla bla... and when the agent finally responded, the answer sounded ok, but was wrong. Not only that, it was excruciatingly slow. So we had to do something about it.

That 47-step agent is one example of a question where Airbyte Agents does particularly well. Other examples:

- "Show me all enterprise deals closing this month with open support tickets."
- "Find every support ticket that doesn't have a GitHub issue opened."

Some of these might sound simple, but the quality of the answer changes dramatically when the agent doesn’t have to assemble all that context at runtime.

Once we had an early version of the product, I spent a weekend building a benchmark harness to see if it worked; also, I just like writing benchmarks :). I compared calling the Airbyte Agents MCP vs calling a bunch of vendor MCPs directly, testing both retrieval and search.

For the sake of simplicity, I used token consumption as the unit of measure. I think that's a good proxy for how well agents are working: a failing agent (like the one that took 47 steps) will churn through lots of tokens while getting nowhere, while a successful one will get straight to the point.

Here's what I found when measuring: for Gong, it used up to 80% fewer tokens than their own MCP, for Zendesk up to 90% fewer, for Linear up to 75%, and for Salesforce up to 16% (Salesforce’s own SOQL does a good job here).
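For reference, the percentages above are just relative token reductions between two runs. A minimal sketch of the arithmetic (the token counts below are invented for illustration, not taken from the benchmark):

```python
# Token savings as a percentage reduction: 100 * (1 - ours / baseline).
# The counts are made up for illustration; only the formula is the point.

def token_savings(baseline_tokens: int, our_tokens: int) -> float:
    """Percent fewer tokens than the baseline vendor-MCP run."""
    return 100 * (1 - our_tokens / baseline_tokens)

# e.g. a vendor-MCP run burning 50,000 tokens vs 10,000 via a context layer
print(token_savings(50_000, 10_000))  # prints 80.0
```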

Of course there is the usual obvious bias: we are the builders of what we are benchmarking. So we made the test harness public: https://github.com/airbytehq/airbyte-agents-benchmarks. Feel free to poke at it, and please tell us what you find if you do!

It's still early and some parts are rough, but we wanted to share this with the community asap. We'd love to hear from people building agents:

- Are you indexing data ahead of time, or letting the agent call APIs live?
- How are you matching entities across systems?

Would also love to hear any thoughts, comments, or ideas of how we could make this better, and if there are obvious things we’re missing. For now, we’re excited to keep building!

149 points | 47 comments
swyx 4 days ago|
(former employee here) congrats Michel! so glad to see you guys adapting to the AI age so well (and using the crap out of Devin!)

hmm so airbyte agents could serve as a form of MCP gateway, or a key building block of an MCP gateway, which btw is how anthropic uses mcp themselves for all their internal apps https://www.youtube.com/watch?v=CD6R4Wf3jnY&t=1s&pp=0gcJCd4K...

i think my most sad/interesting observation about ai engineers is that many ai apps are super data hungry, but many don't have the necessary data engineering background to even know they need an airbyte or what tradeoffs to make in an etl pipeline. would love a "data engineering for ai engineers" type braindump session from someone from airbyte at AIE (https://ai.engineer/cfp )

aaronsteers 3 days ago||
Hey, swyx! Great seeing you here.

> airbyte agents could serve as a form of MCP gateway

Exactly! And a single set of tools for agents to access both realtime (direct reads/writes) and cached (Context Store) data, hopefully bringing the best access path to each different use case.

> would love a "data engineering for ai engineers" type braindump ... at AIE

Great idea - we have a booth at AIE, and we'll submit there for a talk. Mario will reach out to you about this. :)

jeanlaf 3 days ago|||
Thanks swyx! We'd love to do that session "data engineering for ai engineers", will make you an intro to the right person in the team.
swyx 2 days ago||
saw your email, will get back!
sails 3 days ago||
I think this is right (a big gap), but I don't think data companies even know what the right shape is for AI.

It's definitely not old-school ETL + dbt + a BI tool; it might be something like this, but it's very early.

aaronsteers 2 days ago||
Agreed 100% - we're still super early in this journey, gathering data from our own usage and from our customers' feedback.
slurpyb 3 days ago||
Your billing support email forwards to a Google group which rejects the email entirely. So I embedded my question inside the website's sales enquiry form and received multiple rounds of emails that couldn't be further from human.

It's not why we started using PostHog, but it definitely sealed the deal when you see how simple and reliable that experience is.

davinchia 3 days ago||
Sorry for that experience. We had a bad billing support routing issue and it’s since been fixed. Thank you for calling it out. We'll aim to do better!
mtricot 3 days ago||
Let me see what's up and fix that!
jscheel 4 days ago||
I feel like we've been working in parallel here :) We are using PyAirbyte (hi aaronsteers) for our users to connect their data sources to our agents. We originally wanted to use the airbyte white-label platform, but the team said that it was being deprecated. I think this really drives home just how crucial it is to have a clear model for accessing your data, and Airbyte has been great at that for quite a while.
aaronsteers 3 days ago|
Hello, Jared! Small world! Yes, we did deprecate our old PbA (Powered by Airbyte) offering, but in many ways our new Agents and Embedded offering is a more robust and agent-friendly successor to that older offering.

I am happy to hear you are still getting value out of PyAirbyte! If you do try out Airbyte Agents, please let us know how it goes! We are always listening to feedback and would love to hear from you as you explore the new tools and capabilities.

dennispi 3 days ago||
We built something similar: an A/B testing framework that measures Unblocked's impact on real AI coding agents.

It spawns agent CLIs (Claude Code, Codex, Cursor, GitHub Copilot) with and without Unblocked's MCP server attached, then statistically compares the results: https://github.com/unblocked/unblocked-harness-compare

We likewise measured token savings, wall-clock time, # tool calls, and # turns.
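The "statistically compares" part of a harness like this can be as simple as a Welch t-test on per-run token counts. A stdlib-only sketch (the sample counts below are invented, not from either harness):

```python
# Welch's t statistic for two independent samples of per-run token counts,
# using only the stdlib. Sample data is invented for illustration.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t for unequal-variance samples: (mean_a - mean_b) / sqrt(va/na + vb/nb)."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

with_mcp = [9_800, 11_200, 10_400, 9_900, 10_700]
without_mcp = [15_200, 16_900, 14_800, 17_300, 16_100]

t = welch_t(without_mcp, with_mcp)
print(f"t = {t:.2f}")  # a large positive t: the with-MCP runs used fewer tokens
```

In practice you'd feed the actual run logs in and compare the statistic against a t distribution for a p-value, but the core comparison is just this.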

SachitRafa 10 hours ago||
How would you know if the agent is not querying stale data here?
jessewmc 3 days ago||
Looks interesting!

If I'm reading correctly, the indexing (Context Store) is neutral/unopinionated? How does it select fields for indexing?

Have you done any testing on guided indexing, or metadata layers on top of the data? My experience so far on similar work is that getting data in front of an agent isn't enough context to get useful/reliable answers enough of the time. I.e. _what_ you index, and how you signpost for agents, becomes really important (unless your data is super clean I guess). This does look like a good foundation for that kind of tooling though!

aaronsteers 3 days ago|
Hi, @jessewmc. Thanks for your reply. Regarding your points:

> If I'm reading correctly, the indexing (Context Store) is neutral/unopinionated? How does it select fields for indexing?

While we haven't yet published details on the backend implementation, I can say that our implementation performs very well without needing to prioritize specific fields for indexing. We aim for large text fields to perform decently and retrieval based on small/compressible fields like ints to be fast. (More to come on this in the coming months.)

> Have you done any testing on guided indexing, or metadata layers on top of the data?

We've been testing with different data scales and shapes. Nothing detailed to share yet, but performance has (so far) never itself become the bottleneck in our agent testing. (The LLM thinking itself is often the bottleneck.)

> My experience so far on similar work is that getting data in front of an agent isn't enough context to get useful/reliable answers enough of the time.

Airbyte has rich metadata on our upstream connectors' data models, which I think helps us a lot to deliver helpful context to the agent. Another option, when optimizing for specific use cases, is to build your own agent tools on top of our Agent SDK. This lets you make the calls organic and build the tools in a way that makes natural sense to the agent, regardless of the source shape or which system(s) the data is coming from.

> This does look like a good foundation for that kind of tooling though!

We agree! Thanks again for sharing your thoughts here.

juancs 1 day ago||
Great launch btw! I have some questions if you don't mind

You mentioned that performance was never an issue; I'm really intrigued how this is achieved.

I have 3 general questions:

1. How big (estimate in bytes) and complex were the test data sources? I couldn't find this in the benchmark repo.

2. How is the business context managed? In the blog "Airbyte Agents: A New Era for Airbyte" it was mentioned handling the business context, but in the context layer docs it only talks about schema discovery (I got a bit confused).

3. When you said performance was never an issue, do you mean the user always got the answer they were looking for?

nerdright 3 days ago||
This is such a great direction Airbyte is taking, and congrats on the launch! I think you're better positioned for this opportunity than most people realize, given your reputable brand and your deep expertise in ETL. It's honestly a natural progression for Airbyte as far as the current AI landscape goes. Kudos to you and the team!

(We use airbyte at my company, although we self-host it.)

aaronsteers 3 days ago|
Thanks! Really appreciate the kind words. Looking forward to seeing what our amazing community builds with these new tools.
andai 3 days ago||
The prompts you mentioned here sound like SQL. Is there any way to run actual SQL on these systems? Is "agents need to poke around endlessly" a symptom of the fact that there isn't a way to run an actual query?

(I'd guess there is actually SQL at the bottom layer, but there's no way to talk to it?)

sho 3 days ago||
That's actually the approach we took with https://gentility.ai/ - we either provide almost-raw SQL query access to the DBs themselves or we synthesize from API into DuckDB via parquet and make that available to the agent to just directly query. It works well - my philosophy is to give agents the sharpest tools you can, and SQL is the best tool there is.

I understand the instinct to try to make a proprietary moat around it all but I think the pattern is useful and obvious enough that all big orgs will be doing something very similar within 5 years or so.
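The "synthesize API data into a local SQL engine and let the agent query it" pattern sho describes is easy to sketch. Using stdlib sqlite3 here as a stand-in for DuckDB/parquet so the example is self-contained (the table and records are invented):

```python
# Load API-shaped records into a local SQL store, then let the agent answer
# questions with one declarative query instead of many API round trips.
# sqlite3 stands in for DuckDB here; the schema is invented for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tickets (id INTEGER, account TEXT, status TEXT)")
con.executemany(
    "INSERT INTO tickets VALUES (?, ?, ?)",
    [(1, "Acme", "open"), (2, "Acme", "closed"), (3, "Initech", "open")],
)

# One query replaces a paginate-and-filter loop over a REST API.
open_by_account = con.execute(
    "SELECT account, COUNT(*) FROM tickets "
    "WHERE status = 'open' GROUP BY account ORDER BY account"
).fetchall()
print(open_by_account)  # [('Acme', 1), ('Initech', 1)]
```

With DuckDB the load step would typically be `read_parquet` over the synthesized files instead of `executemany`, but the agent-facing interface is the same: hand it SQL.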

aaronsteers 3 days ago||
Helpful feedback, thank you! And your instincts are spot on. As of now, we have API based search, with filter predicates and field selection in JSON. While we haven't published anything on the backend implementation, I can say it does use a cloud-native storage medium where the filters are indeed pushed down as SQL. We want to be careful about if/when we offer direct SQL access, specifically because SQL dialects can differ drastically and we wouldn't want to break consumers if/when we change which dialect(s) are supported.

That said, please stay tuned - and thank you again for this valuable feedback.

thecopy 3 days ago|
Super interesting idea! Congrats on the launch. Context is definitely something that is lacking in my experience. I'm always frustrated when an agent cannot answer business-related questions, and I compare them to coding agents, which seem to be able to answer everything. The difference is that a coding agent has the context right there at its fingertips, while for business it's gated behind a bunch of services and custom data models. Context is king :)

How do you handle encryption and confidentiality? I'm building in this space too (MCP gateway https://www.gatana.ai/), which already has semantic search for tool outputs, and ensuring encryption and confidentiality is not trivial.
