Posted by AndrewDucker 10/25/2025
I have memory and training enabled. What I can objectively say about Atlas is that I’ve been using it and I’m hooked. It’s made me roughly twice as productive — I solved a particular problem in half the time because Atlas made it easy to discover relevant information and make it actionable. That said, affording so much control to a single company does make me uneasy.
With my repo connected via the GitHub app, I asked Atlas about a problem I was facing. After a few back-and-forth messages, it pointed me to a fork I might eventually have found myself — but only after a lot more time and trial-and-error. Maybe it was luck, but being able to attach files, link context from connected apps, and search across notes and docs in one place has cut a lot of friction for me.
I read this in my TUI RSS reader lol
So I guess the only logical next step for Big AI is to destroy the web, once they have squeezed every last bit out of it. Or at least make it dependent on them. Who needs news sites when OpenAI can do it? Why blog - just prompt your BlogLLM with an idea. Why comment on blogs - your agent will do it for you. All while avoiding child porn with 97% accuracy - something human-curated content surely cannot be trusted to do.
So I am 0% surprised.
I imagine a future where websites (like news outlets or blogs) will have something like a “100% human created” label on them. It will be a mark of pride for them to show off, and they’ll attract users because of it.
> The amount of data they're gathering is unfathomable.
The author suggests that GPT continuously reads information like typed keystrokes, etc., but I don't see why that's implied. And it wouldn't be new either, with tools like Windows Replay.
Because "it wouldn't be new either, with tools like Windows Replay".
Genuinely, why would they leave any data on the table if they don't have to? That's the entire purpose of the browser.
Because extracting and using that data may not be trivial, practically and legally speaking.
I am sure they use the chat input you send them for training. But saying that they transfer every website you visit to their servers, or that they monitor your keyboard input by continuously streaming your window or your keystrokes to them, is another matter. All of that would be no small technical feat, and it would be noticeable in the generated traffic.
My belief is that they simply hacked together an AI web-browser in the least amount of time with the least amount of effort, so that they can showcase another use for AI.
That would be a much simpler explanation than them building a personal surveillance tool that wants to know what you've typed and keeps track of the time you've spent looking for shoes.
But this feels truly dystopian. Here on HN we are all in our bubble: we know that AI responses are very prone to error and mostly great at mimicking. We can (more or less) tell when to use them and when not to, but when I talk to non-tech people in a normal city far from any tech hub, most of them treat ChatGPT as an all-knowing factual authority.
They have no idea of the conscious and unconscious bias in the responses, based on how we ask the questions.
Unfortunately, I think these people are the majority.
If you combine all that with a shady Silicon Valley CEO who is under historic pressure to make OpenAI profitable after $64 billion in funding, and who regularly flirts with the US president, it seems only logical to me that exactly what the author described is the goal. No matter the cost.
With AI progress seemingly stagnating and the main improvement being that the cost of producing AI responses keeps going down, this almost seems like the only way for OpenAI to win.
The article does read a bit like a conspiracy theory to me, though.
GUIs emerged to make it easier for users to tell their computers what to do. You could just look at the screen and know that File > Save would save the file instead of remembering :w or :wq. They minimized friction and were polished to no end by companies like MSFT and AAPL.
Now that technology has gotten to the point where our computers can bridge the gap between what we said and what we meant reasonably well, we can go back to CLIs. We keep the speed and expressiveness of typing but without the old rigidity. I honestly can't wait for a future where we evolve interfaces into things we previously only dreamt of.
Particularly when you throw in agentic capabilities, where it can feel like a roll of the dice whether the LLM decides to use a special-purpose tool or just wings it and spits out its probabilistic best guess.
The bridge would come from layering natural-language interfaces on top of deterministic backends that actually do the tool calling. We already have models fine-tuned to generate JSON that conforms to a schema. MCP is a good example of this kind of stuff: it lets tools be discovered along with how to use them.
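A rough sketch of what I mean, with made-up names (get_weather, TOOLS, dispatch are illustrative, not MCP's or any vendor's actual API): the model only decides which tool to call and with what arguments, emitted as JSON; a deterministic dispatcher validates the call against the advertised schema and runs plain code.

```python
import json

def get_weather(city: str) -> str:
    # Stand-in for a real API call; purely illustrative.
    return f"Sunny in {city}"

# Registry of callable tools plus the schemas advertised to the model
# (MCP-style tool discovery, very loosely).
TOOLS = {
    "get_weather": {
        "fn": get_weather,
        "schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def dispatch(tool_call_json: str) -> str:
    """Validate and execute a model-emitted tool call such as
    {"tool": "get_weather", "arguments": {"city": "Berlin"}}."""
    call = json.loads(tool_call_json)
    tool = TOOLS.get(call["tool"])
    if tool is None:
        raise ValueError(f"Unknown tool: {call['tool']}")
    args = call["arguments"]
    missing = set(tool["schema"]["required"]) - args.keys()
    if missing:
        raise ValueError(f"Missing arguments: {missing}")
    return tool["fn"](**args)

# The model decides *what* to call; deterministic code decides *how* it runs.
print(dispatch('{"tool": "get_weather", "arguments": {"city": "Berlin"}}'))
```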
Of course, the real bottleneck would be running a model capable of this locally. I can't run any of the models that are actually capable of it on a typical machine. Until then, we're effectively digital serfs.
can never go back
Sounds like the browser did you a favor. Wonder if she'll be suing.