Posted by ianrahman 2 days ago

Claude in Chrome (claude.com)
314 points | 191 comments
CAP_NET_ADMIN 2 days ago|
Let's spend years plugging holes in V8, splitting browser components into separate processes, and improving sandboxing, and then just plug an LLM with debugging enabled into Chrome. Great idea. Last time we had such a great idea it was lead in gasoline.
int32_64 2 days ago||
It's clear the endgame is to cook AI into Chrome itself. Get ready for some big antitrust lawsuit that settles in 20 years when Gemini is bundled too conveniently and all the other players complain.

https://developer.chrome.com/docs/ai/built-in-apis

spyder 2 days ago|||
"that settles in 20 years "

And at that point it will be a fight mostly between AI lawyers :-)

donohoe 2 days ago||
Which will settle it quickly under the watchful AI judiciary.
blubber 2 days ago||
Two AI agents fighting couldn't end up in an infinite loop?
SaltyBackendGuy 1 day ago||
More billable hours.
thrance 2 days ago||||
We'll soon get Manifest V4 that, for "security reasons", somehow includes clauses banning any AI other than Gemini from using the browser.
arthurcolle 2 days ago|||
That's too easy. It'll be more subtle. Compatibility MCP-Gemini for "security" so it slurps in more data from all the other AIs
bigyabai 2 days ago||
And then a flat fee whenever anyone links-out from your proprietary, inescapable MCP backend. It's a legal free money hack!
arthurcolle 2 days ago||
That would suck. Is Google going to just eat all of this?
bigyabai 2 days ago||
I'm not sure, all of my devices run a Firefox fork.
Forgeties79 2 days ago||||
“For your safety and protection from potentially malicious and unverified vendors.”
inquirerGeneral 2 days ago|||
[dead]
fragmede 1 day ago|||
20 years? It's already there! https://gemini.google/overview/gemini-in-chrome/
sheepscreek 2 days ago|||
This made me want to laugh so hard. I think this idea came from the same place as beta testing “Full Autopilot” with human guinea pigs. Great minds…

Jokes aside, Anthropic's CEO commands a tad more respect from me for taking a more principled approach and sticking to it (at least better than their biggest rival). Also for inventing the code-agent-in-the-terminal category.

stingraycharles 2 days ago|||
All things considered, Anthropic seems like they're doing most things the right way, and seem to be focused on professional use more than OpenAI and Grok, and Opus 4.5 is really an incredibly good model.

Yes, they know how to use their safety research as marketing, and yes, they got a big DoD contract, but I don’t think that fundamentally conflicts with their core mission.

And honestly, some of their research they publish is genuinely interesting.

IAmGraydon 2 days ago||||
>Also for inventing the code agent in the terminal category.

Not even close. That distinction belongs to Aider, which was released 1.5 years before Claude Code.

sheepscreek 2 days ago|||
Oh cool, I didn’t know that.
bpavuk 2 days ago|||
let me be a date-time nerd for a split second:

- Claude Code released Introducing Claude Code video on 24 Feb 2025 [0]

- Aider's oldest known GitHub release, v0.5.0, is dated 8 Jun 2025 [1]

[0]: https://www.youtube.com/watch?v=AJpK3YTTKZ4

[1]: https://github.com/Aider-AI/aider/releases/tag/v0.5.0

jeeeb 2 days ago|||
That’s 8th of June 2023 not 2025.. almost 2 years before Claude Code was released.

I remember evaluating Aider and Cursor side by side before Claude Code existed.

social_quotient 2 days ago||||
Hey your dates are wildly wrong... It’s important people know aider is 2023. 2 years before CC
IAmGraydon 2 days ago||||
Wrong. So wrong, in fact, that I’m wondering if it’s intentional. Aider was June 2023.
bpavuk 2 days ago||
sorry, editing it out! thanks for pointing out.

EDIT: I was too late to edit it. I have to keep an eye on what I type...

CuriouslyC 2 days ago||||
Dario is definitely more grounded than Sam. I thought Anthropic would get crowded out between Google and the Chinese labs, but they might be able to carve out a decent niche as the business-focused AI for people who are paranoid about China.

They didn't invent terminal agents really though; Aider was the pioneer there. They just made it more autonomous (Aider could do multiple turns with some config, but it was designed to have a short leash since models weren't so capable when it was released).

sheepscreek 1 day ago||
I acknowledged the point about Aider being the first terminal agent in a different comment. I am equally surprised at how well Anthropic has done compared to the rest of the pack (Mistral comes to mind: had a head start but seems to have lost its way).

They have definitely found a good product-market fit with white collar working professionals. Opus 4.5 strikes the best balance between smarts and speed.

mejutoco 2 days ago||||
> Also for inventing the code agent in the terminal category.

Maybe I am wrong, but wasn't aider first?

stingraycharles 2 days ago|||
They are not at all the same thing. For starters, even to this day, it doesn't support ReAct-based tool calling.

It’s more like an assistant that advises you rather than a tool that you hand full control to.

Not saying that either is better, but they’re not the same thing.

CuriouslyC 2 days ago||
Aider was designed to do single turns because LLMs were way worse when it was created. That being said, Aider could do multiple turns of tool calling if command confirmation was turned off, and it was trivial to configure Aider to do multiple turns of code generation by having a test suite that runs automatically on changes and telling Aider to implement functionality to get the tests to pass. It's hard-coded to only do 3 autonomous turns by default, but you can edit that.
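For the curious, the setup described above can be sketched as an `.aider.conf.yml` fragment. This is an assumption-laden sketch, not a verified config: the flag names follow aider's documented `--auto-test`, `--test-cmd`, and `--auto-commits` options, and `pytest` is just a stand-in for whatever test runner the project uses.

```yaml
# Sketch of an .aider.conf.yml for the multi-turn loop described above.
# Assumes a pytest suite; substitute your own test command.
auto-test: true       # re-run the suite automatically after every edit
test-cmd: pytest -q   # failing output is fed back to the model for another turn
auto-commits: true    # commit each accepted change so turns are easy to unwind
```

With something like this, "run until the tests pass" falls out of the loop even though each individual turn is short.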
stingraycharles 1 day ago||
Yes but unfortunately it appears that Aider development has completely stopped. There had been an MCP support PR that was open for over half a year, many people validated it and worked on it but the project owner never responded.

It’s a bit of a shame, as there are plenty of people that would love to help maintain it.

I guess sometimes that’s just how things go.

afro88 2 days ago|||
Aider wasn't really an agentic loop before Claude Code came along
mejutoco 2 days ago|||
I would love to know more. I used aider with local models and it behaved like Cursor in agent mode. Unfortunately I don't remember exactly when (6+ months ago at least). What was your experience with it?
afro88 2 days ago||
I was a heavy user, but stopped using it mid 2024. It was essentially providing codebase context and editing and writing code as you instructed - a decent step up from copy/paste to ChatGPT but not working in an agentic loop. There was logic to attempt code edits again if they failed to apply too.

Edit: I stand corrected though. Did a bit of research and aider is considered an agentic tool by late 2023 with auto lint/test steps that feedback to the LLM. My apologies.

ErikBjare 2 days ago|||
Plenty of aider-era tools were though, like my own gptme which is about as old as aider
Workaccount2 2 days ago|||
Anthropic isn't any more moral or principled than the other labs. They just saw the writing on the wall that they can't win, and instead decided to focus purely on coding and then sell their shortcomings as some kind of socially conscious effort.

It's a bit like the poorest billionaire flexing how environmentally aware they are because they don't have a 300ft yacht.

sheepscreek 1 day ago|||
Maybe - they’ve certainly fooled me if that’s the case. I took them at face value and so far they haven’t done anything out of character that would make me wary of them.

Their models are good. They have not trained on user prompts from day one (Google is the worst offender here amongst the three). They have been shockingly effective with “Claude Skills”. They contributed MCP to the world and encouraged its adoption, and have now done the same for Skills, turning it into a standard.

They are happy to be just the tool that helps people get the job done.

JohnnyMarcone 1 day ago|||
How do you know?
conradev 2 days ago|||
The cycle must not be broken https://xkcd.com/2044/
markm248 2 days ago|||
All I want is a secure system where it's easy to do anything I want. Is that so much to ask?
mFixman 2 days ago||||
The thing AIs miss about the internet from the late 2000s and early 2010s was having so much useful data available, searchable, and scrapable. Even things like "which of my friends are currently living in New York?" are impossible to find now.

I always assumed this was a once-in-history event. Did this cycle of data openness and closure happen before?

N_Lens 2 days ago|||
XKCD for everything!
nine_k 2 days ago|||
Do you mean you let Claude Code and other such tools act directly on your personal or corporate machine, under your own account? Not in an isolated VM or box?

I'm shocked, shocked.

Sadly, not joking at all.

mattwilsonn888 2 days ago||
Why not? The individual grunt knows it is more productive and the managers tolerate a non-zero amount of risk with incompetent or disgruntled workers anyways.

If you have clean access privileges then the productivity gain is worth the risk, a risk we could argue is only marginally higher. If the workplace also provides the system, then the efficiency in auditing operations makes up for any added risk.

croes 2 days ago||
Incompetent workers are liable. Who’s liable when AI makes a big mistake?
N_Lens 2 days ago||
Incompetent workers are liable.
croes 2 days ago||
But who is when AI makes errors because it’s running automatically?
ayewo 2 days ago||
> But who is when AI makes errors because it’s running automatically?

I'm guessing that would be the human that let the AI run loose on corporate systems.

m4rtink 2 days ago|||
You are being mean to lead - it solved serious issues with engines back then and enabled their use in many useful ways, likely saving more people than it poisoned.
jon-wood 2 days ago|||
The fossil fuel industry really doesn’t need a devil’s advocate, they’ve got more lawyers than you can shake a stick at already.
etskinner 2 days ago|||
Do you have evidence that it saved more people than it poisoned?
dmix 2 days ago||
Innovation in the short term might trump longer term security concerns.

All of these have big warning labels like it's alpha software (ie, this isn't for your mom to use). The security model will come later... or maybe it will never be fully solved.

onionisafruit 2 days ago||
> this isn't for your mom to use

many don’t realize they are the mom

yeahthereiss 2 days ago||
[flagged]
yellow_lead 2 days ago||
So Claude seems to have access to a tool to evaluate JS on the webpage, using the Chrome debugger.

However, don't worry about the security of this! There is a comprehensive set of regexes to prevent secrets from being exfiltrated.

const r = [/password/i, /token/i, /secret/i, /api[_-]?key/i, /auth/i, /credential/i, /private[_-]?key/i, /access[_-]?key/i, /bearer/i, /oauth/i, /session/i];
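A quick sketch of why a denylist like this is anything but comprehensive. This ports the quoted patterns from JS to Python for illustration; the example names below (`pwd`, `passwort`, `stripe_sk`) are hypothetical, not taken from the extension.

```python
import re

# The denylist quoted above, ported from JS to Python for illustration.
PATTERNS = [re.compile(p, re.I) for p in [
    r"password", r"token", r"secret", r"api[_-]?key", r"auth",
    r"credential", r"private[_-]?key", r"access[_-]?key",
    r"bearer", r"oauth", r"session",
]]

def is_blocked(name: str) -> bool:
    """True if a variable/field name trips any denylist pattern."""
    return any(p.search(name) for p in PATTERNS)

# Names it catches:
assert is_blocked("apiKey") and is_blocked("session_id")
# Trivial misses: abbreviations, other languages, or a secret *value*
# stored under an innocuous name.
assert not is_blocked("pwd")
assert not is_blocked("passwort")    # German for "password"
assert not is_blocked("stripe_sk")   # holds "sk_live_...", sails through
```

And of course filtering on key names does nothing once the value has been copied into a plain string, which is exactly what an agent evaluating arbitrary JS can do.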

ramon156 2 days ago||
"Hey claude, can you help me prevent things like passwords, token, etc. being exposed?"

"Sure! Here's a regex:"

Aeolun 2 days ago|||
It already had the ability to make curl commands. How is this more dangerous?
yellow_lead 2 days ago||
Curl doesn't have my browser's cookies?
Aeolun 1 day ago||
It does have all the secrets in your env
edg5000 2 days ago||
> comprehensive

ROFL

prescriptivist 2 days ago||
I used this in earnest yesterday on my Zillow saved listings. I prompted it to analyze the listings (I've got about 70 or so saved) and summarize the most recent price drops for each one, and it mostly failed at the task. It gave the impression that it paginated through all the listings, but I don't think it actually did. I think the mechanism by which it works, clicking links and taking screenshots and analyzing them (as opposed to consuming the DOM), must be some kind of token-efficiency trade-off, and it seems not great at the task.

As a reformed AI skeptic I see the promise in a tool like this, but this is light years behind other Anthropic products in terms of efficacy. Will be interesting to see how it plays out though.

fouc 2 days ago||
sometimes I find that it helps if my prompt directly names the tools that I want the LLM to use, i.e. I'll tell it "do a WebFetch of so and so" etc.
csomar 2 days ago|||
LLMs struggle with time (or don't really have a concept of time). So unless that is addressed, they'll always suck at these tasks, since you need synchronization. This is why text/CLI was a much better UX to work with. stdin/stdout is the best way to go, but someone has to release something to keep pumping numbers.
jetbalsa 2 days ago|||
would be interesting to see if this works in playwright using your existing browser's remote control APIs (Using claude code via the playwright mcp)
baby_souffle 2 days ago||
I've had extensive luck doing just that. Spend some time doing the initial work to see how the page works, then give the LLM examples of the HTML that should be clicked for the next page, or the CSS classes that indicate the details you're after, and then ask for a playwright-to-yaml tool.

I've been doing this for a few months now to keep an eye on prices at local grocery stores. I had to introduce random jitter so AliExpress wouldn't block me from trying to dump my decade+ of order history.

jstummbillig 2 days ago|||
> light years behind

So... give it another 3 months? (I assume we are talking AI light years)

jazzyjackson 2 days ago||
What an asinine strategy to feed screenshots (does it scroll down and render the whole page?)

I had good luck treating HTML as XML and having Claude write xpath queries to grab useful data without ingesting the whole damn DOM
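The XPath approach is cheap to sketch with nothing but the stdlib. Everything below is made up for illustration (the listing snippet, the class names); real pages are rarely well-formed XML, so you'd need an HTML-to-XML cleanup pass (e.g. lxml.html) before a parse like this works.

```python
import xml.etree.ElementTree as ET

# Hypothetical snippet standing in for a saved-listings page.
page = """
<html><body>
  <div class="listing"><span class="addr">12 Oak St</span>
    <span class="price">$450,000</span></div>
  <div class="listing"><span class="addr">98 Elm Ave</span>
    <span class="price">$612,500</span></div>
</body></html>
"""

root = ET.fromstring(page)
# One small XPath per field instead of feeding the whole DOM to the model;
# ElementTree supports this limited XPath subset out of the box.
prices = [el.text for el in root.findall(".//span[@class='price']")]
addrs = [el.text for el in root.findall(".//span[@class='addr']")]
print(dict(zip(addrs, prices)))
# → {'12 Oak St': '$450,000', '98 Elm Ave': '$612,500'}
```

The model only has to write the two query strings; the token cost of the page itself never enters the context.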

yoan9224 2 days ago||
The security concerns here are valid, but I think people are missing the practical reality: we've already crossed the Rubicon with tools like Claude Code and Playwright MCP.

I've been running Claude Code with full system access for months - it can already read files, execute bash, git commit, push code. Adding browser automation via an extension is actually less risky than what we're already doing with terminal access.

The real question isn't "should we give AI browser access" - it's "how do we design these systems so the human stays in the loop for critical decisions?" Auto-approving every action defeats the purpose of the safety rails.

Personally, I use it with manual approval for anything touching credentials or payments. Works great for QA testing and filling out repetitive web forms.

nicoburns 2 days ago||
> we've already crossed the Rubicon with tools like Claude Code and Playwright MCP.

"we" isn't everybody here. A lot of us simply don't use these tools (I currently still don't use AI assistance at all, and if/when I do try it, I certainly won't be giving it full system access). That's a lot harder to avoid if it's built into Chrome.

jazzyjackson 2 days ago|||
I would personally feel a lot better with a container-first approach, like attaching an LLM to QubesOS windows, so the non-deterministic chaos monkey can only affect what you want it to affect.

This is easy enough with dev containers, but once you let a model interact with your desktop, you should be really damn confident in your backup, rollback, and restore methods, and whether an errant rm -rf or worse has any way to affect those.

IME even if someone has a cloud drive and a local external drive backup they've never actually tested the recovery path, and will just improvise after an emergency.

A snapshotted ZFS system pushing to something like rsync.net (which also stores snapshots) would do it, but I don't know of any time-machine-in-a-box solutions like the one Apple offers (is there still a Time Machine product, actually? Maybe it's as easy as using that, since a factory-reset Mac can restore from a Time Machine snapshot).

what-the-grump 2 days ago||
People are using these tools to write code, complete tasks, etc. Your worry is that, what... it will rm -rf /* something?

I am not trying to be funny, but Claude itself is smart enough to catch destructive actions and double check. It's not going to wake up and start eating your machine. Googling a random script and running it, which is what a lot of people do, in many cases leads to worse outcomes; here at least you can ask the model what might happen to my computer.

PessimalDecimal 16 hours ago|||
> your worry is that what... It will rm -rf /* something?

There are many, many stories exactly like this. E.g. from two weeks ago https://www.reddit.com/r/technology/comments/1pe0s4x/googles....

jazzyjackson 2 days ago|||
Pushing your repo is all well and good, I just don't understand why someone would expose their user files on a personal machine
redactsureAI 1 day ago||
I actually have a full browser plus AI agent containerized. Is that something you think might be fun open-sourced?

I have a product but also to build it I have some test environments I had to make to debug things.

Basically I have a full AI agent in one container that can control a browser in another container. Was considering open sourcing, any thoughts?

subsection1h 2 days ago|||
> we've already crossed the Rubicon with tools like Claude Code

I install all dev tools and project dependencies on VMs and have done so since 2003.

> Adding browser automation via an extension is actually less risky than what we're already doing with terminal access.

I won't even integrate my password manager (pass) into a browser.

redactsureAI 1 day ago||
Same, I find it clumsy to actually build and run code on your host system.

Most I will do is run containers on my local machine but all dev is in cloud.

alexdobrenko 2 days ago||
what do you mainly use it for?
buremba 2 days ago||
After Claude Code couldn't find the relevant operation in either the CLI or the public API, it went through its Chrome integration to open up the app in Chrome.

It grabbed my access tokens from cookies and curled into the app's private API for its UI. What an amazing time to be alive, can't wait for the future!

ethmarks 2 days ago||
Security risks aside, that's pretty remarkable problem solving on Claude's part. Rather than hallucinating an answer or just giving up, it found a solution by creatively exercising its tools. This kind of stuff was absolute sci-fi a few years ago.
sethops1 2 days ago|||
Or this behavior is just programmed, the old fashioned way.
roxolotl 2 days ago|||
This is one of the things that’s so frustrating about the AI hype. Yes there are genuinely things these tools can do that couldn’t be done before, mostly around language processing, but so much of the automation work people are putting them up to just isn’t that impressive.
jgilias 2 days ago||
But it’s precisely the automation around LLMs that make the end result itself impressive.
ramoz 2 days ago||||
A sufficiently sophisticated agent, operating with defined goals and strategic planning, possesses the capacity to discover and circumvent established perimeters.
csomar 2 days ago|||
Honestly, I think many hallucinations are the LLM's way of "moving forward". For example, the LLM will try something, not ask me to test (and it can't test it itself), and then carry on to say "Oh, this shouldn't work, blabla, I should try this instead."

Now that LLMs can run commands themselves, they are able to test and react to feedback. But lacking that, they'll hallucinate things (e.g. hallucinate tokens/API keys).

braebo 2 days ago||
Refusing to give up is a benchmark optimization technique with unfortunate consequences.
csomar 2 days ago||
I think it's probably more complex than that. Humans have constant continuous feedback which we understand as "time". LLMs do not have an equivalent to that and thus do not have a frame of reference to how much time passed between each message.
abigail95 2 days ago||
That's fantastic
simonw 2 hours ago||
I used this to figure out a Cloudflare setting by navigating their dashboard for me, it worked well: https://simonwillison.net/2025/Dec/22/claude-chrome-cloudfla...
arjunchint 2 days ago||
All this talk of safety, but they are using the debugger permission, which exposes your device to vulnerabilities, slows down your machine, and gets you captchas/bot-detected on sites.

Working on a competing extension, rtrvr.ai, but we are more focused on vibe-scraping use cases. We engineered ours to avoid these sensitive/risky permissions, and Claude should too, especially when releasing for end consumers.
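For context, the capability in question is a single line in the extension's manifest. A minimal MV3 sketch (the name/version here are placeholders, not the real extension's manifest):

```json
{
  "manifest_version": 3,
  "name": "agent-extension-sketch",
  "version": "0.1",
  "permissions": ["debugger", "tabs"]
}
```

With `debugger` granted, the extension can call `chrome.debugger.attach` and issue CDP commands like `Runtime.evaluate` against a tab, which is what triggers the "is debugging this browser" infobar and, on some sites, bot detection.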

dangus 2 days ago||
Nice ad. Love your 2004 disemvoweled company name.
arjunchint 1 day ago||
We got this domain on the cheap, haha!

Goal is to raise funding and then fill back the vowels

dangus 19 hours ago||
Yikes. If I was an investor that statement would be a red flag on your decision making capability.

fetchai.app, $65 renews at $23/year

obtainer.net, .dev, .app, .tech, all available at standard prices

retrieveragent.io, .tech, .app, .dev, all at standard prices

This is like 10 minutes of effort on my end.

andybak 2 days ago||
I asked it to do a task that doesn't require spreadsheets but it keeps asking for access to my google drive.
arjunchint 1 day ago||
It uses Google Sheets as a "memory layer" for complex workflows, to orchestrate multi-tab sub-agents: for example, per row, an independent sub-agent tab is launched to execute and write back new columns.

We only request the drive.file permission, to create new sheets or access ones explicitly granted to us via the Google Drive Picker.

andybak 1 day ago||
That needs to be explained at the point the permission is requested
xnx 2 days ago||
Good to see. Google only has this feature in experimental mode for $125/month subscribers: https://labs.google.com/mariner/landing

Google allows AI browser automation through Gemini CLI as well, but it's not interactive and doesn't have ready access to the main browser profile.

londons_explore 2 days ago||
It's part of antigravity for free. Just make a blank workspace and ask it to use a browser to do X and it'll start chrome and start navigating, clicking, scrolling, etc.
qingcharles 2 days ago||
Yeah, I only found it by accident when I asked it to make a change against my web app and it modified the code then popped open Chrome and started trying different common user/pass combinations to log into the app so it could validate the changes.
grugagag 2 days ago||
Wait, it was brute-forcing passwords? This sounds extremely dangerous in the wrong hands. Seems like a boon for malicious users.
londons_explore 1 day ago|||
A human in that position would try a few obvious things like "admin/admin" and then go hunting in the readme to see if a specific user is documented for testing and then maybe go to the user database and see if there is an existing admin user and maybe reset the password to get in.
qingcharles 2 days ago|||
Yeah, I didn't see what passwords it typed but it was trying usernames like "testuser" and stuff :p
CPLX 2 days ago||
Chrome's DevTools MCP has been excellent in my experience for web development and testing. Claude code can jump in there and just pretend to be a user and do just about everything, including reading console output.

I'm not using it for the use case of actually interacting with other people's websites, but for this purpose, it's been fantastic.

crashabr 2 days ago||
I've been wondering if it was a good replacement for the playwright mcp, at least for chrome-only testing.
s900mhz 2 days ago|||
I personally replaced my Playwright MCP with this. It seems to use less context and is generally more reliable.
gedy 2 days ago|||
After a lot of trouble trying to get playwright mcp to work on Linux, I'm curious if this works better
greatgib 1 day ago||
What amazes me is all these websites like Expedia or Airbnb that will open an MCP API when they carefully prevented scraping and the like for years.

Nowadays, a lot of the things people are impressed by agents doing don't even really need AI, just a way for us to get data and API access back from (web)apps. Something we commonly had like 15 years ago.

For example, when looking at possible destinations for a trip, I would just need to be able to make the given request without spending an hour on the website.

esafak 2 days ago|
Essentially a replacement for Chrome Devtools MCP, liberating your context from MCP definitions. However, the reviews are poor: https://chromewebstore.google.com/detail/claude/fcoeoabgfene...