Posted by namukang 4/7/2025
1) this projects' chrome extension sends detailed telemetry to posthog and amplitude:
- https://storage.googleapis.com/cobrowser-images/telemetry.pn...
- https://storage.googleapis.com/cobrowser-images/pings.png
2) this project includes source for the local mcp server, but not for its chrome extension, which is likely bundling https://github.com/ruifigueira/playwright-crx without attribution
super suss
1. Yes, the extension uses an anonymous device ID and sends an analytics event when a tool call is used. You can inspect the network traffic to verify that zero personalized or identifying information is sent.
I collect anonymized usage data to get an idea of how often people are using the extension in the same way that websites count visitors. I split my time between many projects and having a sense of how many active users there are is helpful for deciding which ones to focus on.
2. The extension is completely written by me, and I wrote in this GitHub issue why the repo currently only contains the MCP server (in short, I use a monorepo that contains code used by all my extensions and extracting this extension and maintaining multiple monorepos while keeping them in sync would require quite a bit of work): https://github.com/BrowserMCP/mcp/issues/1#issuecomment-2784...
I understand that you're frustrated with the way I've built this project, but there's really nothing nefarious going on here. Cheers!
Knee-jerk reactions aren't helpful. Yes, too much tracking is not good, but some tracking is definitely important to improving a product over time and focusing your efforts.
This is showstopper.
Noble reasons won’t matter.
Spyware perception.
Any other mode of operation is morally bankrupt.
I don't sign a term sheet when I order at McDonalds but you can be damn sure they count how many big macs I order. Does that make them morally bankrupt? Or is it just a normal business operation that is actually totally reasonable?
Yes, it does.
It's 2025 - we want informed consent and voluntary participation with the default assumption that no, we do not want you watching over our shoulders, and no, you are not entitled to covertly harvest all the data you want and monetize that without notifying users or asking permissions. The whole ToS gotcha game is bullshit, and it's way past time for this behavior to stop.
Ignorance and inertia bolstering the status quo doesn't make it any less wrong to pile more bullshit like this onto the existing massive pile of bullshit we put up with. It's still bullshit.
If they were tracking my identity across sites and actually selling it to the highest bidder that's one thing that we'll definitely agree on. This is so so far from that.
You're welcome to build and use your own MCP browser automation if you're so hostile to the developer that built something cool and free for you to use.
Any covert, involuntary, automatic surveillance of a person for any reason whatsoever should have a court order and legal authority behind it - it's gross and exposes the target to vulnerabilities they're not cognizant of.
For telemetry tracking user behavior to be useful at all, it's got to be associated with a user. The idea of telemetry anonymization is marketing speak for "we obfuscated it, we know deanonymization is trivial, but people are stupid, especially regulators."
Any anonymization done is sufficiently obfuscated such that corporate asses get covered in the case of any regulatory investigation. There's no legitimate, mathematically valid anonymization of user data that you could do without destroying the information that you're trying to get in the first place through these tools. This means that any aggregation of user data useful to a malicious actor will inevitably be compromised - the second Posthog or Amplitude become a desirable target, they'll get pwned and breached, and much handwringing will be done, and there will be no recourse or recompense for damages done.
The only strategy to prevent the dissemination of surveillance data is not to collect it in the first place. It should be illegal to collect the data without voluntary, user initiated participation, and any information collected should be ephemeral with regular inspection to ensure compliance. Any violation of user privacy should result in crippling fines, something like 5% of the value of the company per user per day of violation - if you can't responsibly manage the data, you shouldn't be collecting it.
This means all the automatic continuous development a/b testing intrusive corner cutting corporate bullshit would have to stop. Continually leaking surveillance data to malicious actors year over year with no repercussions has thoroughly demonstrated that people cannot be trusted with safekeeping data.
I will build and use my own automation if I need to, based on products that don't covertly, involuntarily, ignorantly surveil their users, without even being aware of potential for harm, and I'll continue to point it out when it shows up in random projects and products, because it's wrong and it should stop.
We should stop embracing the things that enshittify the world, and stop sacrificing things like "other people's privacy" for convenience or profit.
Keep in mind, extensions can update themselves at any time, including when they're bought out by someone else. In fact, I bet that's a huge draw... imagine buying an extension that "can read and modify data on all your websites" and then pushing an update that, oh I dunno, exfiltrates everyone's passwords from their gmail. How would most people even catch that?
DO NOT have any extensions running by default except "on click".
There should be at least some kind of static checker of extensions for their calls to fetch or other network APIs. The Web is just too permissive with updating code, you've got eval and much more. It would be great if browsers had only a narrow bottleneck through which code could be updated, and would ask the user first.
(That wouldn't really solve everything since there can be sleeper code that is "switched on" with certain data coming over the wire, but better than what we have now.)
I think the permission system should be much more complicated so that the user gets a prompt that explains what is needed and why.
Furthermore there should be [paid] independent reviewers to sign off on extensions. This adds a lot of credibility, specially to a first time publication without users. That would also give app stores someone to talk to before deleting something. Nefarious actors working for app stores can have their credibility questioned.
Keep in mind, extensions can update themselves at any time
GP suggested only installing extensions you can build yourself from source. Most extensions that auto update do so via the Chrome store. If you install an extension from source, that won't happen.You’d be surprised. It describes all the extensions I use.
"Avoids bot detection and CAPTCHAs by using your real browser fingerprint."
Yeah, not really.
I've used a similar system a few weeks back (one I wrote myself), having AI control my browser using my logged in session, and I started to get Captcha's during my human sessions in the browser and eventually I got blocked from a bunch of websites. Now that I've stopped using my browser session in that way, the blocks eventually went away, but be warned, you'll lose access yourself to websites doing this, it isn't a silver bullet.
Also I assume this extension is pretty obvious so it wont take long for CF bot detection to see it the same as playwrite or whatever else.
Hence why projects like this exist: https://github.com/Kaliiiiiiiiii-Vinyzu/patchright. They hide the debugging part from JavaScript.
Screen readers need to see a de-bullshittified, machine-readable version of the site + this is required by law sometimes, and generally considered a nice thing to enable -> the site becomes not just screen-reader friendly, but end user automation-friendly in general.
(I don't know how long this will hold, though. LLMs are already capable of becoming a screen reader without any special provisions - they can make sense of the UI the same way a sighted person can. I wouldn't trust them much now, but they'll only get better.)
> These Captchas are really bad at detecting bots and really good at falsely labelling humans as bots.
As a human it feels that way to you. I suspect their false-positive rate is very low.
Of course, you may well be right that you get pinged more because of your style of browsing, which sux.
source: I work in a team that uses this kind of bot detection and yes, it works. And yes we do our best to keep false positives down
Back when I was playing Call of Duty 4, I got routinely accused of cheating because some people didn't think it was possible to click the mouse button as fast as I did.
To them it looked like I had some auto-trigger bot or Xbox controller.
I did in fact just have a good mouse and a quick finger.
If CloudFlare mislabels you as a bot, however, you may be unable to access medical services, or your bank account, or unable to check in for a flight, stuff like that. Actual important things.
So yes, I think it's not unreasonable to expect more from CF. The fact that some humans are routinely mischaracterized as bots should be a blocker level issue.
I've never failed the CF bot test so don't know how that feels. Though I have managed to get to level 8 or 9 on Google's ReCaptcha in recent times, and actually given up a couple of times.
Though my point was just it's gonna boil down to a duck test, so if you walk like a duck and quack like a duck, CF might just think you're a duck.
Yes, this is a big signal they use.
> adding some more human like noise to the mouse
Yes, this is a standard avoidance strategy. Easier said than done. For every new noise generation method, they work on detection. They also detect more global usage patterns and other signals, so you'd need to immitate the entire workflow of being human. At least within the noise of their current models.
"Avoids bot detection and CAPTCHAs" - Sure asshole, but understand that's only in place because of people like you. If you truly need access to something, ask for an API, may you need to pay for it, maybe you don't. May you get it, maybe the site owner tells you to go pound sand and you should take that as you're behaviour and/or use case is not wanted.
Most of the automated misbehavior is businesses doing it to other businesses - in many cases, it's direct competition, or a third party the competition outsources it to. Hell, your business is probably doing it to them too (ask the marketing agency you're outsourcing to).
> If you truly need access to something, ask for an API, may you need to pay for it, maybe you don't.
Like you'd give it to me when you know I want it to skip your ads, or plug it to some automation or a streamlined UI, so I don't have to waste minutes of my life navigating your bloated, dog-slow SPA? But no, can't have users be invisible in analytics and operate outside your carefully designed sales funnel.
> May you get it, maybe the site owner tells you to go pound sand and you should take that as you're behaviour and/or use case is not wanted.
Like they have a final say in this.
This is an evergreen discussion, and well-trodden ground. There is a reason the browser is also called "user agent"; there is a well-established separation between user's and server's zone of controls, so as a site owner, stop poking your nose where it doesn't belong.
--
[0] - Not "you" 'mrweasel personally, but "you" the imaginary speaker of your second paragraph.
If you have a sales funnel, as in you take orders and ship something to a customer, consumer or business, I almost guarantee you that you can request an API, if the company you want to purchase from is large enough. They'll probably give you the API access for free, or as part of a signup fee and give you access to discounts. Sometimes that API might be an email, or a monthly Excel dump, but it's an API.
When we're talking site that purely survive on tracking users and reselling their data, then yes, they aren't going to give you API access. Some sites, like Reddit does offer it I think, but the price is going to be insane, reflecting their unwillingness to interact with users in this way.
> Not "you" 'mrweasel personally
Understood, but thank you :-)
I wasn't thinking primarily about tracking and ads here either, when it comes to B2B automation. What I meant was e.g. shops automatically scrapping competing stores on a continued basis, to adjust their own prices - a modern version of the old "send your employees incognito to the nearby stores and have them secretly note down prices". Then you also have comparison-shopping (pricing aggregators) sites that are after the same data, too.
And then of course there's automated reviews (reading and writing), trying to improve your standing and/or sabotage competition. There's all kinds of more or less legit business intelligence happening, etc. Then there's wholesale copying of sites (or just their data) for SEO content farms, and... I could go on.
Point being, it's not the people who want to streamline their own work, make access more convenient for themselves, etc. that are the badly-behaving actors and reasons for anti-bot defenses.
> If you have a sales funnel, as in you take orders and ship something to a customer, consumer or business, I almost guarantee you that you can request an API, if the company you want to purchase from is large enough. They'll probably give you the API access for free, or as part of a signup fee and give you access to discounts. Sometimes that API might be an email, or a monthly Excel dump, but it's an API.
The problem from a POV of a regular users like me is, I'm not in this for business directly; the services I use are either too small to bother providing me special APIs, or I am too small for them to care. All I need is to streamline my access patterns to services I already use, perhaps consolidate it with other services (that's what MCP is doing, with LLM being the glue), but otherwise not doing anything disruptive to their operations. And I'm denied that, because... Bots Bad, AI Bad, Also Pay Us For Privilege?
> When we're talking site that purely survive on tracking users and reselling their data, then yes, they aren't going to give you API access. Some sites, like Reddit does offer it I think, but the price is going to be insane, reflecting their unwillingness to interact with users in this way.
Reddit is an interesting case because the changes to their API and 3rd-party client policies happened recently, and clearly in response to the rise of LLMs. A lot of companies suddenly realized the vast troves of user-generated content they host are valuable beyond just building marketing profiles, and now they try to lock it all up in order to extort rent for it.
and then the LLM model will ask the MCP server to call the functions, check the result, call the next function if needed, etc
Right now if you go to ChatGPT you can't really tell it "open Google maps with my account, search for bike shops near NYC, and grab their phone numbers", because all he can do is reply in text or make images
with a "browser MCP" it is now possible: ChatGPT has a way to tell your browser "open Google maps", "show me a screenshot", "click at that position", etc
Is this what 'calling' is?
It seems strange to me to focus on this sort of standard well in advance of models being reliable enough to, ya know, actually be able perform these operations on behalf of the user with any sort of strong reliability that you would need for widespread adoption to be successful.
Cryptocurrency "if you build it they'll come" vibes.
Believe me. It's not there yet.
I was referring more broadly to ClaudePlaysPokemon, a twitch stream where claude is given tool calling into a Gameboy Color emulator in order to try to play Pokemon. It has slowly made progress and i recommend looking at the stream to see just how flawed LLM's are currently for even the shortest of timelines w.r.t. planning.
I compared the two because the tool calling API here is a similar enough to an MCP configuration with the same hooks/tools (happy to be corrected on that though)
EDIT: Don't get me wrong, the benchmark scores are indeed higher, but in my personal experience, LLMs make as many mistakes as they did before, still too unreliable to use for cases where you actually need a factually correct answer.
Yes, MCP is a way to streamline giving LLMs ability to run arbitrary code on your machine, however indirectly. It's meant to be used on "your side of the airlock", where you trust the things that run. Obviously it's too powerful for it to be used with third-party tools you neither trust nor control; it's not that different than downloading random binaries from the Internet.
I suppose it's good to spell out the risks, but it doesn't make sense blaming MCP itself, because those risks are fundamental aspects of the features it provides.
It introduces a substantial set of novel failure modes, like cross-tool shadowing, which aren't obvious to most folks. Making use of any externally developed tooling — even open source tools on internal architecture — requires more careful consideration and analysis than most would expect. Despite the warnings, there will certainly be major breaches on these lines.
The article also reeks of LLM ironically
https://invariantlabs.ai/blog/mcp-security-notification-tool...
So im not sure id give up the sum total progress of the automobile just because the first decade was a bad one
Is there any browser that can do this yet as it seems extremely useful to be able to extract details from the page!
Would also be interested in hearing more about what you’re envisioning for your use case. Are you thinking a browser extension that acts on sites you’re already on, or some sort of shopping aggregator that lets you do this, or something else entirely?
Example: find me all of the desks on IKEA that come in light coloured wood, are 55 inches wide, and rank them from deepest to shallowest. Oh, and make sure they're in stock at my nearest IKEA, or are delivering within the next week.
I don't know if you've done it already, but it would be great to pause automation when you detect a captcha on the page and then notify the user that the automation needs attention. Playwright keeps trying to plough through captchas.
Is there an issue with the lag between what is happening in the browser and the MCP app (in my case Claude Desktop)?
I have a feeling the first time I tried it, I was fast enough clicking the "Allow for this chat" permissions, whereas by the time I clicked the permission on subsequent chats, the LLM just reports "It seems we had an issue with the click. Let me try again with a different reference.".
Actions which worked flawlessly the first time (rename a Google spreadsheet by clicking on the title and inputting the name) fail 100% of subsequent attempts.
Same with identifying cells A1, B1, etc. and inserting into the rows.
Almost perfect on 1st try, not reproducible in 100% of attempts afterwards.
Kudos to how smooth this experience is though, very nice setup & execution!
EDIT 2: The lag & speed to click the allow action make it seemingly unusable in Claude Desktop. :(
Also consider publishing it so people can use it without having to use git.
{
"mcpServers": {
"ragdocs": {
"command": "npx",
"args": [
"-y",
"@qpd-v/mcp-server-ragdocs"
],
"env": {
"QDRANT_URL": "http://127.0.0.1:6333",
"EMBEDDING_PROVIDER": "ollama",
"OLLAMA_URL": "http://localhost:11434"
}
},
}
}
}
example: https://x.com/xing101/status/1903391600040083488 set up: https://github.com/xing5/mcp-google-sheets
There's no bug or glitch happening. It's just statistically unlikely to perform the action you wanted and you landed a good dice roll on your first turn.
--Error: Cannot access a chrome-extension:// URL of different extension
Every month, go to service providers, log in, find and download statement, create google doc with details filled in, download it, write new email and upload all the files. Maybe double chek the attachments are right but that requires downloading them again instead of being able to view in email).
Automating this is already possible (and a real expense tracking app can eliminate about half of this work) but I think AI tools have the potential to elminate a lot of the nittier-grittier specification of it. This is especially important because these sorts of workflows are often subject to little changes.
Imagine it controlling plugins remotely, have an LLM do mastering and sound shaping with existing tools. The complex overly-graphical UIs of VSTs might be a barrier to performance there, but you could hook into those labeled midi mapping interfaces to control the knobs and levels.