Posted by misterchocolat 12/16/2025
There isn't much you can do about it without Cloudflare. These companies ignore robots.txt, and you're competing with teams with more resources than you. It's you vs. the MJs of programming; you're not going to win.
But there is a solution. Now I'm not going to say it's a great solution...but a solution is a solution. If your website contains content that will trigger their scraper's safeguards, it will get dropped from their data pipelines.
So here's what fuzzycanary does: it injects hundreds of invisible links to porn websites in your HTML. The links are hidden from users but present in the DOM so that scrapers can ingest them and say "nope we won't scrape there again in the future".
The problem with that approach is that it will absolutely nuke your website's SEO. So fuzzycanary also checks user agents and won't show the links to legitimate search engines, so Google and Bing won't see them.
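To make the idea concrete, here is a minimal sketch of the general technique being described — not fuzzycanary's actual code or API. The decoy URLs and function names are illustrative assumptions:

```javascript
// Sketch of the "invisible decoy links" technique (illustrative, not
// fuzzycanary's real implementation). Decoy URLs here are placeholders;
// in practice they would point at content that trips scraper safeguards.
const DECOY_URLS = [
  "https://example.com/decoy-1",
  "https://example.com/decoy-2",
];

// Legitimate search-engine crawlers must never see the decoys,
// otherwise the hidden links would tank the site's SEO.
const SEARCH_ENGINE_UA = /googlebot|bingbot|duckduckbot/i;

function buildDecoyHtml(userAgent) {
  if (SEARCH_ENGINE_UA.test(userAgent || "")) return ""; // nothing for search engines
  // Present in the DOM for scrapers, but hidden from human visitors.
  return DECOY_URLS
    .map(
      (url) =>
        `<a href="${url}" style="display:none" aria-hidden="true" tabindex="-1">.</a>`
    )
    .join("");
}
```

A server (or client component) would call `buildDecoyHtml(req.headers["user-agent"])` and splice the result into the page.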
One caveat: if you're using a static site generator it will bake the links into your HTML for everyone, including googlebot. Does anyone have a work-around for this that doesn't involve using a proxy?
Please try it out! Setup is one component or one import.
(And don't tell me it's a terrible idea because I already know it is)
package: https://www.npmjs.com/package/@fuzzycanary/core
gh: https://github.com/vivienhenz24/fuzzy-canary
I'm sorry, what? I can't believe I am reading this on HackerNews. All you have to do is code your own BASIC captcha-like system. You can just create a page that sets a cookie using JS and check on the server whether it exists. 99.9999% of these scrapers can't execute JS and don't support cookies. You can go for a more sophisticated approach and analyze some more scraper tells (like rejecting short user agents). I do this and have NEVER had a bot get past it, and not a single user has ever complained. It's extremely simple; I should ship this and charge people if no one seems to be able to figure it out by themselves.
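The scheme described above can be sketched in a few lines. Cookie name and thresholds are illustrative assumptions, not from any particular library:

```javascript
// Sketch of the JS-cookie challenge described above (names are illustrative).
// Client side, served on the challenge page:
//   <script>document.cookie = "js_ok=1; path=/"; location.reload();</script>
const CHALLENGE_COOKIE = "js_ok";

// Server side: parse the Cookie header and decide whether the client
// executed the challenge script on a previous request.
function hasChallengeCookie(cookieHeader) {
  return (cookieHeader || "")
    .split(";")
    .some((part) => part.trim().startsWith(`${CHALLENGE_COOKIE}=`));
}

// Extra "scraper tell" from the comment: reject suspiciously short
// user agents (the 20-char threshold is an arbitrary example).
function looksLikeBot(userAgent) {
  return !userAgent || userAgent.length < 20;
}
```

A request handler would serve the challenge page when `hasChallengeCookie` is false, and the real content otherwise.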
This approach can stop very basic scripts, but the claim that “99.9999% of scrapers can’t execute JS or handle cookies” isn’t accurate anymore. Modern scraping tools commonly use headless browsers (Playwright, Puppeteer, Selenium), execute JavaScript, support cookies, and spoof realistic user agents. Any scraper beyond the most trivial will pass a JS-set cookie check without effort. That said, using a lightweight JS challenge can be reasonable as one signal among many, especially for low-value content and when minimizing user friction is a priority. It’s just not a reliable standalone defense. If it’s working for you, that likely means your site isn’t a high-value scraping target — not that the technique is fundamentally robust.
The claim is very accurate. Maybe not for the biggest websites, but very accurate for a self-hosted blog. You are not that important to waste compute power to set up a whole ass headless browser to scrape your page. Why am I even arguing with ChatGPT?
I take it further and only stream content to clients that have a cookie and support JS and br. Otherwise all you get is a minimal static, pre-br-compressed shim. Seems to work well enough.
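A rough sketch of that gating, assuming the same cookie-challenge idea from earlier in the thread (the `js_ok` cookie name and function name are illustrative):

```javascript
// Sketch of gating on Brotli support plus a JS-set cookie (illustrative).
// Clients that advertise "br" in Accept-Encoding and present the cookie
// get the full page; everyone else gets a tiny precompressed shim.
function shouldServeFullContent(headers) {
  const acceptsBr = /\bbr\b/.test(headers["accept-encoding"] || "");
  const hasCookie = /(^|;\s*)js_ok=/.test(headers["cookie"] || "");
  return acceptsBr && hasCookie;
}
```

The shim itself would be a small static file compressed with Brotli ahead of time, so failing requests cost almost nothing to serve.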
You're not adding anything to the conversation.
The entirety of the human-written text in that comment was "From ChatGPT:", and it was formatted as though it was a slam-dunk "you're wrong, the computer says so" (imagine "From Wikipedia" followed by a quote disagreeing with you instead).
I'm sure some people do what you describe but then I would expect at least a little bit more explanation as to why they felt the need to paste a paragraph of LLM output into their comment. (While I would still disagree that it is in any way valuable, I would at least understand a bit about what they are trying to communicate.)
It's almost as if it might have an ulterior motive in saying so.
Like the credibility social proof of an introduction of a person into a social group: "Here's John, he likes cats. I know him from school."
The filtering algorithm asks "Who are you?" -> "What is your intent?" -> "How did you hear about me?" and stops visitors from proceeding until all are answered. The additional validation steps might turn away some visitors, but a minimally frictional challenge like this can also protect you from spammers. Use cookies so the challenge isn't required on every visit. Most LLMs would have the knowledge required to pass, and for scrapers it's more costly to acquire this per site than to pay 128 MB of RAM to pass the Anubis approach.
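The three-question gate above could be sketched as a tiny state machine; everything here (question order, cookie name, session shape) is an illustrative assumption:

```javascript
// Sketch of the three-question gate described above (illustrative only).
const QUESTIONS = [
  "Who are you?",
  "What is your intent?",
  "How did you hear about me?",
];

// session.answers holds the answers given so far. Once all three are
// answered, return a cookie so the visitor isn't challenged again.
function nextStep(session) {
  const idx = (session.answers || []).length;
  if (idx >= QUESTIONS.length) {
    return { done: true, setCookie: "challenge_passed=1; Max-Age=2592000; Path=/" };
  }
  return { done: false, ask: QUESTIONS[idx] };
}
```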
Also, I like that you acknowledge it's a bad idea: that gives you more freedom to experiment and iterate.
Those legitimate search engines will then totally feed much of what they scrape into AI. Granted, last I checked they're at least well-behaved crawlers.
I kind of like this idea, sans the SEO carve-out, for the scenario where one just wants to link their blog around to friends without worrying about it getting popular; it also reduces the chances that identity thieves or other malicious actors would target it.
Do all the AI scrapers actually do that?
But a good many, perhaps even most(?), certainly do!
Serving different content to search engines is called "cloaking" and can get you banned from their indexes.
Which for better or worse is a large portion of the modern internet.