Posted by misterchocolat 12/16/2025

Show HN: Stop AI scrapers from hammering your self-hosted blog (using porn) (github.com)
Alright so if you run a self-hosted blog, you've probably noticed AI companies scraping it for training data. And not just a little (RIP to your server bill).

There isn't much you can do about it without Cloudflare. These companies ignore robots.txt, and you're up against teams with far more resources than you. It's you vs. the Michael Jordans of programming; you're not going to win.

But there is a solution. Now, I'm not going to say it's a great solution... but a solution is a solution. If your website contains content that trips their scrapers' safeguards, it gets dropped from their data pipelines.

So here's what fuzzycanary does: it injects hundreds of invisible links to porn websites into your HTML. The links are hidden from users but present in the DOM, so scrapers ingest them and decide "nope, we won't scrape there again in the future".
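For the curious, the general shape of it is something like this (a rough sketch, not the actual source; the component name and the decoy domains here are made up):

    // Rough sketch of the technique, not fuzzycanary's real code.
    // The anchors never render visually, but they're present in the DOM,
    // so HTML-slurping scrapers still ingest them.
    import React from "react";

    // Placeholder domains; the real package presumably ships its own list.
    const DECOY_DOMAINS = ["adult-example-1.test", "adult-example-2.test"];

    export function DecoyLinks({ count = 200 }: { count?: number }) {
      return (
        <div style={{ display: "none" }} aria-hidden="true">
          {Array.from({ length: count }, (_, i) => (
            <a key={i} href={`https://${DECOY_DOMAINS[i % DECOY_DOMAINS.length]}/p/${i}`}>
              link
            </a>
          ))}
        </div>
      );
    }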

The problem with that approach is that it will absolutely nuke your website's SEO. So fuzzycanary also checks user agents and hides the links from legitimate search engine crawlers, meaning Google and Bing won't see them.
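The gating itself is nothing fancy; conceptually it's just this (again a sketch rather than the package's real API, and the bot list is abbreviated):

    // Sketch of the user-agent gate: known search engine crawlers get
    // clean HTML, everything else gets the decoy links.
    const GOOD_BOTS = /googlebot|bingbot|duckduckbot|applebot/i;

    function shouldInjectDecoys(userAgent: string | undefined): boolean {
      if (!userAgent) return true; // no UA at all is suspicious
      return !GOOD_BOTS.test(userAgent);
    }

Yes, anyone can spoof a Googlebot user agent, so this errs on the side of keeping search engines happy rather than catching every scraper.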

One caveat: if you're using a static site generator, it will bake the links into your HTML for everyone, including Googlebot. Does anyone have a workaround for this that doesn't involve using a proxy?
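The closest I've gotten so far (untested) is shipping clean HTML and adding the links from a small client-side script: Googlebot renders JavaScript and identifies itself in navigator.userAgent, so it can be skipped. The trade-off is that scrapers which never execute JS also never see the links.

    // Untested client-side idea for static sites; placeholder domain.
    const GOOD_BOTS = /googlebot|bingbot/i;

    if (!GOOD_BOTS.test(navigator.userAgent)) {
      const container = document.createElement("div");
      container.style.display = "none";
      for (let i = 0; i < 200; i++) {
        const a = document.createElement("a");
        a.href = `https://adult-example.test/p/${i}`;
        a.textContent = "link";
        container.appendChild(a);
      }
      document.body.appendChild(container);
    }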

Please try it out! Setup is one component or one import.

(And don't tell me it's a terrible idea because I already know it is)

package: https://www.npmjs.com/package/@fuzzycanary/core
gh: https://github.com/vivienhenz24/fuzzy-canary

372 points | 276 comments
xg15 6 days ago|
There is some irony in using an AI-generated banner image for this project...

(No, I don't want to defend the poor AI companies. Go for it!)

kstrauser 6 days ago|
In the olden days, I used Google an awful lot, but I would still grouse if Google were to drive my server into the ground.
n1xis10t 6 days ago||
Fair point
santiagobasulto 5 days ago||
Off-topic: when did JS/TS apps get so complicated? I tried to browse the repo, and there are so many configuration files and directories for such simple functionality, something that should be one or two modules. It reminds me of the old Java days.
darepublic 5 days ago||
Why would I need a dependency for this? I'm being serious. The idea is one thing, but why a dependency on React? I say this as someone who uses React. Why not just a paragraph-long blog post about the use of porn links, and perhaps a small snippet showing how to insert one with plain HTML?
eek2121 5 days ago||
Disclosure: I haven't run a website since my health issues began. However, Cloudflare has an AI firewall, and Cloudflare is super cheap (also: I'm unsure whether the AI firewall is on the free tier, but I'd be surprised if it weren't). Ignoring the recent drama about a couple of incidents they've had (which wouldn't matter for a personal blog), why not use that instead?

Just curious. Hoping to be able to work on a website again someday, if I ever get my health/stamina/etc. back.

ddtaylor 5 days ago||
Cloudflare has created a bit of grief, with regular users getting spammed with "prove you're human" requests.
ProllyInfamous 5 days ago||
Yes, e.g. I'll immediately close any page that hits me with Cloudflare's verification.
brigandish 5 days ago||
All the best with getting back on your feet.
nkurz 5 days ago||
I was told by the admin of one forum site I use that the vast majority of the AI scraping traffic is Chinese at this point. Not hidden or proxied, but straight from China. Can anyone else confirm this?

Anyway, if it is true, and assuming a forum with minimal genuine Chinese traffic, might a simple approach work that injects the porn links only for IPs coming from China?
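Something like this, roughly (untested; geoip-lite is just the first geo-IP lookup package that comes to mind):

    // Untested sketch using the geoip-lite npm package.
    import geoip from "geoip-lite";

    function shouldInjectDecoys(ip: string): boolean {
      const geo = geoip.lookup(ip); // null if the IP isn't in the database
      return geo?.country === "CN";
    }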

s0laster 5 days ago||
Mostly yes. One of my low-traffic, niche websites used to serve 3k real users per month, mainly from the US and Eastern EU. Now China alone accounts for 500k users, where each session lasts no more than a few seconds [1].

[1]: https://ibb.co/20QD6Lnk

dspillett 5 days ago|||
That would only affect those calling out directly. Many scrapers operate through a battery of proxies, so they will be hidden by such a simple test.

If your goal is to be blocked by China's Great Firewall, then mentioning Tank Man, and the Tiananmen Square massacre more generally, along with certain Pooh-bear-related imagery, might help.

nkurz 5 days ago||
> That would only affect those calling out directly. Many scrapers operate through a battery of proxies, so they will be hidden by such a simple test.

That was my first question also, and it had been my belief too. The admin in question was very clear that the IPs were simply originating from China. I'm still surprised, and would welcome better general data, but I trust him on this for the site in question.

n1xis10t 5 days ago||
Maybe. This comment makes me really want to set something up that builds a map of where all the requests are coming from.
wazoox 6 days ago||
Isn't there a risk of getting your blog blocked in corporate environments, though? If it's a technical blog, that would be unfortunate.
jeroenhd 5 days ago|
That depends on how terrible the middleboxes those corporate environments use are. If they only block actual malicious pages, it shouldn't be a problem unless the user un-hides the links and clicks on them.

There's a good chance corporate firewalls will end up blocking your domain if you do this but that sounds like a problem for the customers of those corporate firewalls to me.

reconnecting 6 days ago||
I wouldn't recommend showing different versions of the site to search robots, as they probably have mechanisms that track differences, which could potentially lead to a lower ranking or a ban.
prmoustache 5 days ago|
How can they track differences if they have access to only one version?
temporallobe 5 days ago||
I do know from my experience with test automation that you can absolutely view a site as human eyes would, essentially ignoring all non-visible elements; in fact, Selenium running with ChromeDriver does exactly this. Wouldn't AI scrapers use similar methods?
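For example, with the JavaScript bindings, filtering to what a human would actually see is one call per element:

    // selenium-webdriver sketch: collect only the links a human would see.
    import { Builder, By } from "selenium-webdriver";

    async function visibleLinks(url: string): Promise<string[]> {
      const driver = await new Builder().forBrowser("chrome").build();
      try {
        await driver.get(url);
        const anchors = await driver.findElements(By.css("a"));
        const visible: string[] = [];
        for (const a of anchors) {
          // isDisplayed() is false for display:none / visibility:hidden
          if (await a.isDisplayed()) visible.push(await a.getAttribute("href"));
        }
        return visible;
      } finally {
        await driver.quit();
      }
    }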
globalnode 6 days ago||
One solution would be for the SEs (search engines) to publish their scrapers' IPs and let content providers implement bot exclusion that way. Or even implement an API with cryptographic credentials that SEs can use to scrape. The solution is waiting for some leadership from the SEs, unless they want to be blocked as well. If the SEs don't want to play, perhaps we can implement a reverse directory: like an ad blocker, but listing only good/allowed bots instead. That's a free business idea right there.

edit: I noticed someone mentioned that Google DOES publish its IPs. There ya go, problem solved.
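Google also documents a reverse-DNS double lookup for verifying Googlebot, which avoids keeping an IP list in sync. A rough Node sketch (IPv4 only):

    // Google's documented crawler check: reverse-DNS the IP, verify the
    // hostname, then forward-resolve to confirm it maps back to the IP.
    import { promises as dns } from "node:dns";

    async function isRealGooglebot(ip: string): Promise<boolean> {
      try {
        const [hostname] = await dns.reverse(ip);
        if (!/\.(googlebot|google)\.com$/.test(hostname)) return false;
        const addresses = await dns.resolve(hostname); // A records
        return addresses.includes(ip);
      } catch {
        return false; // no PTR record, lookup failure, etc.
      }
    }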

n1xis10t 6 days ago|
Apparently Google publishes their crawlers' IPs; this was mentioned somewhere else in this thread.
bytehowl 5 days ago|
Let's imagine I have a blog and put something along these lines somewhere on every page: "This content is provided free of charge for humans to experience. It may also be automatically accessed for search indexing and archival purposes. For licensing information for other uses, contact the author."

If I then get hit by a rude AI scraper, what chances would I have to sue the hell out of them in EU courts for copyright violation (uhh, my articles cost 100k a pop for AI training, actually) and the de facto DDoS attack?

icepush 5 days ago|
If the scraper is based (or has meaningful assets) in the EU, then your chances are good. If not, the lawsuit would be meaningless.