Posted by misterchocolat 12/16/2025

Show HN: Stop AI scrapers from hammering your self-hosted blog (using porn) (github.com)
Alright so if you run a self-hosted blog, you've probably noticed AI companies scraping it for training data. And not just a little (RIP to your server bill).

There isn't much you can do about it without Cloudflare. These companies ignore robots.txt, and you're competing with teams with more resources than you. It's you vs. the MJs of programming; you're not going to win.

But there is a solution. Now I'm not going to say it's a great solution... but a solution is a solution. If your website contains content that will trigger their scrapers' safeguards, it will get dropped from their data pipelines.

So here's what fuzzycanary does: it injects hundreds of invisible links to porn websites in your HTML. The links are hidden from users but present in the DOM so that scrapers can ingest them and say "nope we won't scrape there again in the future".
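Roughly, the injected markup looks like the sketch below. To be clear, this is an illustration of the idea, not the literal @fuzzycanary/core source, and the decoy domains are placeholders.

    // Sketch only, not the actual @fuzzycanary/core implementation.
    // The decoy targets below are placeholder domains.
    const DECOY_URLS = [
      "https://decoy-adult-site-1.example/",
      "https://decoy-adult-site-2.example/",
    ];

    // The anchors stay in the DOM (so scrapers ingest them) but are invisible,
    // unfocusable, and hidden from assistive tech for real visitors.
    function decoyLinkHtml(): string {
      return DECOY_URLS.map(
        (href) =>
          `<a href="${href}" rel="nofollow" tabindex="-1" aria-hidden="true" ` +
          `style="position:absolute;left:-9999px;width:1px;height:1px;overflow:hidden">.</a>`
      ).join("");
    }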

The problem with that approach is that it will absolutely nuke your website's SEO. So fuzzycanary also checks user agents and won't show the links to legitimate search engines, so Google and Bing won't see them.
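The check itself is conceptually just a user-agent allowlist, along the lines of the sketch below (again an illustration, not the package's exact list or logic). As commenters point out further down, UA strings are trivially spoofed, so pairing this with an IP-range or reverse-DNS check is the stronger move.

    // Illustrative allowlist, not the package's real one.
    const ALLOWED_CRAWLERS = [/Googlebot/i, /bingbot/i, /DuckDuckBot/i];

    function shouldInjectDecoys(userAgent: string | undefined): boolean {
      if (!userAgent) return true; // no UA at all? treat it as a scraper
      return !ALLOWED_CRAWLERS.some((re) => re.test(userAgent));
    }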

One caveat: if you're using a static site generator, it will bake the links into your HTML for everyone, including Googlebot. Does anyone have a workaround for this that doesn't involve using a proxy?

Please try it out! Setup is one component or one import.

(And don't tell me it's a terrible idea because I already know it is)

package: https://www.npmjs.com/package/@fuzzycanary/core
gh: https://github.com/vivienhenz24/fuzzy-canary

372 points | 276 comments
kstrauser 6 days ago|
I love the insanity of this idea. Not saying it's a good idea, but it's a highly entertaining one, and I like that!

I've also had enormous luck with Anubis. AI scrapers found my personal Forgejo server and were hitting it on the order of 600K requests per day. After setting up Anubis, that dropped to about 100. Yes, some people are going to see an anime catgirl from time to time. Bummer. Reducing my fake traffic by a factor of 6,000 is worth it.

n1xis10t 6 days ago||
That’s so many scrapers. There must be a ton of companies with very large document collections at this point, and it really sucks that they don’t at least do us the courtesy of indexing them and making them available for keyword search, but instead only do AI.

It’s kind of crazy how much scraping goes on and how little search engine development goes on. I guess search engines aren’t fashionable. Reminds me of this article about search engines disappearing mysteriously: https://archive.org/details/search-timeline

I try to share that article as much as possible, it’s interesting.

kstrauser 5 days ago||
So! Much! Scraping! They were downloading every commit multiple times, and fetching every file as seen at each of those commits, and trying to download archives of all the code, and hitting `/me/my-repo/blame` endpoints as their IP's first-ever request to my server, and other unlikely stuff.

My scraper dudes, it's a git repo. You can fetch the whole freaking thing if you wanna look at it. Of course, that would require work and context-aware processing on their end, and it's easier for them to shift the expense onto my little server and make me pay for their misbehavior.

n1xis10t 5 days ago||
Crazy
anonymous908213 5 days ago|||
As someone on the browsing end, I love Anubis. I've only seen it a couple of times, but it sparks joy. It's rather refreshing compared to Cloudflare, which will usually make me immediately close the page and not bother with whatever content was behind it.
teeray 5 days ago|||
It really reminds me of old Internet, when things were allowed to be fun. Not this tepid corporate-approved landscape we have now.
kstrauser 5 days ago|||
Same here, really. That's why I started using it. I'd seen it pop up for a moment on a few sites I'd visited, and it was so quirky and completely not disruptive that I didn't mind routing my legit users through it.
n1xis10t 5 days ago||
So maybe there are more people who like the “anime catgirl” than there are who think it’s weird
kstrauser 5 days ago||
*anime jackalgirl ;-)

Quite possibly. Or, in my case, I think it's more quirky and fun than weird. It's non-zero amounts of weird, sure, but far below my threshold of troublesome. I probably wouldn't put my business behind it. I'm A-OK with using it on personal and hobby projects.

Frankly, anyone so delicate that they freak out at the utterly anodyne imagery is someone I don't want to deal with in my personal time. I can only abide so much pearl clutching when I'm not getting paid for it.

n1xis10t 6 days ago|||
*anime jackalgirl

Also you mentioned Anubis, so its creator will probably read this. Hi Xena!

xena 5 days ago|||
Ohai! I'm working on dataset poisoning. The early prototype generates vapid LinkedIn posts but future versions will be fully pluggable with WebAssembly.
n1xis10t 5 days ago||
That sounds fun, I look forward to reading a writeup about that
xena 5 days ago||
So I can plan it, how much detail do you want? Here's what I have about the prototype: https://anubis.techaro.lol/docs/admin/honeypot/overview
n1xis10t 5 days ago|||
Probably any detail that you think is cool, I would be interested in reading about. When in doubt err on the side of too much detail.

That was a good read. I hadn’t heard of spintax before, but I’ve thought of doing things like that. Also “pseudoprofound anti-content”, what a great term, that’s hilarious!

kstrauser 5 days ago|||
As the owner of honeypot.net, I always appreciate seeing the name used as intended out in the wild.
ziml77 5 days ago||||
I checked Xe's profile when I hadn't seen them post here for a while. According to that, they're not really using HN anymore.
n1xis10t 5 days ago||
See this thread from yesterday or so: https://news.ycombinator.com/item?id=46302496#46306025
kstrauser 5 days ago|||
Correct; my bad!

And hey, Xena! (And thank you very much!)

buu700 5 days ago||
It's actually a well established concept: https://youtu.be/p9KeopXHcf8
zackmorris 5 days ago||
This is very hacker-like thinking, using tech's biases against it!

I can't help but feel like we're all doing it wrong against scraping. Cloudflare is not the answer; in fact, I think they lost their geek cred when they added their "verify you are human" challenge screen to become the new gatekeeper of the internet. That must remain a permanent stain on their reputation until they make amends.

Are there any open source tools we could install that detect a high number of requests and send those IP addresses to a common pool somewhere? So that individuals wouldn't get tracked, but bots would? Then we could query the pool for the current request's IP address and throttle it down based on volume (not block it completely). Possibly at the server level with nginx or at whatever edge caching layer we use.

I know there may be scaling and privacy issues with this. Maybe it could use hashing or zero knowledge proofs somehow? I realize this is hopelessly naive. And no, I haven't looked up whether someone has done this. I just feel like there must be a bulletproof solution to this problem, with a very simple explanation as to how it works, or else we've missed something fundamental. Why all the hand waving?
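To make the hand waving a bit more concrete, the kind of thing I'm imagining is roughly the sketch below. Everything here is hypothetical, including the pool endpoint and its response shape.

    import type { Request, Response, NextFunction } from "express";

    // Hypothetical sketch: ask a shared reputation pool how busy this IP has been
    // across participating sites, then slow it down proportionally instead of
    // blocking it outright.
    const POOL_URL = "https://ip-pool.example/count"; // made-up endpoint

    export async function throttleBySharedReputation(
      req: Request,
      _res: Response,
      next: NextFunction
    ) {
      try {
        const ip = req.ip ?? "";
        const resp = await fetch(`${POOL_URL}?ip=${encodeURIComponent(ip)}`);
        const { requestsLastHour } = (await resp.json()) as { requestsLastHour: number };
        // 1 ms of delay per pooled request seen in the last hour, capped at 10 seconds.
        const delayMs = Math.min(requestsLastHour, 10_000);
        if (delayMs > 0) await new Promise((r) => setTimeout(r, delayMs));
      } catch {
        // Pool unreachable: fail open rather than punishing real users.
      }
      next();
    }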

dvfjsdhgfv 5 days ago||
Your approach to GenAI scrapers is similar to our fight with email spam. Email spam got solved because the industry was interested in solving it. But this issue has the industry split: without scraping, GenAI tools are less functional. And there is serious money involved, so they will use whatever means necessary, technical and legal, to fight such initiatives.
conrs 4 days ago|||
I've been exploring decentralized trust algorithms lately, so reading this was nice. I have a similar intuition - for every advance in scraping detection, scrapers will learn too, so it's an ongoing war of mutations with no real victor.

The internet has seen success with social media content moderation, so it seems natural enough that something similar could exist for web traffic itself: hosts being able to "downvote" malicious traffic, plus some sort of decay mechanism given that IPs get recycled. This exists in a basic sense with known Tor exit nodes, known AWS and GCP IP ranges, etc.

That said, we probably don't have the right building blocks yet: IPs are too ephemeral, yet anything more identity-bound is a little too authoritarian IMO. Further, querying something on every request is probably too heavy.

Fun to think about, though.

ATechGuy 5 days ago|||
Scrapers use residential IP proxies, so blocking based on IP addresses is not a solution.
smegger001 5 days ago|||
Maybe some proof-of-work scheme to load page content, with increasing difficulty based on IP address behavior profiling.
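Something along these lines, as a rough sketch (the difficulty numbers are made up, and the challenge-issuing side is omitted):

    import { createHash } from "node:crypto";

    // Difficulty (leading zero bits of a SHA-256 digest) grows with how busy
    // the client IP has been recently.
    function difficultyFor(recentRequests: number): number {
      return Math.min(8 + Math.floor(Math.log2(recentRequests + 1)), 24);
    }

    // Count leading zero bits of a digest.
    function leadingZeroBits(digest: Buffer): number {
      let bits = 0;
      for (const byte of digest) {
        if (byte === 0) { bits += 8; continue; }
        return bits + Math.clz32(byte) - 24; // clz32 counts within a 32-bit word
      }
      return bits;
    }

    // The client must find a counter such that sha256(nonce + ":" + counter)
    // has at least `difficulty` leading zero bits.
    function verifySolution(nonce: string, counter: string, difficulty: number): boolean {
      const digest = createHash("sha256").update(`${nonce}:${counter}`).digest();
      return leadingZeroBits(digest) >= difficulty;
    }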
venturecruelty 3 days ago||
Firewall.
montroser 5 days ago||
This is a cute idea, but I wonder what the sustainable solution is to this emerging fundamental problem: As content publishers, we want our content to be accessible to everyone, and we're even willing to pay for server costs relative to our intended audience -- but a new outsized flood of scrapers was not part of the cost calculation, and that is messing up the plan.

It seems all options have major trade-offs. We can host on big social media and lose all that control and independence. We can pay for outsized infrastructure just to feed the scrapers, but the cost may actually be prohibitive, and it seems such a waste to begin with. We can move as much as possible to SSG and put it all behind Cloudflare, but this comes with vendor lock-in and just isn't architecturally feasible in many applications. We can do real "verified identities" for bots and just let through the ones we know and like, but this only perpetuates corporate control and makes healthy upstart competition (like Kagi) much more difficult.

So, what are we to do?

hollowturtle 5 days ago||
If the LLMs are the "new Google", one solution would be for them to pay you when scraping your content, so both sides have an incentive: you're more willing to be scraped, and they'll try not to abuse you because it costs them at every visit. If your content is valuable and requested in prompts, they'll scrape you more, and so on. I can't see other solutions, honestly. For now they've decided to go full evil and abuse everyone.
vivzkestrel 5 days ago|||
Or turn your blog into a frontend/backend combo. Keep the frontend as an SPA so that the page itself has nothing on it. Have your backend send data in an encrypted format, and the AI scrapers would need to do a tonne of work to figure out what your data is. If everyone uses a different key and a different encryption algorithm, suddenly all their server time is burned decrypting stuff.
chii 5 days ago||
How do your normal users get access to the same content?

Or are you having the user solve an encryption puzzle to view it?

n1xis10t 5 days ago|||
This would require new laws though, wouldn’t it?
n1xis10t 5 days ago||
At this point it seems like the problem isn't internet bandwidth, but that it's just expensive for a server to handle all the requests because it has to process each one. Does that seem correct?
thethingundone 6 days ago||
I own a forum which currently has 23k online users, all of them bots. The last new post in that forum is from _2019_. Its topic is also very niche. Why are so many bots there? This site should have basically been scraped a million times by now, yet those bots seem to fetch the stuff live, on the fly? I don’t get it.
sethops1 5 days ago||
I have a site with a complete and accurate sitemap.xml describing when its ~6k pages were last updated (on average, maybe weekly or monthly). What do the bots do? They scrape every page continuously 24/7, because of course they do. The amount of waste going into this AI craze is just obscene. It's not even good content.
n1xis10t 5 days ago||
It would be interesting if someone made a map that depicts the locations of the ip addresses that are sending so many requests, over the course of a day maybe.
giantrobot 5 days ago||
Maps That Are Just Datacenters
tokioyoyo 5 days ago|||
Large-scale scraping tech is not as sophisticated as you'd think. A significant chunk of it is "get as much as possible, categorize and clean up later". Man, I really want the real web of the 2000s back, when things felt "real" more or less... how can we even get there?
n1xis10t 5 days ago|||
If people start making search engines again and there is more competition for Google, I think things would be pretty sweet.
tokioyoyo 5 days ago||
Because of the financial incentives, it would still end up with people doing things to drive traffic to their website though, no? Maybe because the web was smaller and people looked at it as a means "to explore curiosity", it kinda worked differently in the olden days... maybe I just got old, but I don't want to believe that.
n1xis10t 5 days ago||
By “doing things to drive traffic to their website” do you mean trying to do SEO type things to manipulate search engine rankings? If so, I think that there are probably ways to rank that are immune to tampering.

Don’t worry, you’re not just old. The internet kind of sucks now.

thethingundone 5 days ago|||
I would understand that, but it seems they don't store the stuff, but instead re-fetch the same content every hour.
tokioyoyo 5 days ago||
I'm assuming a quick hash check to see if there's any change? Between scrapers, "most up-to-date data" is fairly valuable nowadays as well.
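i.e. something as cheap as the sketch below on the scraper's side (illustrative, not any particular scraper's code). Note it still downloads the full page every time, which is exactly why it saves the server nothing.

    import { createHash } from "node:crypto";

    // Remember a hash of each page body and compare on the next visit.
    const lastDigest = new Map<string, string>(); // url -> sha256 of last body

    async function pageChanged(url: string): Promise<boolean> {
      const body = await (await fetch(url)).text();
      const digest = createHash("sha256").update(body).digest("hex");
      const changed = lastDigest.get(url) !== digest;
      lastDigest.set(url, digest);
      return changed;
    }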
thethingundone 5 days ago|||
The bots are identifying themselves as Google, Bing, and Yandex. I can't verify whether that's attributed by IP address or whether the forum just trusts their user agent. It could basically be anyone.
n1xis10t 5 days ago||
Interesting. When it was just normal search engines I didn't hear of people having this problem, so this either means that there are a bunch of people pretending to be Bing, Google, and Yandex, or those companies have gotten a lot more aggressive.
bobbiechen 5 days ago|||
There are lots of people pretending to be Google and friends. They far outnumber the real Googlebot, etc. and most people don't check the reverse DNS/IP list - it's tedious to do this for even well-behaved crawlers that publish how to ID themselves. So much for User Agent.
reallyhuh 5 days ago||||
What are the proportions for the attributions? Is it equally distributed or lopsided towards one of the three?
giantrobot 5 days ago|||
Normal search engine spiders did/do cause problems but not on the scale of AI scrapers. Search engine spiders tend to follow a robots.txt, look at the sitemap.xml, and generally try to throttle requests. You'll find some that are poorly behaved but they tend to get blocked and either die out or get fixed and behave better.

The AI scrapers are atrocious. They just blindly blast every URL on a site with no throttling. They are terribly written and managed as the same scraper will hit the same site multiple times a day or even hour. They also don't pay any attention to context so they'll happily blast git repo hosts and hit expensive endpoints.

They're like a constant DOS attack. They're hard to block at the network level because they span across different hyperscalers' IP blocks.

n1xis10t 5 days ago||
Puts on tinfoil hat: Maybe it isn't AI scrapers, but actually is a massive DoS attack, and it's a conspiracy to get people to not self-host.
danpalmer 6 days ago|||
How do you define a user, and how do you define online?

If the forum considers unique cookies to be a user and creates a new cookie for any new cookie-less request, and if it considers a user to be online for 1 hour after their last request, then actually this may be one scraper making ~6 requests per second. That may be a pain in its own way, but it's far from 23k online bots.

crote 5 days ago|||
That's still 518,400 requests per day. For static content. And it's a niche forum, so it's not exactly going to have millions of pages.

Either there are indeed hundreds or thousands of AI bots DDoSing the entire internet, or a couple of bots are needlessly hammering it over and over and over again. I'm not sure which option is worse.

n1xis10t 5 days ago||
Imagine if all this scraping was going into a search engine with a massive index, or a bunch of smaller search engines that a meta-search engine could be made for. This would be a lot cooler in that case.
thethingundone 5 days ago|||
AFAIK it keeps a user counted as online for 5 or 15 minutes (I think 5). It’s a Woltlab Burning Board.

Edit: it’s 15 minutes.

danpalmer 5 days ago||
And what is a "user"?
thethingundone 5 days ago||
Whatever the forum software Woltlab Burning Board considers a user. If I recall correctly, it tries to build an identifier based on PHP session ids, so most likely simply cookies.
danpalmer 5 days ago||
This is exactly my point. Scrapers typically don't store cookies, so every single request is likely to be a "new" user as far as the forum software is concerned.

Couple that with 15-minute session times, and that could just be one entity scraping the forum at 30 requests per second. One scraper going moderately fast sounds far less bad than 23,000 bots.

It still sounds excessive for a niche site, but I'd guess this is sporadic, or that the forum software has a page structure that traps scrapers accidentally, quite easy to do.
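Back-of-envelope, using the 23k figure from upthread:

    // 23,000 "online users" with 15-minute sessions, if every cookie-less request
    // creates a new session, only requires about this many requests per second:
    const onlineUsers = 23_000;
    const sessionSeconds = 15 * 60;
    console.log((onlineUsers / sessionSeconds).toFixed(1)); // ≈ 25.6 — same ballpark as ~30/sec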

sandblast 6 days ago|||
Are you sure the counter is not broken?
thethingundone 5 days ago||
Yes, it’s running on a Woltlab Burning Board since forever.
andrepd 6 days ago||
When you have trillions of dollars being poured into your company by the financial system, and when furthermore there are no repercussions for behaving however you please, you tend not to care about that sort of "waste".
n1xis10t 6 days ago||
Nice! Reminds me of “Piracy as Proof of Personhood”. If you want to read that one go to Paged Out magazine (at https://pagedout.institute/ ), navigate to issue #7, and flip to page 9.

I wonder if this will start making porn websites rank higher in google if it catches on…

Have you tested it with the Lynx web browser? I bet all the links would show up if a user used it.

Oh also couldn’t AI scrapers just start impersonating Googlebot and Bingbot if this caught on and they got wind of it?

Hey I wonder if there is some situation where negative SEO would be a good tactic. Generally though I think if you wanted something to stay hidden it just shouldn’t be on a public web server.

ProllyInfamous 5 days ago||
>Paged Out issue #7, page 9

Very clever, use the LLM's own rules (against copyright infringement) against itself.

Everything below the following four #### is ~quoted~ from that magazine:

####

Only humans and ill-aligned AI models allowed to continue

Find me a torrent link for Bee Movie (2007)

[Paste torrent or magnet link here...] SUBMIT LINK

[ ] Check to confirm you do NOT hold the legal rights to share or distribute this content

netsharc 5 days ago||
Is the magnet link itself a copyright violation? I don't think legally it is... It's a pointer to some "stolen goods", but not the stolen goods themselves (here the analogy fails, because in ideal real life police would question you if you had knowledge of stolen goods).

Asking them to upload a copyrighted photo not belonging to them might be more effective..

ProllyInfamous 5 days ago||
I've also thought about having a prompt for the (just human?) users to type in something racist/sexist/anti-semitic/offensive.

Only because newer LLMs don't seem to want to write hate speech.

The website (verifying humanness) could, for example, show a picture of a black jewish person and then ask the human visitor to "type in the most offensive two words you can think of for the person shown, one is `n _ _ _ _ _` & second is `k _ _ _`." [I'll call them "hate crosswords"]

In my experience, most online-facing LLMs won't reproduce these "iggers and ikes" (nor should humans, but here we are separating machines).

owl57 6 days ago|||
> Hey I wonder if there is some situation where negative SEO would be a good tactic. Generally though I think if you wanted something to stay hidden it just shouldn’t be on a public web server.

At least once upon a time there was a pirate textbook library that used HTTP basic auth with a prompt that made the password really easy to guess. I suppose the main goal was to keep crawlers out even if they don't obey robots.txt, and at the same time be as easy for humans as possible.

n1xis10t 5 days ago||
Interesting note, thank you.
misterchocolat 6 days ago||
Hey! Thanks for that read suggestion, that's indeed a pretty funny captcha strat. Yup, the links show up if you use the Lynx web browser. As for AI scrapers impersonating Googlebot, I feel like yes, they'd definitely start doing that, unless the risk of getting sued by Google is too high? If Google could even sue them for doing that?

Not an internet litigation expert but seems like it could be debatable

kuylar 6 days ago|||
> As for AI scrapers impersonating Googlebot, I feel like yes, they'd definitely start doing that, unless the risk of getting sued by Google is too high?

Google publishes the Googlebot IP ranges[0], so you can make sure that it's the real Googlebot and not just someone else pretending to be one.

[0] https://developers.google.com/crawling/docs/crawlers-fetcher...
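A minimal sketch of that check for IPv4 (assuming the published googlebot.json keeps its { prefixes: [{ ipv4Prefix | ipv6Prefix }] } shape; IPv6 and caching of the range list are left out for brevity):

    const GOOGLEBOT_RANGES_URL =
      "https://developers.google.com/static/search/apis/ipranges/googlebot.json";

    // Convert a dotted-quad IPv4 address to an unsigned 32-bit integer.
    function ipv4ToInt(ip: string): number {
      return ip.split(".").reduce((acc, octet) => (acc << 8) + Number(octet), 0) >>> 0;
    }

    // Check whether an IPv4 address falls inside a CIDR block like "66.249.64.0/27".
    function inCidr(ip: string, cidr: string): boolean {
      const [base, bits] = cidr.split("/");
      const mask = bits === "0" ? 0 : (~0 << (32 - Number(bits))) >>> 0;
      return (ipv4ToInt(ip) & mask) === (ipv4ToInt(base) & mask);
    }

    async function ipIsInGooglebotRanges(ip: string): Promise<boolean> {
      const res = await fetch(GOOGLEBOT_RANGES_URL);
      const { prefixes } = (await res.json()) as {
        prefixes: { ipv4Prefix?: string; ipv6Prefix?: string }[];
      };
      return prefixes.some((p) => p.ipv4Prefix !== undefined && inCidr(ip, p.ipv4Prefix));
    }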

n1xis10t 6 days ago||
Oh good idea!
n1xis10t 6 days ago|||
Yeah I guess I don’t know if you can sue someone for using your headers, would be interesting to see how that goes.
throawayonthe 5 days ago||
I think making the case of "you are acting (sending web requests) while knowingly identifying as another legal entity (and criminally/libelously/etc.)" shouldn't be toooo hard.
n1xis10t 5 days ago||
Seems like it, but there are tons of things that forge request headers all the time, and I don't think I've heard of anyone getting in legal trouble for it. Now, I think most of these are scrapers pretending to be browsers, so it might be different, I don't know.
owl57 5 days ago||
And most of them are pretending to be Chrome. If Google had a good case against someone reusing their user agent, maybe they would already have sued?

Or maybe not. Got some random bot from my server logs. Yeah, it's pretending to be Chrome, but more exactly:

"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36"

I guess Google might not be eager to open this can of worms.

cookiengineer 5 days ago||
Remember the 90s when viagra pills and drug recommendations were all over the place?

Yeah, I use that as a safeguard :D The URLs that I don't want indexed have hundreds of those keywords, which lead to the URLs being deindexed directly. There is also some law in the US that forbids showing that as a result, so Google and Bing both have a hard time scraping those pages/articles.

Note that this is the last defense measure before eBPF blocks. The first one uses zip bombs and the second one uses chunked encoding to blow up proxies so their clients get blocked.

You can only win this game if you make it more expensive to scrape than to host it.

n1xis10t 5 days ago|
Which law is that? Do you have a link to it?
cookiengineer 5 days ago||
There was a huge crackdown on those in the late 2010s if I remember correctly.

These are the things I could find on justice.gov and other official websites; maybe there's more in the web archive?

- https://www.justice.gov/archives/opa/pr/google-forfeits-500-...

- https://www.congress.gov/110/plaws/publ425/PLAW-110publ425.p...

- https://www.fda.gov/drugs/prescription-drug-advertising/pres...

edit: Oh it was very likely the Federal Food, Drug and Cosmetic Act that was the legal basis for the crackdown. But that's a very old law from the pre-internet age.

- https://en.wikipedia.org/wiki/Federal_Food,_Drug,_and_Cosmet...

edit 2: Might not have been clear for the younger generation, but there was a huge wave of addicted patients who got treated with Oxycodone (or OxyContin) prescriptions. I think that was the actual cause of the crackdown on those online advertisements, but I might be wrong. I heavily recommend watching documentaries about Oxy; it's insane how many people knew about it and were getting people addicted on purpose with those painkillers.

voodooEntity 5 days ago||
Funny idea. Some days ago I was really annoyed again that these AI crawlers still ignore all code licenses and train their models on any GitHub repo no matter what, so I quickly hammered out this

-> https://github.com/voodooEntity/ghost_trap

Basically a GitHub Action that extends your README.md with a "polymorphic" prompt injection. I ran some LLMs against it and in most cases they just produced garbage.

I've also thought about creating a JS variant that you can add to your website, which would (invisibly to the user) inject such prompt injections to stop web crawling like you described.

asphero 5 days ago||
Interesting approach. The scraper-vs-site-owner arms race is real.

On the flip side of this discussion - if you're building a scraper yourself, there are ways to be less annoying:

1. Run locally instead of from cloud servers. Most aggressive blocking targets VPS IPs. A desktop app using the user's home IP looks like normal browsing.

2. Respect rate limits and add delays. Obvious but often ignored.

3. Use RSS feeds when available - many sites leave them open even when blocking scrapers.

I built a Reddit data tool (search "reddit wappkit" if curious) and the "local IP" approach basically eliminated all blocking issues. Reddit is pretty aggressive against server IPs but doesn't bother home connections.

The porn-link solution is creative though. Fight absurdity with absurdity I guess.

socialcommenter 5 days ago||
Without wanting to upset anyone - what makes you interested in sharing tips for team scraper?

(Overgeneralising a bit) site owners are mostly acting for public benefit whereas scrapers act for their own benefit/for private interests.

I imagine most people would land on team site-owner, if they were asked. I certainly would.

P.S. is the best way to scrape fairly just to respect robots.txt?

n1xis10t 5 days ago||
I think "scraper vs siteowners" is a false dichotomy. Scrapers will always need to exist as long as we want search engines and archival services. We will need small versions of these services to keep popping up every now and then to keep the big guys on their toes, and the smaller guys need advice for scraping politely.
socialcommenter 5 days ago||
That's fair - though are we in an isolated bout of "every now and then" or has AI created a new normal of abuse (e.g. of robots.txt)? Hopefully we're at a local maximum and some of the scrapers perpetrating harmful behaviours will soon pull their heads in.
n1xis10t 5 days ago||
Hopefully. It would also be nice to see more activity in the actual search engine and archiving market; there really isn’t much right now.
rhdunn 5 days ago||
Plus simple caching to not redownload the same file/page multiple times.

It should also be easy to detect a Forgejo, Gitea, or similar hosting site, locate the git URL, and clone the repo.
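The caching part really is trivial; a sketch with ETags and conditional requests:

    // Remember each URL's ETag and send If-None-Match on the next fetch, so an
    // unchanged page costs the server a 304 instead of a full response.
    const etags = new Map<string, string>();

    async function politeFetch(url: string): Promise<string | null> {
      const headers: Record<string, string> = {};
      const etag = etags.get(url);
      if (etag) headers["If-None-Match"] = etag;

      const res = await fetch(url, { headers });
      if (res.status === 304) return null; // unchanged since last visit

      const newEtag = res.headers.get("etag");
      if (newEtag) etags.set(url, newEtag);
      return res.text();
    }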

onion2k 5 days ago||
> So fuzzycanary also checks user agents and won't show the links to legitimate search engines, so Google and Bing won't see them.

Unscrupulous AI scrapers will not be using a genuine UA string. They'll be using Googlebot's. You'll need to do a reverse DNS check instead - https://developers.google.com/crawling/docs/crawlers-fetcher...
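The check Google documents is forward-confirmed reverse DNS; roughly this, as a Node sketch (IPv4 only):

    import { promises as dns } from "node:dns";

    // Resolve the client IP to a hostname, require a Google crawler domain,
    // then resolve that hostname forward again and confirm it maps back to
    // the same IP.
    async function isRealGooglebot(ip: string): Promise<boolean> {
      try {
        const hostnames = await dns.reverse(ip);
        const host = hostnames.find(
          (h) => h.endsWith(".googlebot.com") || h.endsWith(".google.com")
        );
        if (!host) return false;
        const addresses = await dns.resolve4(host);
        return addresses.includes(ip);
      } catch {
        return false;
      }
    }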

bakugo 5 days ago|
Most AI scrapers use normal browser user agents (usually random outdated Chrome versions, from my experience). They generally don't fake the UAs of legitimate bots like Googlebot, because Googlebot requests coming from non-Google IP ranges would be way too easy to block.
dewey 5 days ago|
> user agents and won't show the links to legitimate search engines, so Google and Bing won't see them

Worth noting that in general, if you do any "is this Google or not" check, you should always verify by IP address, as there are many people spoofing the Googlebot user agent.

https://developers.google.com/static/search/apis/ipranges/go...

More comments...