Posted by todsacerdoti 1 day ago

How I protect my Forgejo instance from AI web crawlers (her.esy.fun)
166 points | 84 comments
pedrozieg 19 hours ago|
What I like about this approach is that it quietly reframes the problem from “detect AI” to “make abusive access patterns uneconomical”. A simple JS+cookie gate is basically saying: if you want to hammer my instance, you now have to spin up a headless browser and execute JS at scale. That’s cheap for humans, expensive for generic crawlers that are tuned for raw HTTP throughput.
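To make that concrete, the gate can be as small as this (a rough Python/WSGI sketch of the idea, not the article's actual nginx setup; the cookie name and wrapper are made up):

    # If the expected cookie is missing, serve a tiny page whose inline JS sets
    # the cookie and reloads. Humans with JS pass once; curl-style crawlers don't.
    GATE_COOKIE = "browser_check"          # hypothetical name

    CHALLENGE = b"""<!doctype html>
    <script>
      document.cookie = "browser_check=1; path=/; max-age=86400";
      location.reload();
    </script>
    <noscript>Please enable JavaScript to view this site.</noscript>"""

    def js_cookie_gate(app):
        def gated(environ, start_response):
            if GATE_COOKIE + "=" in environ.get("HTTP_COOKIE", ""):
                return app(environ, start_response)   # cookie present: pass through
            start_response("403 Forbidden", [("Content-Type", "text/html")])
            return [CHALLENGE]
        return gated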

The deeper issue is that git forges are pathological for naive crawlers: every commit/file combo is a unique URL, so one medium repo explodes into Wikipedia-scale surface area if you just follow links blindly. A more robust pattern for small instances is to explicitly rate limit the expensive paths (/raw, per-commit views, “download as zip”), and treat “AI” as an implementation detail. Good bots that behave like polite users will still work; the ones that try to BFS your entire history at line rate hit a wall long before they can take your box down.
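Rate limiting the expensive paths doesn't need much machinery either. A hedged sketch (hypothetical path prefixes and numbers; a per-IP token bucket that a real deployment would more likely do in the reverse proxy):

    import time
    from collections import defaultdict

    # Hypothetical prefixes for a forge's expensive endpoints; adjust to taste.
    EXPENSIVE_PREFIXES = ("/raw/", "/commit/", "/archive/")
    RATE = 1.0        # tokens refilled per second
    BURST = 30.0      # max bucket size

    buckets = defaultdict(lambda: {"tokens": BURST, "ts": time.monotonic()})

    def allow(ip: str, path: str) -> bool:
        """Return False when an expensive path exceeds the per-IP budget."""
        if not path.startswith(EXPENSIVE_PREFIXES):
            return True                      # cheap paths are never limited
        b = buckets[ip]
        now = time.monotonic()
        b["tokens"] = min(BURST, b["tokens"] + (now - b["ts"]) * RATE)
        b["ts"] = now
        if b["tokens"] >= 1.0:
            b["tokens"] -= 1.0
            return True
        return False                         # caller should respond 429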

nucleardog 18 hours ago||
Yeah, this is where I landed a while ago. What problem am I _really_ trying to solve?

For some people it's an ideological one--we don't want AI vacuuming up all of our content. For those, "is this an AI user?" is a useful question to answer. However it's a hard one.

For many the problem is simply "there's a class of users putting way too much load on the system and it's causing problems". Initially I was playing whack-a-mole with this and dealing with alerts firing on a regular basis because of Meta crawling our site very aggressively, not backing off when errors were returned, etc.

I looked at rate limiting but the work involved in distributed rate limiting versus the number of offenders involved made the effort look a little silly, so I moved towards a "nuke it from orbit" strategy:

Requests are bucketed by class C subnet (31.13.80.36 -> 31.13.80.x) and request rate is tracked over 30-minute windows. If the request rate over that window exceeds a very generous threshold (I've only seen a few very obvious and poorly behaved crawlers exceed it), it fires an alert.

The alert kicks off a flow where we look up the ASN covering every IP in that range, look up every range associated with those ASNs, and throw an alert in Slack with a big red "Block" button attached. When approved, the entire ASN is blocked at the edge.

It's never triggered on anything we weren't willing to block (e.g., a local consumer ISP). We've dropped a handful of foreign providers, some "budget" VPS providers, some more reputable cloud providers, and Facebook. It didn't take long before the alerts stopped--both for high request rates and our application monitoring seeing excessive loads.

If anyone's interested in trying to implement something similar, there's a regularly updated database of ASN <-> IP ranges announced here: https://github.com/ipverse/asn-ip
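For anyone curious what the bucketing half of this looks like, a rough sketch (IPv4 only; the window and threshold here are placeholders, not our actual numbers):

    import time
    from collections import defaultdict, deque

    WINDOW = 30 * 60          # 30-minute window
    THRESHOLD = 50_000        # placeholder for the "very generous" per-/24 budget

    hits = defaultdict(deque)  # "31.13.80.x" -> timestamps of recent requests

    def record(ip: str) -> bool:
        """Bucket by /24 and return True when a bucket crosses the threshold."""
        bucket = ip.rsplit(".", 1)[0] + ".x"
        q = hits[bucket]
        now = time.time()
        q.append(now)
        while q and q[0] < now - WINDOW:
            q.popleft()
        return len(q) > THRESHOLD   # caller kicks off the ASN lookup / alert flow

The alert side then maps the offending range to its ASN and pulls every prefix that ASN announces (which is what the ipverse data gives you per ASN) before a human approves the block.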

embedding-shape 17 hours ago|||
> If anyone's interested in trying to implement something similar, there's a regularly updated database of ASN <-> IP ranges announced here: https://github.com/ipverse/asn-ip

What exactly is the source of these mappings? I'd never heard of ipverse before; it seems to be a semi-anonymous GitHub organization, and their website has had a failing certificate for more than a year now.

cmrx64 16 hours ago||
Whois (delegation files), according to the embedded blog post, e.g. https://ftp.arin.net/pub/stats/arin/delegated-arin-extended-...
sgc 17 hours ago||||
You ban the ASN permanently in this scenario?
doctorpangloss 15 hours ago|||
I don't know. Use PATs. The long-term solution is Web Environment Integrity by another name.
hombre_fatal 14 hours ago|||
It depends what your goal is.

Having to use a browser to crawl your site will slow down naive crawlers at scale.

But it wouldn't do much against individuals typing "what is a kumquat" into their local LLM tool that issues 20 requests to answer the question. They're not really going to care or notice if the tool has to use a Playwright instance instead of curl.

Yet it's that use case that is responsible for ~all of my AI bot traffic according to Cloudflare, and it's 30x the traffic of direct human users. In my case, being a forum, it made more sense to just block the traffic.

ethmarks 14 hours ago||
Maybe a stupid question but how can Cloudflare detect what portion of traffic is coming from LLM agents? Do agents identify themselves when they make requests? Are you just assuming that all playwright traffic originated from an agent?
pm215 16 hours ago|||
I'm curious about whether there are well-coded AI scrapers that have logic for "aha, this is a git forge, git clone it instead of scraping, and git fetch on a rescrape". Why are there apparently so many naive (but still coded to be massively parallel and botnet-like, which is not naive in that aspect) crawlers out there?
ffsm8 15 hours ago|||
I'm not an industry insider and not the source of this fact, but it's been previously stated that the traffic cost of re-fetching the current data for each training run is cheaper than caching it locally in any way - whether it's a git repo, static sites, or any other content available over HTTP.
pm215 15 hours ago||
This seems nuts and suggests maybe the people selling AI scrapers their bandwidth could get away with charging rather more than they do :)
ncruces 7 hours ago||||
If they're handling it as “website, don't care” (because they're training on everything online) they won't know.

If they're treating it specifically as a “code forge” (because they're after coding use cases), there's lots of interesting information that you won't get by just cloning a repo.

It's not just the current state of the repo, or all commits (and their messages). It's the initial issue (and discussion) that led to a pull request (and review comments) that eventually gets squashed into a single commit.

The way you code with an agent is a lot more similar to the issue, comments, change, review, refinement sequence that you get by slurping the website.

telliott1984 14 hours ago||||
I'd see this as coming down to incentives. If you can scrape naively and it's cheap, what's the benefit to you in doing something more efficient for a git forge? How many other edge cases are there where you could potentially save a little compute/bandwidth, but would need to implement a whole other set of logic?

Unfortunately, this kind of scraping seems to inconvenience the host way more than the scraper.

Another tangent: there probably are better behaved scrapers, we just don't notice them as much.

the_biot 15 hours ago||||
True, and it doesn't get mentioned enough. These supposedly world-changing advanced tech companies sure look sloppy as hell from here. There is no need for any of this scraping.
LtWorf 9 hours ago|||
I guess they're vibe coded :D
agumonkey 12 hours ago||
what's next: you can only read my content after mining btc and wiring it to $wallet->address
BLKNSLVR 20 hours ago||
I really don't know how effective my little system would be against these scrapers, but I've set up a system that blocks IP addresses if they've attempted to connect to ports on my system(s) behind which there are no services, and therefore their connections must be 'uninvited', which I classify as malicious.

Since I do actually host a couple of websites / services behind port 443, it means I can't just block everything that tries to scan my IP address at port 443. However, I've set up Cloudflare in front of those websites, so I do log and block any non-Cloudflare (using Cloudflare's ASN: 13335) traffic coming into port 443.

I also log and block IP addresses attempting to connect on port 80, since that's essentially deprecated.
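Conceptually the 80/443 handling boils down to something like this (just a sketch, not my actual firewall config; the ranges file is whatever you export for AS13335):

    import ipaddress

    # Prefixes announced by Cloudflare (AS13335), e.g. exported from the ipverse
    # asn-ip data or Cloudflare's published IP list. File name is illustrative.
    with open("as13335-ipv4.txt") as f:
        CF_NETS = [ipaddress.ip_network(line.strip()) for line in f if line.strip()]

    def allowed(src_ip: str, dst_port: int) -> bool:
        if dst_port == 80:
            return False                       # port 80 is log-and-block
        if dst_port == 443:
            addr = ipaddress.ip_address(src_ip)
            return any(addr in net for net in CF_NETS)  # Cloudflare only
        return True                            # other ports: handled by the scanner logic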

This, of course, does not block traffic coming via the DNS names of the sites, since that will be routed through Cloudflare - but as someone mentioned, Cloudflare has its own anti-scraping tools. And then as another person mentioned, this does require the use of Cloudflare, which is a vast centralising force on the Internet and therefore part of a different problem...

I don't currently split out a separate list for IP addresses that have connected to HTTP(S) ports, but maybe I'll do that over Christmas.

This is my current simple project: https://github.com/UninvitedActivity/UninvitedActivity

Apologies if the README is a bit rambling. It's evolved over time, and it's mostly for me anyway.

P.S. I always thought it was Yog Sothoth (not Sototh). Either way, I'm partial to Nyarlathotep. "The Crawling Chaos" always sounded like the coolest of the elder gods.

mmarian 1 hour ago||
Some scrapers/scanners use residential IPs. Aren't you worried you'll end up blocking legitimate traffic?
ewpratten 20 hours ago||
Regarding the Cloudflare part of this, I’d recommend taking a look at “Authenticated Origin Pulls”. It lets you perform your validation at the TLS layer instead of doing it with IP ACLs if that interests you.
Simplita 21 hours ago||
We ran into similar issues with aggressive crawling. What helped was rate limiting combined with making intent explicit at the entry point, instead of letting requests fan out blindly. It reduced both load and unexpected edge cases.
cheshire_cat 21 hours ago|
What do you mean by "making intent explicit at the entry point"?
maelito 23 hours ago||
I'm having lots of connections every day from Singapore. It's now the main country... despite the whole website being French-only. AI crawlers, for sure.

Thanks for this tip.

arjie 23 hours ago||
Amazonbot does this despite my efforts in robots.txt to help it out. I look at all the Singapore requests and they’re Amazonbot trying to get various variants of the Special:RecentChanges page. You’re wasting your time, Amazonbot. I’m trying to help you.
reconnecting 21 hours ago||
Did you check the IP addresses behind this UA?
arjie 5 hours ago||
Yeah, a while ago - they were all Singapore IPs reporting Amazonbot. Here is an example request:

    "GET /w/A_Wedding_at_City_Hall HTTP/1.1" 200 9677 "-" "Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Amazonbot/0.1; +https://developer.amazon.com/support/amazonbot) Chrome/119.0.6045.214 Safari/537.36"
The actual IP is in X-Forwarded-For and I didn't keep that.
input_sh 22 hours ago|||
Fun fact: you don't get rid of them even when you put a captcha on all visitors from Singapore. I still see a spike in traffic that perfectly matches the spike in served captchas, but this time it's geographically distributed between places like Iraq, Bangladesh and Brazil.

Hopefully it at least costs them a little bit more.

reconnecting 21 hours ago||
Usually, there are multiple layers of different counter-protection measures. If you block by country, they shift to different IP ranges; if you block by IP, they might use a new IP for every request; and they escalate further depending on the bot owner and your actions.
sunaookami 15 hours ago||
Yeah same for my Gitea instance. These were all ByteDance and Tencent ASNs from some AWS-equivalent. Blocked the whole subnet belonging to them in my server's ufw and haven't had any problems since then. Same for Vultr and Google Cloud.
flexagoon 16 hours ago||
Oh hey, I wrote the "you don't need anubis" post you (or the post author, if that's not you) got inspiration from! Glad to hear it helped!
andai 22 hours ago||
Can someone help me understand where all this traffic is coming from? Are there thousands of companies all doing it simultaneously? How come even small sites get hammered constantly? At some point haven't you scraped the whole thing?
marginalia_nu 20 hours ago||
> At some point haven't you scraped the whole thing?

Git forges will expose a version of every file at every commit in the project's history. If you have a medium-sized project consisting of, say, 1,000 files and 10,000 commits, the crawler will identify a number of URLs on the same order of magnitude as English Wikipedia, just for that one project. This is also very expensive for the git forge, as it needs to reconstruct the historical files from a bunch of commits.

Git forges interact spectacularly poorly with naively implemented web crawlers, unless the crawlers put in logic to avoid exhaustively crawling git forges. You honestly get a pretty long way just excluding URLs with long base64-like path elements, which isn't hard but it's also not obvious.
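For example, a frontier filter along these lines (a sketch; the length cutoffs are arbitrary) already skips most per-commit URLs:

    import re

    # Path segments that look like commit hashes or base64-ish blob ids:
    # long runs of hex or base64-alphabet characters.
    HASHLIKE = re.compile(r"^[0-9a-fA-F]{20,}$|^[A-Za-z0-9+/_=-]{27,}$")

    def looks_like_forge_history(url_path: str) -> bool:
        """True if any path element looks like a commit hash or blob id."""
        return any(HASHLIKE.match(part) for part in url_path.split("/"))

    # e.g. skips "/repo/raw/commit/9f86d081884c7d659a2feaa0c55ad015a3bf4f1b/README.md"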

input_sh 21 hours ago|||
> How come even small sites get hammered constantly?

Because big sites have decades of experience fighting against scrapers and have recently upped their game significantly (even when doing so carries some SEO costs) so that they're the only ones that can train AI on their own data.

So now, when you're starting from scratch and your goal is to gather as much data as possible, targeting smaller sites with weak / non-existent scraping protection is the path of least resistance.

andai 18 hours ago||
No I meant like, if you have a blog with 10 posts.. do they just scrape the same 10 pages thousands of times?

Because people are reporting constant traffic, which would imply that the site is being scraped millions of times per year. How does that make any sense? Are there millions of AI companies?

marcthe12 17 hours ago||
Basically the scrapers don't bother to cache your website, or if they do, it's with an insanely low TTL. They also don't specialize the handling per content type, so the worst-hit sites are things like git hosting, because of the BFS-style scrape (following every link). The worst part is that a lot of this is done via tunneling, so the IP can be different each time or come from residential IPs, which makes it annoying.
bingo-bongo 22 hours ago|||
AI companies scrape to:

- have data to train on

- update the data more or less continuously

- answer queries from users on the fly

With a lot of AI companies, that generates a lot of scraping. Also, some of them behave terribly when scraping or are just bad at it.

adastra22 21 hours ago||
Why don’t they scrape once though?
blell 21 hours ago||
1) It may be out of date 2) Storing it costs money
reppap 21 hours ago|||
It's not just companies either; a lot of people run crawlers for their home lab projects too.
m0llusk 19 hours ago|||
It isn't only companies, it's a mass social movement. Anyone with basic coding experience can download some basic learning apparatus and start feeding it material. The latest LLMs make it extremely easy to compose code that scrapes internet sites, so only the most minimal skills are required. Because everything is "AI" now, aspiring young people are encouraged to do this in order to gain experience so they can get jobs and careers in the new AI-driven economy.
devsda 22 hours ago||
May be the teams developing AI crawlers are dogfooding & are using the AI itself(and its small context) to keep track of the sites that are already scraped. /s
everfrustrated 15 hours ago||
I think what gets lost in this is that we should expect a lot more traffic from AI, simply because if I ask an AI to answer my question, it will do a lot more work and fetch from a lot more websites in generating a reply to me. And yes, searching over git repos will absolutely be part of that.

This is all "legitimate" traffic in that it isn't about crawling the internet but in service of a real human.

Put another way, search is moving from a model of crawl the internet and query on cached data to being able to query on live data.

ethin 15 hours ago||
I agree, and I think everyone agreeing or disagreeing with you (and sysadmins everywhere) would be perfectly fine with these AI crawlers (well, mostly...) if these corporations wrote them properly, followed best practices and standards, and didn't effectively DDoS servers or pretend to be what they aren't. Because that is, ultimately, what these AI companies are: very effective, for-sale, legal DDoSers. But the crawlers are not written properly, do not follow best practices and standards, DDoS everything you aim them at, and even go as far as pretending to be things they aren't, hiding behind residential IP addresses (which I'm pretty sure could potentially be illegal, because it risks getting people who have no idea what AI even is in trouble), etc. I don't think AI will replace search now, just because so much of the web is already blocked to these crawlers, and that will only increase, I'm sure. And honestly, I doubt there is anything these AI companies could do to make sysadmins trust them again.
hombre_fatal 15 hours ago||
In some ways that's true.

But when it comes to git repos, an LLM agent like claude code can just clone them for local crawling, which is far better than crawling remotely, and it's the "Right Way" for various reasons.

Frankly I suspect AI agents will push search in the opposite direction from your comment and move us to distributed cache workflows. These tools just hit the origin because it's the easy solution of today, not because the data needs to be up to date to the millisecond.

Imagine a system where all those Fetch(url) invocations interact with a local LRU cache. That'd be really nice, and I think that's where we'd want to go, especially once more and more origin servers try to block automated traffic.
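A toy version of that (a hypothetical helper, just to show the shape; a real one would respect Cache-Control and share the cache across tools):

    import time, urllib.request
    from collections import OrderedDict

    CACHE_TTL = 300          # seconds; arbitrary for the sketch
    CACHE_SIZE = 1024
    _cache = OrderedDict()   # url -> (fetched_at, body)

    def cached_fetch(url: str) -> bytes:
        """Fetch a URL through a small local LRU cache instead of hitting origin."""
        now = time.time()
        hit = _cache.get(url)
        if hit and now - hit[0] < CACHE_TTL:
            _cache.move_to_end(url)          # refresh LRU position
            return hit[1]
        body = urllib.request.urlopen(url, timeout=10).read()
        _cache[url] = (now, body)
        _cache.move_to_end(url)
        if len(_cache) > CACHE_SIZE:
            _cache.popitem(last=False)       # evict least-recently-used entry
        return body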

petee 20 hours ago||
You can also add honeypot URLs to your robots.txt to trap bots that are using it as an index.
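For example, disallow a path that nothing links to, then treat any hit on it as a confession (a sketch with made-up names; the real version is just these two pieces wired into your server and firewall):

    # robots.txt:
    #   User-agent: *
    #   Disallow: /totally-not-a-trap/
    #
    # Any client that requests the path anyway was using robots.txt as an index.

    BANNED = set()

    def handle_request(ip: str, path: str) -> int:
        if path.startswith("/totally-not-a-trap/"):
            BANNED.add(ip)          # hand this off to your firewall / blocklist
            return 403
        if ip in BANNED:
            return 403
        return 200                  # normal handling elsewhere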
s_ting765 19 hours ago||
I use the same exact trick from the source the article mentions.

I call it `temu` anubis. https://github.com/rhee876527/expert-octo-robot/blob/f28e48f...

Jokes aside, the whole web seems to be trending towards some kind of wall (pay, login, app etc.) and this ultimately sucks for the open internet.

BLKNSLVR 19 hours ago|
You missed the obvious portmanteau:

Temubis

userbinator 23 hours ago|
> Unfortunately this means, my website could only be seen if you enable javascript in your browser.

Or have a web-proxy that matches on the pattern and extracts the cookie automatically. ;-)
