Posted by todsacerdoti 1 day ago
I would really like an easy way to serve some Markov chain nonsense to AI bots.
Feel free to test this with any classifier or cheapo LLM.
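Something like this only takes a few lines. A rough sketch of a word-level Markov chain generator (the seed corpus and any wiring into a server are placeholders, not part of any real setup):

```python
# Minimal sketch of a word-level Markov chain "nonsense" generator.
# seed.txt is a placeholder; in practice you'd feed it your own site's
# text so the garbage looks superficially on-topic to a scraper.
import random
from collections import defaultdict

def build_chain(text, order=2):
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=200):
    key = random.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        nxt = chain.get(tuple(out[-len(key):]))
        if not nxt:
            # dead end: re-seed from a random state and keep going
            key = random.choice(list(chain.keys()))
            out.extend(key)
            continue
        out.append(random.choice(nxt))
    return " ".join(out)

if __name__ == "__main__":
    corpus = open("seed.txt").read()
    print(generate(build_chain(corpus)))
```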
I wouldn't be surprised if all this AI stuff was just a global conspiracy to get everyone to turn on JS.
Our open-source system can block IP addresses based on rules triggered by specific behavior.
Can you elaborate on what exact type of crawlers you would like to block? Like, a leaky bucket of a certain number of requests per minute?
Bad crawlers have been around since the very beginning. Some of them look for known vulnerabilities, some scrape content for third-party services. Most of them spoof UAs to pretend to be legitimate bots.
This is approximately 30–50% of traffic on any website.
The simplest approach is to treat the UA as risky, or to flag multiple 404 errors or HEAD requests, and block on that. Those are rules we already have out of the box.
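To make the idea concrete, that kind of behaviour rule boils down to something like this (thresholds and the block action are just illustrative, not our actual rule format):

```python
# Sketch of a behaviour rule: flag an IP after too many 404s or HEAD
# requests within a time window. Numbers and the "block" action are
# placeholders for illustration only.
import time
from collections import defaultdict, deque

WINDOW = 60          # seconds
MAX_SUSPICIOUS = 20  # 404s + HEADs tolerated per window

events = defaultdict(deque)   # ip -> timestamps of suspicious requests
blocked = set()

def observe(ip, method, status):
    now = time.time()
    if method == "HEAD" or status == 404:
        q = events[ip]
        q.append(now)
        while q and now - q[0] > WINDOW:
            q.popleft()
        if len(q) > MAX_SUSPICIOUS:
            blocked.add(ip)   # e.g. push the IP to a firewall deny list

def is_blocked(ip):
    return ip in blocked
```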
It's open source, so it's painless to write specific rules for rate limiting, hence my question.
Plus, we have developed a dashboard for manually choosing UA blocks by name, but we're still not sure whether that would really help website operators.
Depends on the goal.
The author wants his instance not to get killed. Request rate limiting may achieve that easily, in a way that's transparent to normal users.
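For example, a plain per-IP token bucket (numbers invented for illustration) is invisible to normal browsing but throttles anything hammering every URL:

```python
# Minimal per-IP token bucket: generous enough that humans never notice,
# tight enough that a crawler fetching every page gets slowed down.
# RATE and BURST are made-up example values.
import time

RATE = 2.0    # requests refilled per second
BURST = 30    # maximum burst size

buckets = {}  # ip -> (tokens, last_refill_time)

def allow(ip):
    now = time.time()
    tokens, last = buckets.get(ip, (BURST, now))
    tokens = min(BURST, tokens + (now - last) * RATE)
    if tokens < 1:
        buckets[ip] = (tokens, now)
        return False          # respond with 429 Too Many Requests
    buckets[ip] = (tokens - 1, now)
    return True
```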
It's trivial to spoof UAs unfortunately.
Problem is, bots can easily resort to residential proxies, at which point you'll end up blocking legitimate traffic.
It's easy to assume "I received a lot of requests, therefore the problem is too many requests", but you can successfully handle a lot of requests.
This is a clever way of doing a minimally invasive botwall though - I like it.
That sounds like a bug.
There is a point where your web server becomes fast enough that the scraping problem becomes irrelevant. Especially at the scale of a self-hosted forge with a constrained audience. I find this to be a much easier path.
I wish we could find a way to not conflate the intellectual property concerns with the technological performance concerns. Conflating them seems to be what keeps the AI scraping drama going in many ways. We can definitely make the self-hosted git forge so fast that anything short of ~a federal crime would have no meaningful effect.
It isn't just the volume of requests, but also bandwidth. There have been cases where scraping represents >80% of a forge's bandwidth usage. I wouldn't want that to happen to the one I host at home.
The market price for bandwidth in a central location (USA or Europe) is around $1–2 per TB, and less if you buy in bulk. I think it's somewhat cheaper in Europe than in the USA due to vastly stronger competition. Hetzner includes 20 TB outgoing with every Europe VPS plan, with €1/TB + VAT overage. Most providers aren't quite so generous, but still not that bad. How much are you actually spending?
Obviously such a thing will never happen, because the web and culture went in a different direction. But if it were a mainstream thing, you'd get easy-to-consume archives (also useful for regular archival and data hoarding), and the "live" versions of sites wouldn't have their logs bogged down by stupid spam.
Or if PoW were a proper web standard with no JS, people who want to tell AI and other crawlers to fuck off could at least make it uneconomical to crawl their stuff en masse. In my view, proof of work that works through plain headers should be as ubiquitous as TLS these days.
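Hypothetically (this isn't an existing standard), the whole handshake could be a challenge header plus a nonce header: the server hands out a random challenge and a difficulty, the client brute-forces a nonce, and any HTTP client can do it with zero JS. A sketch:

```python
# Hypothetical header-based proof of work (not a real standard):
# the server issues a challenge; the client must find a nonce such that
# sha256(challenge + ":" + nonce) starts with N zero bits, and sends it
# back in a request header.
import hashlib
import os

DIFFICULTY_BITS = 20  # ~1M hashes per request on average; tune to taste

def make_challenge():
    return os.urandom(16).hex()

def leading_zero_bits(digest):
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def verify(challenge, nonce):
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= DIFFICULTY_BITS

def solve(challenge):
    nonce = 0
    while not verify(challenge, str(nonce)):
        nonce += 1
    return str(nonce)

# e.g. server: 401 with "X-PoW-Challenge: <challenge>"
#      client: retries with "X-PoW-Nonce: <solve(challenge)>"
```

Cheap for one human page view, expensive at millions of requests per day.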
For a single user or a small team this could be enough.
I think the idea that you can block bots and allow humans is fallacious.
We should focus on the specific behaviour that causes problems (like making a bajillion requests, one for each commit, instead of cloning the repo), and block clients that behave that way. If these bots learn to request at a reasonable pace, who cares whether they are bots, humans, bots under the control of an individual human, or bots owned by a huge company scraping for training data? Once you make your code (or anything else) public, trying to limit access to only a certain class of consumers is a waste of effort.
Also, perhaps I'm biased, because I run SearXNG and Crawl4AI (and a few ancillaries like Jina reranking, etc.) in my homelab, so I can tell my AI to perform live internet searches, and it can fetch pretty much any website. For code it has a way to clone stuff, but for things like issues, discussions, and PRs it goes mostly to GitHub.
I like that my AI can browse almost like me. I think this is how a lot of the web will be consumed in the future (except sites like this one that are an actual pleasure to use).
The models sometimes hit sites they can't fetch; for those I use Firecrawl. I use an MCP proxy that lets me rewrite the tool descriptions, so my models get access to both my local Crawl4AI and the hosted (and rather expensive) Firecrawl, but they are told to use Firecrawl only as a last resort.
The more people use these kinds of solutions, the more incentive there will be for sites not to block users who use automation. Of course, sites will have to rely on alternative monetisation methods, but I think these stupid captchas will eventually disappear and reasonable rate limiting will prevail.
I think I see many prompt injections in your future. Like captchas with a special bypass solution just for AIs that leads to special content.
Since I don't make this node publicly accessible, there's no need to worry about AI web crawlers :)