Top
Best
New

Posted by LucidLynx 9 hours ago

Miasma: A tool to trap AI web scrapers in an endless poison pit (github.com)
234 points | 183 comments
bobosola 4 hours ago|
I dunno... it feels like the same approach as those people who tell you gleeful stories of how they kept a phone spammer on a call for 45 minutes: "That'll teach 'em, ha ha!" Do these types of techniques really work? I’m not convinced.

Also, inserting hidden or misleading links is specifically a no-no for Google Search [0], who have this to say: "We detect policy-violating practices both through automated systems and, as needed, human review that can result in a manual action. Sites that violate our policies may rank lower in results or not appear in results at all."

So you may well end up doing more damage to your own site than to the bots by using dodgy links in this manner.

[0]https://developers.google.com/search/docs/essentials/spam-po...

trinsic2 4 hours ago||
>I dunno... it feels like the same approach as those people who tell you gleeful stories of how they kept a phone spammer on a call for 45 minutes: "That'll teach 'em, ha ha!" Do these types of techniques really work? I’m not convinced

If you are automating it, I don't see why not. Kitboga, a YouTuber, keeps scam callers in AI call-center loops, tying up their resources so they can't use them on unsuspecting victims.[0]

That's a guerrilla tactic. Similarly in warfare, when you steal resources from an enemy, you get stronger and they get weaker; it's pretty effective.

[0]: https://www.youtube.com/watch?v=ZDpo_o7dR8c

phplovesong 3 hours ago|||
Pretty easy. Get a paid number and have the phone scammers / marketers call that. I know a guy who made a decent side hustle from this. The marketers slowly blocked his number though; not sure if he still has this thing going, as it was more of an experiment.
yareally 1 hour ago||
Was he picking up the phone and telling them to call him back on the other number?
bdangubic 3 hours ago|||
more and more scammers are automating their side as well so soon the loop will be just bots talking to bots
ordu 1 hour ago|||
> it feels like the same approach as those people who tell you gleeful stories of how they kept a phone spammer on a call for 45 minutes: "That'll teach 'em, ha ha!" Do these types of techniques really work? I’m not convinced.

In the 2000s there was a company in Russia selling English courses. It spammed so much that people were really pissed off. To make a long story short, the company disappeared from public space when Golden Telecom joined the party of retaliatory "spam" calls and set a computer to call the company using Golden Telecom's modem pool.

So, yeah, you kinda can achieve something this way, but to be sure you should lease a modem pool for it.

rogerrogerr 1 hour ago|||
> gleeful stories of how they kept a phone spammer on a call for 45 minutes: "That'll teach 'em, ha ha!" Do these types of techniques really work? I’m not convinced.

It’s one of the best time investments I’ve ever made. They just don’t call me anymore.

I think they have two lists: the “do not call” list, and the “unprofitable to call” list. You want to be on the latter list.

xyzal 4 hours ago|||
One would assume legit spiders obey robots.txt.
lolc 3 hours ago||
This, to me, is the strongest argument to offer these slop generators. It provides an incentive to follow the robots.txt.
bugfix 2 hours ago|||
I really don't get it. Wouldn't you be wasting a lot of resources feeding the bots like this?
chongli 4 hours ago|||
Also, inserting hidden or misleading links is specifically a no-no for Google Search [0]

Depending on your goals, this may be a pro or a con. I, personally, would like to see a return of "small web" human-centric communities. If there were tools that include anti-scraping, anti-Google (and other large search crawlers) as well as a small web search index for humans to find these sites, this idea becomes a real possibility.

maxrmk 2 hours ago||
It’s easy to opt out of being indexed by Google.
cdrini 2 hours ago||
Exactly. Identifying crawlers like Google and Bing aren't the issue. They obey robots.txt and can easily be blocked by user agent checks. Non-identifying crawlers, which present humanlike user agents and are usually distributed so they get around IP-based rate limits, are the main ones that are challenging to deal with.
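For what it's worth, the easy half of that really is easy. A minimal sketch of a user-agent check for the self-identifying crawlers (the token list is illustrative, not exhaustive, though these particular tokens are the well-known ones those bots send):

```python
# Sketch: block (or allow) self-identifying crawlers by user-agent token.
# The list below is illustrative; a real deployment would keep it updated.
KNOWN_BOTS = ("Googlebot", "bingbot", "GPTBot", "ClaudeBot")

def is_identifying_crawler(user_agent: str) -> bool:
    """True if the user agent announces itself as a known crawler."""
    ua = user_agent.lower()
    return any(token.lower() in ua for token in KNOWN_BOTS)

assert is_identifying_crawler(
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)")
assert not is_identifying_crawler(
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36")
```

The hard problem is exactly the clients this check can't catch: the ones sending that second, browser-like user agent.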
iririririr 3 hours ago|||
yes, it works.

phone scammers have very high personnel costs, which is why some resort to human trafficking.

if everyone picked up the phone and wasted a few seconds, it would be enough to make the whole enterprise worthless. but since most people don't, the operations that aren't shut down right away have the best ROI of any industry. they don't even pay for the first seconds of the call.

throw10920 2 hours ago|||
> I’m not convinced.

Is this how low we've sunk - that even below taking a single personal anecdote and generalizing it to everything - now we're taking zero experience and dismissing things based on vibes?

I've seen lots of LLM-slop-lovers doing the same thing. Maybe it's a pattern.

phplovesong 3 hours ago||
Who TF cares about Google? This is mostly for personal tech stuff (just the stuff AI steals for training). I'd say it's pretty welcome that it's not shown in Google results.
tasuki 6 hours ago||
> If you have a public website, they are already stealing your work.

I have a public website, and web scrapers are stealing my work. I just stole this article, and you are stealing my comment. Thieves, thieves, and nothing but thieves!

margalabargala 4 hours ago||
The problem I have is that they hammer my site so hard they take it down.

The content is for everyone. They can have it. Just don't also take it away from everybody else.

ethmarks 3 hours ago|||
Unintentional denial-of-service attacks from AI scrapers are definitely a problem, I just don't know if "theft" is the right way to classify them. They shouldn't get lumped in with intellectual property concerns, which are a different matter. AI scrapers are a tragedy of the commons problem kind of like Kessler syndrome: a few bad actors can ruin low Earth orbit for everyone via space pollution, which is definitely a problem, but saying that they "stole" LEO from humanity doesn't feel like the right terminology. Maybe the problem with AI scrapers could be better described as "bandwidth pollution" or "network overfishing" or something.
oasisbob 6 minutes ago|||
Theft isn't far off, it seems closer to me than using the word for IP violations.

When a crawler aggressively crawls your site, they're permanently depriving you the use of those resources for their intended purpose. Arguably, it looks a lot like conversion.

margalabargala 3 hours ago||||
Yes I completely agree.
FeepingCreature 3 hours ago|||
you're totally right about not being theft, but we have a term. you used it yourself, "distributed denial of service". that's all it is. these crawlers should be kicked off the internet for abuse. people should contact the isp of origin.
ethmarks 3 hours ago||
Firstly, since this argument is about semantic pedantry anyways, it's just denial-of-service, not distributed denial-of-service. AI scraper requests come from centralized servers, not a botnet.

Secondly, denial-of-service implies intentionality and malice that I don't think is present from AI scrapers. They cause huge problems, but only as a negligent byproduct of other goals. I think that the tragedy of the commons framing is more accurate.

EDIT: my first point was arguably incorrect because some scrapers do use decentralized infrastructure and my second point was clearly incorrect because "denial-of-service" describes the effect, not the intention. I retract both points and apologize.

FeepingCreature 2 hours ago|||
Sufficiently advanced negligence is indistinguishable from malice. There is a point you no longer gain anything from treating them differently.
cdrini 2 hours ago|||
The first is incorrect; these scrapers are usually distributed across many IPs, in my experience. I usually refer to them as "distributed, non-identifying crawlers (DNCs)" when I want to be maximally explicit. (The worst I've seen is some crawler/botnet making exactly one request per IP -_-)
aduwah 2 hours ago||
I think the second is incorrect too. DDoS is a DDoS no matter what the intent is.
pmlnr 3 hours ago|||
Been there recently. Rate limiting on nginx and anti-SYN-flood on pf solved it.
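A minimal sketch of the nginx side (zone name, rate, and burst are illustrative, not tuned values):

```
# In the http {} context: track clients by IP, allow ~5 requests/second each.
limit_req_zone $binary_remote_addr zone=perip:10m rate=5r/s;

server {
    location / {
        # Queue up to 20 bursty requests per client; reject the rest.
        limit_req zone=perip burst=20;
    }
}
```

As the sibling comments note, this only helps against crawlers that reuse IPs.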
spiderfarmer 20 minutes ago||
I'm being hit with 300 req/s 24/7 from hundreds of thousands of unique IPs on residential proxies. I can't rate limit any further without hurting the real users.
oasisbob 4 minutes ago||
Yeah, IP-based rate limits are nearly ineffective these days.
kseniamorph 1 hour ago|||
> nothing but thieves!

Cool band btw
coldpie 4 hours ago|||
I agree theft isn't a good analogy, but there is something similar going on. I put my words out into the world as a form of sharing. I enjoy reading things others write and share freely, so I write so others might enjoy the things I write. But now the things I write and share freely are being used to put money in the bank accounts of the worst people on the planet. They are using my work in a way I don't want it to be used. It makes me not want to share anymore.
gruez 4 hours ago|||
>but there is something similar going on [...]

No, what you're basically describing is "I shared something but then I didn't like how it ended up being used". If you put stuff out in public for anyone to use, then find out it's used in a way you don't like, it's your right to stop sharing, but it's not "similar" to stealing beyond "I hate stealing"

Hendrikto 4 hours ago||
> If you put stuff out in public for anyone to use, then find out it's used in a way you don't like

Nope. Copyright is a thing, licenses are a thing. Both are completely ignored by LLM companies, which was already proven in court, and for which they already had to pay billions in fines.

Just because something is publicly accessible, that does not mean everybody is entitled to abuse it for everything they see fit.

gruez 4 hours ago||
>Nope. Copyright is a thing, licenses are a thing. Both are completely ignored by LLM companies, which was already proven in court,

...the same courts that ruled that AI training is probably fair use? Fair use trumps whatever restrictions the author puts in their "licenses". If you're an author and it turned out that your book was pirated by AI companies, then fair enough, but "I put my words out into the world as a form of sharing" strongly implies that's not what was happening, e.g. it was a blog on the open internet or something.

FromTheFirstIn 3 hours ago||
I never understand why anyone wants authors to not be able to enforce copyright and licensing laws for AI training. Unless you are Anthropic or OAI it seems like a wild stance to have. It’s good when people are rewarded for works that other people value. If trainers don’t value the work, they shouldn’t train on it. If they do, they should pay for it.
FeepingCreature 3 hours ago|||
My own view is, I thought we were all agreed that the idea that Microsoft can restrict Wine from even using ideas from Windows, such that people who have read the leaked Windows source cannot contribute to Wine, was a horrible abuse of the legal system that we only went along with under duress? Now when it's our data being used, or more cynically when there's money to be made, suddenly everyone is a copyright maximalist.

No. Reading something, learning from it, then writing something similar, is legal; and more importantly, it is moral. There is no violation here. Copyright holders already have plenty of power; they must not be given the power to restrict the output of your brain forever more for merely having read and learnt. Reading and learning is sacred. Just as importantly, it's the entire damn basis of our profession!

If you do not want people to read and learn from your content, do not put it on the web.

FromTheFirstIn 34 minutes ago|||
If you want people to read and learn from each other, you should incentivize people to make content worth reading and learning from. Making LLM training a viable loophole for copyright law means there won’t be incentives to produce such work.
FromTheFirstIn 30 minutes ago|||
Re-reading your comment, I think we’re both generally anti-corporate-fuckery. I view the current batch of copyright pearl clutching to be an argument about if VCs are allowed to steal books to make their chatbots worth talking to, and the Wine/MSoft debate about if it should be legal to engage in anticompetitive behavior by restrictive use of copyright. In both of these cases the root of the issue isn’t really the copyright as an abstract- it’s the bludgeoning of the person with less money by use of overwhelming legal costs to have a day in court.
gruez 3 hours ago|||
>I never understand why anyone wants authors to not be able to enforce copyright and licensing laws for AI training.

Fair use is part of "copyright and licensing laws".

grumbelbart 2 hours ago||
Would using an actor's face and voice as training data be fair use?

What if the model then creates a virtual actor that is very close to the real actor?

gruez 2 hours ago||
>What if the model then creates a virtual actor that is very close to the real actor?

"Likeness" is a separate concept from copyrights

https://en.wikipedia.org/wiki/Personality_rights

hparadiz 30 minutes ago||
I wish I lived in the alternative timeline where open source folks didn't look a gift horse in the mouth and actually used these tools to copy left the shit out of software to the point where proprietary closed source software has no advantage.

But instead we've got people posting "honey pots" that an LLM will immediately detect and route around.

Lerc 30 minutes ago||||
It sounds like you wanted to believe you were sharing freely while sharing conditionally.
tasuki 4 hours ago||||
> But now the things I write and share freely are being used to put money in the bank accounts of the worst people on the planet.

I don't think that's the case. I'm not even arguing they aren't the worst people on the planet - might as well be. But all I see them doing is burning money all over the place.

FromTheFirstIn 4 hours ago||
They’re getting the money to burn, though
kmeisthax 4 hours ago|||
If you want a good analogy, try the enclosure of the commons in the British countryside. Communally managed grasslands were destroyed by noblemen with massive herds of cattle overgrazing the land, kickstarting a land grab that effectively forced people to enclose or be left behind themselves. Property is a virus that destroys all other forms of allocation.
spiderfarmer 6 hours ago||
If someone hands out cookies in the supermarket, are you allowed to grab everything and leave?
drfloyd51 5 hours ago|||
Odd thing about cookies… they disappear after one serving.

Websites are an endless stream of cookies.

The analogy doesn’t hold.

ghywertelling 5 hours ago|||
If copying content from one hard drive to another is theft, then so is DNA copying itself.

Everything is a Remix culture. We should promote remix culture rather than hamper it.

Everything is a Remix (Original Series) https://youtu.be/nJPERZDfyWc

subscribed 2 hours ago||||
Fine.

Me and my 9 friends stand around the cookie-serving person blocking everyone else.

It's taking all the cookies over a period of time.

The analogy was good.

GeoAtreides 4 hours ago||||
how about this analogy: I created a most tasty cookie recipe. I give it out for free, and all copies have my name because I am a vain person who likes to be known far and wide as the best baking chef ever. Is it ok to get the recipe, remove my name, and write in LLM-Codex as the creator? again, i'm ok with giving the recipe for free, i just want my name out there.
gruez 3 hours ago||
>Is it ok to get the recipe, remove my name, and write in LLM-Codex as the creator? again, i'm ok with giving the recipe for free, i just want my name out there.

From a legal perspective, it's a pretty clear "no". The instructions in recipes aren't copyrightable. The moral question is more ambiguous, but it's still pretty weak. Most recipes are uncredited, and it's unclear why someone can force everyone to attribute the recipe to them when all they realistically did was tweak the dish a bit. In the example above, I doubt you invented cookies.

GeoAtreides 2 hours ago||
i'm curious, do you honestly think the argument was about recipes and cookies? maybe it was an analogy? looking back up the comment tree, it does seem to be an analogy, not a discussion about ACTUAL cookies and ACTUAL recipes.
gruez 2 hours ago||
>maybe it was an analogy?

In that case it's a terrible analogy because if you can't get people to agree on the cookies case, what hope do you have to extend it to the case you're trying to apply the analogy to? It's like saying "You wouldn't pirate a movie, why would you pirate a blog post", because most people would pirate movies.

GeoAtreides 2 hours ago||
oh man.

my comment was about the very human need to be recognized for something created, made, or thought by a person. People are ok with writing blog posts, they're ok with writing software, and they're ok with giving it all away for free, but they want their name attached and their contribution recognized.

gruez 1 hour ago||
>my comment was about the very human need to be recognized for something created, made, or thought by a person.

And I specifically addressed that aspect:

>The moral question is more ambiguous, but it's still pretty weak. Most recipes are uncredited, and it's unclear why someone can force everyone to attribute the recipe to them when all they realistically did was tweak the dish a bit. In the example above, I doubt you invented cookies.

The cookies analogy was terrible because recipes are rarely credited, but even ignoring the terrible analogy the "recognition" argument still fails. If you wrote a blog post on how to set up kubernetes (or whatever), then it's fair enough that you get recognized for that specific blog post. If my friend asked me how to set up kubernetes, it wouldn't be cool for me to copy paste your blog post and send it over.

However similar to copyright, the recognition you deserve quickly drops off once it moves beyond that specific work. If I absorbed the knowledge from your blog post, then wrote another guide on setting up kubernetes, perhaps updated for my use case, it's unreasonable to require that you be credited. It might be nice, and often times people do, but it's also unreasonable if you wrote an angry letter demanding that you be credited. You weren't the inventor of kubernetes, and you probably got your knowledge of kubernetes from elsewhere (eg. the docs the creators made), so why should everyone have to credit you in perpetuity?

GeoAtreides 1 hour ago||
your ability to not address my argument's main point is something to behold. can't tell if you're doing it on purpose or not.

if humans read my blog posts and then wrote things without credit, that would be fine. i like human eyeballs and i like them on my content. that's exactly the purpose of the blog post (_in this particular example_), to get human eyeballs on the content.

gruez 1 hour ago||
>your ability to not address my argument main point is something to behold. can't tell if you're doing on purpose or not.

Or maybe you're just terrible at writing.

>if humans read my blog posts and then wrote things without credit, that would be fine.

I'm not sure how I (or anyone) was supposed to come away with this conclusion when you were writing stuff like:

"i'm ok with giving the recipe for free, i just want my name out there"

"the very human need to be recognized for something created"

"they want their name attached and their contribution recognized".

GeoAtreides 1 hour ago||
there is nothing contradictory in what i said, and if you weren't favoring a very literal interpretation of my argument you would agree.

but, in the spirit of critical reading education, what i meant is: human attention good, machine ingestion bad.

z3c0 5 hours ago||||
Digital information may be our first post-scarce resource. It's interesting, and sad, to see so many attempt to fit it within scarcity-based economic models.
Terretta 4 hours ago||
> digital information may be our first post-scarce resource

… browses memory and storage prices on NewEgg …

Hmm.

But the word digital is distracting us.

The word information is the important one. The question isn't where information goes. It's where information comes from.

Is new information post scarcity?

Can it ever be?

lou1306 3 hours ago||||
Bandwidth and compute constraints make websites anything but an endless stream though.
spiderfarmer 15 minutes ago||
That's exactly it. It costs me real time and money to serve the 97% of fake traffic that just takes without giving me anything in return.
throwaway613746 5 hours ago|||
[dead]
bengale 5 hours ago||||
It’s interesting to see twists on the old anti-piracy arguments recycled for anti-ai.
gruez 4 hours ago||
Turns out many (most?) people on the internet were never anti-copyright in the first place. They only went along with the anti-copyright position (or at least refused to challenge the anti-copyright people) because they wanted free movies and/or hated corporations.
subscribed 2 hours ago||
Many of these people live in countries where downloading for one's own use is lawful, since they're paying a copyright levy exactly to cover for that.

They don't have to hate the copyright.

falcor84 5 hours ago||||
That really depends, but the quick answer is that according to our human social contract, we'd just ask "how many can I take?". Until now, the only real tool to limit scrapers has been throttling, but I don't see any reason for there not to be a similar conversational social contract between machines.
volemo 5 hours ago||
Isn’t robots.txt such a “social contract between machines”? But AI scrapers couldn’t care less.
GaggiX 6 hours ago||||
I will copy the supermarket and paste it somewhere else.

I'm also going to download a car.

Bender 2 hours ago||||
> If someone hands out cookies in the supermarket, are you allowed to grab everything and leave?

Depends on the trust level of the society where the store resides.

The internet is a cesspool of vagrants, thieves, the mentally unstable, people and software with no impulse control, and pirates, and that is just talking about corporations. It gets so much worse with individuals.

pbasista 5 hours ago|||
This is a dishonest analogy. In your example, there is only a limited number of cookies available, while there is no practical limit on the number of times a piece of digital media can be viewed.

You are allowed to take one cookie. But you are allowed to view a public website multiple times if you so want.

spiderfarmer 4 hours ago|||
Multiple AI scrapers are downloading every page of my 6M page website as we speak. They don’t care about the fact that I have dedicated 20 years to building it, nor that I have to maintain multiple VPSes just to serve it to them.

If I can poison them and their families, I will.

joquarky 2 hours ago|||
> If I can poison them and their families, I will.

Don't post anything online that you don't want to be brought up in court later.

spiderfarmer 24 minutes ago||
Like the OP's solution, it was about the scrapers and the models they share their data with.
ImPostingOnHN 3 hours ago|||
Wow, how did you manually hand-write 6 million web pages? That is impressive. It would take me a while to even monotonically count that high.
subscribed 2 hours ago||
You're trying to use quite unfunny "sarcasm" to move the goalposts to a strawman (they never claimed they handcrafted these pages) and quickly gloss over the fact that it's 20 years of work, so why not?
throwaway613746 5 hours ago||||
[dead]
hollow-moe 5 hours ago|||
There sure is a limit to the load the server you're DDoSing can take, and to people's will to keep posting worthwhile content in public. The supply is limited, just not at the first degree. Let's make a small edit: are you allowed to take all the cookies and then sell them with a small ribbon with your name on it?
spiderfarmer 4 hours ago||
There is no arguing with pirates. They'll take what's yours and forget about you while you tend to the ashes.
CrzyLngPwd 4 hours ago||
Way back in the day I had a software product, with a basic system to prevent unauthorised sharing, since there was a small charge for it.

Every time I released an update, a new crack would appear. For the next six months I worked on improving the anti-copying code, until I stumbled across an article by a coder in the same boat as me.

He realised he was playing a game with some other coders: he would make the copy protection better, but the cracker would then have fun cracking it. It was a game of whack-a-mole.

I removed the copy protection, as he did, and got back to my primary role of serving good software to my customers.

I feel like trying to prevent AI bots, or any bots, from crawling a public web service, is a similar game of whack-a-mole, but one where you may also end up damaging your service.

Cpoll 3 hours ago||
> the cracker would then have fun cracking it.

I wonder if you could've won by making the cracking boring. No new techniques, bare minimum changes to require compiling a new crack, and just enough to make it difficult to automate. I.e. turn the cracking into a job.

But in reality, there are other community-driven motivations to put out cracks.

gruez 3 hours ago||
>No new techniques, bare minimum changes to require compiling a new crack, and just enough to make it difficult to automate.

From a practical perspective you also have to have a steady stream of features for the newer versions to be worth cracking. Otherwise why use v1.09 when v1.01 works fine? Moreover, putting less effort into improving the DRM is still playing the cat-and-mouse game, albeit with less time investment. If you're making minimal changes, the cracker also has to spend only minimal time updating the crack.

joquarky 1 hour ago||
So many problems could be solved by letting go.

Unfortunately social media and snowballing copyright maximalism has inflated egos to the point where more and more people think they need to control everything.

CrzyLngPwd 10 minutes ago||
If only I could go back in time 26 years and let myself know I was right to focus on my customers.
aldousd666 6 hours ago||
This is ultimately just going to give them training material for how to avoid this crap. They'll have to up their game to get good code. The arms race just took another step, and if you're spending money creating or hosting this kind of content, it's not going to make up for the money you're losing from your other content getting scraped. The bottom has always been threatening to fall out of the ads-for-eyeballs market, and nobody could anticipate the trigger for the downfall. Looks like we found it.
aldousd666 5 hours ago||
To be clear, I mean AI is going to be the downfall of ad-supported content. But let's face it: we have link farms and spam factories as a result of the ad-supported content market. I think this will eventually do justice for users, because it puts a premium on content quality: someone will want to pay a direct licensing fee to scrape quality content for their AI bots, as opposed to tricking somebody into clicking on a link and serving an impression for something they won't buy.
johneth 5 hours ago|||
> This is ultimately just going to give them training material for how to avoid this crap.

> The arms race just took another step, and if you're spending money creating or hosting this kind of content, it's not going to make up for the money you're losing by your other content getting scraped.

So we should all just do nothing and accept the inevitable?

ninjagoo 4 hours ago||
> So we should all just do nothing and accept the inevitable?

I daresay rate-limiting will result in better outcomes than well-poisoning with hidden links that are against the policies of search engines.

Lots of potential for collateral damage, including your own websites' reputations and search visibility, with the well-poisoning approach.

xantronix 4 hours ago|||
The README.md specifically states how to allow for nice robots to proceed unhindered. The people behind these efforts, I would imagine, don't particularly care about their sites' reputations in the cases people use LLMs for search.
ddtaylor 3 hours ago|||
To be honest, who cares about Google search anymore? It's pretty useless these days.
ninjagoo 2 hours ago||
The small non-profit I volunteer with finds Google ads to be surprisingly effective, and much more cost-effective than FB for what they do, so there's at least some Google search usage in the demographic that they serve.
subscribed 1 hour ago|||
So, if at the end of the day, instead of clicking EVERY single link in the repository, they just check it out and parse it locally... I would consider it a win.
Apocryphon 5 hours ago||
Tech is just a series of arms races
Art9681 4 hours ago||
Can't we simply parse out and remove any style="display: none;", aria-hidden="true", and tabindex="1" attributes before the text is processed and get around this trick? What am I missing?
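Roughly, yes, and it takes very little code. A minimal stdlib-only sketch of that stripping step (caveat: real pages also hide things via external CSS classes, off-screen positioning, zero-size fonts, etc., which this doesn't catch):

```python
# Sketch: extract only the text a human would see, skipping subtrees
# marked hidden inline (display:none or aria-hidden="true").
from html.parser import HTMLParser

class VisibleText(HTMLParser):
    def __init__(self):
        super().__init__()
        self.depth = 0      # >0 while inside a hidden subtree
        self.chunks = []

    def _hidden(self, attrs):
        a = dict(attrs)
        style = (a.get("style") or "").replace(" ", "").lower()
        return "display:none" in style or a.get("aria-hidden") == "true"

    def handle_starttag(self, tag, attrs):
        # Once inside a hidden subtree, every nested tag stays hidden.
        if self.depth or self._hidden(attrs):
            self.depth += 1

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if not self.depth and data.strip():
            self.chunks.append(data.strip())

p = VisibleText()
p.feed('<p>real text</p><a style="display: none;" href="/trap">poison</a>')
print(" ".join(p.chunks))  # -> real text
```

The reply below is the real catch: a scraper that does this filtering and still ignores robots.txt is trivially identifiable by which links it does or doesn't follow.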
hoistbypetard 3 hours ago||
If you do that and don't follow robots.txt, you are blocked. If you do that and follow robots.txt, fine. That's all we wanted you to do anyway. Just follow the instructions that well-behaved scrapers are meant to follow.
phplovesong 2 hours ago||
Just have the link visible, but CSS it so that it's either tiny as hell or just off screen. Google / bots will follow it, real people will never see it.
madeofpalk 7 hours ago||
Is there any evidence or hints that these actually work?

It seems pretty reasonable that any scraper would already have mitigations for things like this as a function of just being on the internet.

raincole 5 hours ago||
It might work against people who just use their Mac Mini with OpenClaw to summarize news every morning, but it certainly won't work against Google.

More centralized web ftw.

hexage1814 5 hours ago|||
It also probably won't work if the person actually wants your content and is checking whether the thing they scraped actually makes sense or is just noise. Like, none of these are new things. Site owners have been sending junk/fake data to web scrapers since web scraping was invented.
otherme123 5 hours ago||||
In my experience, Google (among others) plays nice. Just put "Disallow: /" in your robots.txt, and they won't bother you again.

My current problem is OpenAI, which scans massively, ignoring every limit, 426, 444 and whatever else you throw at them, and botnets from East Asia using one IP per scrape, but thousands of IPs.

LaGrange 5 hours ago|||
> It might work against people just use their Mini Mac with OpenClaw to summarize news every morning,

Good enough for me.

> More centralized web ftw.

This ain't got anything to do with "centralized web," this kind of epistemological vandalism can't be shunned enough.

sd9 7 hours ago|||
Even if it did work, I just can't bring myself to care enough. It doesn't feel like anything I could do on my site would make any material difference. I'm tired.
20k 7 hours ago||
I definitely get this. The thing that gives me hope is that you only need to poison a very small % of content to damage AI models pretty significantly. It helps combat the mass scraping, because a significant chunk of the data they get will be useless, and it's very difficult to filter by hand.
lucasfin000 5 hours ago||
The asymmetry is what makes this very interesting. The cost to inject poison is basically zero for the site owner, but the cost to detect and filter it at scale is significant for the scraper. That math gets a lot worse for them as more sites adopt it. It doesn't solve the problem, but it changes the economics.
xyzal 4 hours ago|||
About two years ago, I made up a reference to a nonexistent Python library and put code "using" it in just 5 GitHub repos. Several months later the free ChatGPT picked it up. So IMO it works.
logicprog 4 hours ago||
Via websearch? Or training?
bediger4000 5 hours ago|||
The search engine crawlers are sophisticated enough, but Meta's are not. Neither is Anthropic's Claude crawler. Source: personal experience trying garbage generators on Yandex, Blexbot, Meta's and Anthropics crawlers.

I'm completely uncertain that the unsophisticated garbage I generated makes any difference, much less "poisons" the LLMs. A fellow can dream, can't he?

spiderfarmer 6 hours ago|||
There are hundreds of bots using residential proxies. That is not free. Make them pay.
m00dy 6 hours ago|||
It won't work, especially on Gemini. Googlebot is very experienced when it comes to crawling. It might work on OpenAI and the others, maybe.
nubg 7 hours ago|||
What kind of mitigations? How would you detect the poison fountain?
avereveard 7 hours ago|||
style="display: none;" aria-hidden="true" tabindex="1"

many scrapers already know not to follow these, as it's how sites used to "cheat" PageRank by serving keyword soups

m00dy 6 hours ago|||
Google will give your website a penalty for doing this.
phplovesong 2 hours ago|||
You don't have to use this. You can have it visible but hide it from humans with other easy tricks.
cuu508 2 hours ago||
Scrapers can work around those other easy tricks too.
GaggiX 6 hours ago|||
Because the internet is noisy and not up to date, all recent LLMs are trained using Reinforcement Learning with Verifiable Rewards; if a model has learned the wrong signature of a function, for example, it would become apparent when executing the code.
phoronixrly 6 hours ago||
It does work, on two levels:

1. Simple, cheap, easy-to-detect bots will scrape the poison, and feed links to expensive-to-run browser-based bots that you can't detect in any other way.

2. Once you see a browser visit a bullshit link, you insta-ban it, as you can now see that it is a bot because it has been poisoned with the bullshit data.

My personal preference is using iocaine for this purpose though, in order to protect the entire server as opposed to a single site.
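Step 2 can be sketched in a few lines (the trap paths and handler here are illustrative toys, not iocaine's actual mechanism):

```python
# Hypothetical sketch: any client that requests a trap URL (a link only
# scrapers following hidden/poisoned links would ever find) is banned.
TRAP_PATHS = {"/pit/", "/poison/"}
banned: set[str] = set()

def handle(ip: str, path: str) -> int:
    """Return an HTTP status for this request (toy model)."""
    if ip in banned:
        return 403
    if any(path.startswith(p) for p in TRAP_PATHS):
        banned.add(ip)  # first trap hit: ban from now on
        return 403
    return 200

assert handle("1.2.3.4", "/index.html") == 200
assert handle("1.2.3.4", "/pit/abc") == 403     # stepped in the trap
assert handle("1.2.3.4", "/index.html") == 403  # now banned outright
```

The point is that no human ever hits a trap path, so a single trap hit is a high-confidence bot signal, even from an otherwise browser-like client.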

storus 38 minutes ago||
I am failing to see how this stops pre-training scraping. It still looks like legit code, playing nicely with the desired pre-training distribution. Obviously nobody is going to use it for SFT/DPO/GRPO later.
kristopolous 4 hours ago||
I did a related approach:

A toll-charging gateway for LLM scrapers: a modification to robots.txt to add price sheets in the comment field, like a menu.

This was for a hackathon by forking certbot. Cloudflare has an enterprise version of this but this one would be self hosted

I think it has legs but I think I need to get pushed and goaded otherwise I tend to lose interest ...

It was for the USDC company btw so that's why there's a crypto angle - this might be a valid use case!

I'm open to crypto not all being hustles and scams

Tell me what you think?

https://github.com/kristopolous/tollbot
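A hypothetical sketch of what such a robots.txt "menu" might look like (the comment format here is invented for illustration; see the repo for the actual scheme):

```
# Price sheet for LLM crawlers (illustrative format):
#   /posts/    0.001 USDC per request
#   /archive/  0.01  USDC per request
# Pay at the gateway before crawling; unpaid bots get 402.
User-agent: *
Disallow: /archive/
```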

ctoth 3 hours ago|
This is literally what HTTP 402 is for -- there's a whole buncha work going on ... but please, please, please don't let Cloudflare become another bloody gatekeeper. Please.
effnorwood 4 hours ago|
certainly don't allow anyone to access your content. perhaps shut the site down just to be safe.
aduwah 1 hour ago|
Accessing the shop by going through the wall with a tank is not the same as walking in the door. Hosting costs money. These botnets should be charged for the costs they incur
More comments...