Posted by LucidLynx 11 hours ago

Miasma: A tool to trap AI web scrapers in an endless poison pit (github.com)
234 points | 183 comments | page 2
eliottre 6 hours ago|
The data poisoning angle is interesting. Models trained on scraped web data inherit whatever biases, errors, and manipulation exist in that data. If bad actors can inject corrupted data at scale, it creates a malign incentive structure where model training becomes adversarial. The real solution is probably better data provenance -- models trained on licensed, curated datasets will eventually outcompete those trained on the open web.
bluepeter 5 hours ago||
A related technique used to work very well on search engine spiders. I had some software I wrote called 'search engine cloaker'... this was back in the early 2000s... one of the first, if not the first, to do the shadowy "cloaking" stuff! We'd spin dummy content from lists of keywords, piles and piles of it. We made it a bit smarter using Markov chains to make the sentences somewhat sensible. We'd auto-interlink the pages and get 1000s of links. It eventually stopped working... but it took a long while for that to happen. We licensed the software to others. I rationalized it because I felt, hey, we have to write crappy copy for this stupid "SEO" thing anyway, so let's just automate that and give the spiders what they seem to want.
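For readers who haven't seen the trick: the Markov-chain spinning described above amounts to recording which word follows which in a seed corpus, then random-walking that table to emit filler that looks locally grammatical. A minimal sketch (the parent's actual software is long gone; function names here are illustrative):

```python
import random
from collections import defaultdict

def build_chain(corpus, order=1):
    """Map each run of `order` words to the words observed to follow it."""
    words = corpus.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def spin(chain, length=30, seed=0):
    """Walk the chain to emit superficially sensible filler text."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    out = list(key)
    for _ in range(length - len(key)):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:
            break  # dead end: the last words never appeared mid-corpus
        out.append(rng.choice(followers))
    return " ".join(out)
```

A higher `order` makes the output read more naturally at the cost of parroting longer verbatim runs of the seed text, which is the same trade-off poison-pit generators like Miasma face.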
ctoth 4 hours ago|
You didn't 'give the spiders what they seem to want.' You exploited a naive ranking algorithm to inject garbage into search results that real people were trying to use. That you rationalized it at the time is human. That you're still rationalizing it decades later is something else.
ninjagoo 6 hours ago||
Isn't this a trope at this point? That AI companies are indiscriminately training on random websites?

Isn't it the case that AI models learn better and are more performant with carefully curated material, so companies do actually filter for quality input?

Isn't it also the case that the use of RLHF and other refinement techniques essentially 'cures' the models of bad input?

Isn't it also, potentially, the case that the AI scrapers are mostly looking for content based on user queries, rather than for training data?

If the answers to the questions lean a particular way (yes to most), then isn't the solution rate-limiting incoming web-queries rather than (presumed) well-poisoning?

Is this a solution in search of a problem?

xantronix 5 hours ago|
You do raise an interesting point. The poison fountains would probably be more effective if their outputs more closely resembled whatever the most popular problem spaces are at any given point.
theandrewbailey 6 hours ago||
Or you can block bots with these (until they start using them) https://developer.mozilla.org/en-US/docs/Glossary/Fetch_meta...
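For context, the Fetch metadata request headers linked above (Sec-Fetch-Site, Sec-Fetch-Mode, etc.) are sent by modern browsers on every request, while many scrapers omit them. A minimal sketch of a server-side filter; the header names are real, but the specific allow/deny policy here is an illustrative assumption, not taken from the MDN page:

```python
def allow_request(headers):
    """Crude first-pass bot filter using Fetch metadata request headers.

    Assumed policy (not a standard): require the headers to be present,
    then allow top-level navigations and same-site subresource loads.
    """
    site = headers.get("Sec-Fetch-Site")
    mode = headers.get("Sec-Fetch-Mode")
    if site is None:
        # No fetch metadata at all: likely a bot (or an old browser,
        # which is why this works only "until they start using them").
        return False
    if mode == "navigate":
        return True  # user typed a URL or clicked a link
    return site in ("none", "same-origin", "same-site")
```

As the parent notes, this only holds until scrapers start forging the headers, since nothing stops a non-browser client from sending them.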
hmokiguess 5 hours ago||
Could this lead to something like the Streisand effect? I imagine these bots work at a scale where humans in the loop only act when something deviates from the standard, so if a bot flags your website, you're now on a list you previously weren't on. Don't ask me what they do with those lists, but I guess you'd make the cut.
holysoles 5 hours ago||
If anyone is looking for a tool to actually send traffic to a tool like this, I wrote a Traefik plugin that can block or proxy requests based on useragent.

https://github.com/holysoles/bot-wrangler-traefik-plugin
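The general idea behind user-agent routing is simple enough to sketch. This is not the plugin's actual configuration or API, just a hedged illustration of the block-or-proxy decision it describes, with a hypothetical deny list of crawler tokens:

```python
import re

# Hypothetical deny list of AI crawler user-agent tokens; a real
# deployment would maintain and update its own.
BOT_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"GPTBot", r"CCBot", r"ClaudeBot", r"Bytespider")
]

def route_for(user_agent):
    """Return 'proxy' for known AI crawlers, 'origin' otherwise."""
    if any(p.search(user_agent or "") for p in BOT_PATTERNS):
        return "proxy"   # e.g. hand the request to a tarpit like Miasma
    return "origin"      # pass real traffic through untouched
```

The obvious caveat applies: user agents are trivially spoofed, so this only catches crawlers that identify themselves honestly.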

dwa3592 4 hours ago||
Love it. Thanks for doing this work. Not sure why people are criticizing this. Also, an insane amount of work has been done to improve scraping, which in my mind is absolutely bonkers, and I didn't see people complaining about that.
jackdoe 2 hours ago||
rage against the dying of the light
meta-level 9 hours ago||
Isn't posting projects like this the most visible way to report a bug and get it fixed as soon as possible?
suprfsat 9 hours ago|
"disobeys robots.txt" is more of a feature