Posted by todsacerdoti 4/19/2025

The Web Is Broken – Botnet Part 2 (jan.wildeboer.net)
411 points | 274 comments
aorth 4/20/2025|
In the last week I've had to deal with two large-scale influxes of traffic on one particular web server in our organization.

The first involved requests from 300,000 unique IPs in a span of a few hours. I analyzed them and found that ~250,000 were from Brazil. I'm used to using ASNs to block network ranges sending this kind of traffic, but in this case they were spread thinly over 6,000+ ASNs! I ended up blocking all of Brazil (sorry).

A few days later this same web server was on fire again. I performed the same analysis on the IPs and found a similar number of unique addresses, but spread across Turkey, Russia, Argentina, Algeria and many more countries. What is going on?! Eventually I think I found a pattern to identify the requests: they were using ancient Chrome user agents. Chrome 40, 50, 60 and up to 90, all released roughly 4 to 10 years ago. Then, just before I could implement a block based on these user agents, the traffic stopped.

In both cases the traffic from datacenter networks was limited because I already rate limit a few dozen of the larger ones.
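
(For the curious, a rough sketch of that per-country breakdown, assuming an nginx-style access log and MaxMind's GeoLite2 country database; file names are illustrative:)

  # Rough sketch: count unique client IPs per country in an nginx-style
  # access log, using a local GeoLite2 country database (paths illustrative).
  from collections import Counter
  import geoip2.database
  import geoip2.errors

  unique_ips = set()
  with open("access.log") as log:
      for line in log:
          fields = line.split()
          if fields:
              unique_ips.add(fields[0])  # client IP is the first field

  per_country = Counter()
  with geoip2.database.Reader("GeoLite2-Country.mmdb") as reader:
      for ip in unique_ips:
          try:
              per_country[reader.country(ip).country.iso_code] += 1
          except geoip2.errors.AddressNotFoundError:
              per_country["??"] += 1

  for country, count in per_country.most_common(10):
      print(country, count)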

Sysadmin life...

rollcat 4/20/2025||
Try Anubis: <https://anubis.techaro.lol>

It's a reverse proxy that presents a PoW (proof-of-work) challenge to every new visitor. It shifts the initial cost of accessing your server's resources back onto the client. Assuming your uplink can handle 300k clients requesting a single 70 kB web page, it should solve most of your problems.

For science, can you estimate your peak QPS?
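
To illustrate the asymmetry, here's a minimal proof-of-work sketch in Python; Anubis's actual challenge runs as JavaScript in the browser, so treat this as an approximation of the idea rather than its real scheme:

  # Minimal PoW sketch (illustrative only, not Anubis's real scheme):
  # find a nonce so that SHA-256(challenge + nonce) has `difficulty`
  # leading zero bits. Solving is expensive; verifying is one hash.
  import hashlib
  import os

  def hash_ok(challenge: bytes, nonce: int, difficulty: int) -> bool:
      digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
      return int.from_bytes(digest, "big") >> (256 - difficulty) == 0

  def solve(challenge: bytes, difficulty: int) -> int:
      nonce = 0
      while not hash_ok(challenge, nonce, difficulty):
          nonce += 1
      return nonce

  challenge = os.urandom(16)              # issued per visitor by the server
  nonce = solve(challenge, 20)            # ~1M hashes on average for the client
  print(hash_ok(challenge, nonce, 20))    # server-side check is a single hash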

marginalia_nu 4/20/2025|||
Anubis is a good choice because it whitelists legitimate, well-behaved crawlers based on IP + user-agent. Cloudflare works as well in that regard, but then you're MITM'ing all your visitors.
Imustaskforhelp 4/20/2025||||
Also, I was just watching a Brodie Robertson video about how the United Nations has this random UNESCO search page that actually uses Anubis.

Crazy how I remember the HN post where Anubis's blog post was first shared. Though, I always thought it was a bit funny with the anime, and that it was born out of frustration with (I think AWS?) AI scrapers that wouldn't follow the general rules and kept hammering his git server, actually taking it down, I guess? I didn't expect it to blow up to... the UN.

xena 4/20/2025||
Her*

It was frustration at AWS' Alexa team and their abuse of the commons. Amusingly if they had replied to my email before I wrote my shitpost of an implementation this all could have turned out vastly differently.

Imustaskforhelp 4/21/2025||
Oh, I am so, so sorry, I didn't see your gender and assumed it to be a (he). (Really sorry about that once again.)

Also didn't expect you to respond to my comment xD

I went through the slow realization, while reading this comment, that you are the creator of Anubis, and I had such a smile when I realized that you had replied to me.

Also, this project is really nice, but I actually want to ask: I haven't read the Anubis docs, but could the proof of work be made useful rather than wasted? (I know I might get downvoted because I am going to mention cryptocurrency, but Nano requires a proof of work for each transaction, so if Anubis did its proof of work to Nano's standards, then theoretically that proof of work could at least be somewhat useful.)

Looking forward to your comment!

akaij 4/22/2025||
Useful as a for-profit cryptocurrency? I think zero chance.

The only way I see anything like that being incorporated is as a folding@home kind of thing that could help humanity as a whole.

Of course, if someone makes it work like you suggested, and it catches on, I will personally haunt your dreams forever. Don't give them any ideas.

martin82 4/21/2025|||
This looks very cool, but isn't it just a matter of months until all scrapers get updated, can compute modern JS stuff, and can easily beat this challenge?
nodogoto 4/20/2025|||
My company's site has also been getting hammered by Brazilian IPs. They're focused on a single filterable table of fewer than 100 rows, querying it with various filter combinations every second of every minute of every day.
luckylion 4/20/2025||
I've seen a few attacks where the operators placed malicious code on high-traffic sites (e.g. some government thing, larger newspapers) and then just let visitors' browsers load your site as an img. Did you see images, CSS, JS being loaded from these IPs? If they were expecting images, they wouldn't parse the HTML and so wouldn't load other resources.

It's a pretty effective attack because you get large numbers of individual browsers to contribute. Hosting providers don't care, so unless the site owners are technical enough to notice, the injected code can stay online for quite a while.

If they work with Referrer Policy, they should be able to mask themselves fairly well - the ones I saw back then did not.
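
If the Referer does come through, a crude mitigation is to simply drop requests whose Referer points at the abused sites; a rough sketch with Flask (the hostnames are placeholders):

  # Crude sketch: reject requests whose Referer points at a site known to be
  # injecting our URLs into its pages (hostnames below are placeholders).
  from urllib.parse import urlparse
  from flask import Flask, abort, request

  app = Flask(__name__)
  BLOCKED_REFERRERS = {"abused-news-site.example", "compromised-portal.example"}

  @app.before_request
  def drop_injected_traffic():
      referrer = request.headers.get("Referer", "")
      if urlparse(referrer).hostname in BLOCKED_REFERRERS:
          abort(403)

  @app.route("/")
  def index():
      return "ok"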

ninkendo 4/20/2025||
I seem to remember a thing China did 10 years back where they injected JavaScript into every web request that went through their Great Firewall to target GitHub… I think it’s known as the “Great Cannon” because they can basically make every Chinese internet user’s browser hit your website in a DoS attack.

Digging it up: https://www.washingtonpost.com/news/the-switch/wp/2015/04/10...

luckylion 4/21/2025||
Wow, that had passed me by completely, thanks for sharing!

Very similar indeed. The attacks I witnessed were easy to block once you identified the patterns (the referrer was visible and they used predictable ?_=... query parameters to try and bypass caches), but they were very effective otherwise.

I suppose in the event of a hot war, the Internet will be cut quickly to defend against things like the "Great Cannon".

hubraumhugo 4/20/2025||
We all agree that AI crawlers are a big issue as they don't respect any established best practices, but we rarely talk about the path forward. Scraping has been around for as long as the internet, and it was mostly fine. There are many very legitimate use cases for browser automation and data extraction (I work in this space).

So what are potential solutions? We're somehow still stuck with CAPTCHAs, a 25-year-old concept that wastes millions of human hours and billions in infra costs [0].

How can we enable beneficial automation while protecting against abusive AI crawlers?

[0] https://arxiv.org/abs/2311.10911

marginalia_nu 4/20/2025||
Proof-of-work works in terms of preventing large-scale automation.

As for letting well-behaved crawlers in, I've had an idea for something like DKIM for crawlers. It should be possible to set up a fairly cheap cryptographic solution that gives crawlers a persistent identity that can't be forged.

Basically, put a header containing first a string including today's date, the crawler's IP, and a domain name, then a cryptographic signature of the string. The domain has a TXT record with a public key for verifying the identity. It's cheap because you really only need to verify the string once on the server side, and the crawler only needs to regenerate it once per day.

With that in place, crawlers can crawl with their reputation at stake. The big problem with these rogue scrapers is that they're basically impossible to identify or block, which means they don't have any incentive to behave well.
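
A rough sketch of the crawler side, using Ed25519 from the cryptography package (the header name and string format here are made up for illustration):

  # Sketch of a DKIM-like crawler identity header (format is hypothetical).
  # The crawler signs "date|ip|domain" once per day; servers verify it against
  # a public key the operator publishes for that domain (e.g. in a TXT record).
  import base64
  import datetime
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  private_key = Ed25519PrivateKey.generate()
  public_key = private_key.public_key()   # this is what the TXT record exposes

  claim = f"{datetime.date.today().isoformat()}|192.0.2.10|crawler.example.com"
  signature = private_key.sign(claim.encode())
  header_value = f"{claim};{base64.b64encode(signature).decode()}"
  # sent on every request as e.g.:  Crawler-Identity: <header_value>

  # Server side, after fetching the operator's public key once:
  received_claim, received_sig = header_value.rsplit(";", 1)
  public_key.verify(base64.b64decode(received_sig), received_claim.encode())
  print("valid identity:", received_claim)   # verify() raises if forged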

lesostep 4/28/2025||
> Proof-of-work works in terms of preventing large-scale automation.

It wouldn't work to prevent the type of behavior shown in the title story.

CaptainFever 4/20/2025|||
My pet peeve is that using the term "AI crawler" for this conflates things unnecessarily. There are some people who are angry at it due to anti-AI bias and not wishing to share information, while there are others who are more concerned about the large amount of bandwidth and server overloading.

Not to mention that it's unknown if these are actually from AI companies, or from people pretending to be AI companies. You can set anything as your user agent.

It's more appropriate to mention the specific issue one has with the crawlers, like "they request things too quickly" or "they're overloading my server". Then from there, it is easier to come to a solution than just "I hate AI". For example, one would realize that things like Anubis have existed forever; they are just called DDoS protection, specifically those using proof-of-work schemes (e.g. https://github.com/RuiSiang/PoW-Shield).

This also shifts the discussion away from something that adds to the discrimination against scraping in general, and more towards what is actually the issue: overloading servers, or in other words, DDoS.

johnnyanmac 4/20/2025|||
It's become unbearable in the "AI era", so it's appropriate to blame AI for it, in my eyes. Especially since so much of the defense is based around training LLMs.

It's just like how not all DDoSes are actually hackers or bots. Sometimes a server just can't take the traffic of a large site flooding in. But the result is the same until something is investigated.

queenkjuul 4/20/2025|||
It's not a coincidence that this wasn't a major problem until everybody and their dog started trying to build the next great LLM.
udev4096 4/20/2025|||
Blame the "AI" companies for that. I am glad the small web is pushing hard against these scrapers, with the rise of Anubis as a starting point
lelanthran 4/20/2025||
> Blame the "AI" companies for that. I am glad the small web is pushing hard towards these scrapers, with the rise of Anubis as a starting point

Did you mean "against"?

udev4096 4/20/2025||
Corrected, thanks
jeroenhd 4/20/2025|||
The best solution I've seen is to hit everyone with a proof of work wall and whitelist the scrapers that are welcome (search engines and such).

Running SHA hash calculations for a second or so once every week is not bad for users, but with scrapers constantly starting new sessions they end up spending most of their time running useless JavaScript, slowing them down significantly.

The most effective alternative to proof of work calculations seems to be remote attestation. The downside is that you're getting captchas if you're one of the 0.1% who disable secure boot and run Linux, but the vast majority of web users will live a captcha free life. This same mechanism could in theory also be used to authenticate welcome scrapers rather than relying on pure IP whitelists.

ognarb 4/27/2025||
The issue is that it would require normal users to also do the same, which is suboptimal from a privacy point of view.
mjaseem 4/20/2025|||
I wrote an article about a possible proof of personhood solution idea: https://mjaseem.github.io/tech/2025/04/12/proof-of-humanity.....

The broad idea is to use zero knowledge proofs with certification. It sort of flips the public key certification system and adds some privacy.

For it to get into place, the powers in charge would need to be swayed.

0manrho 4/20/2025|||
> So what are potential solutions?

It won't fully solve the problem, but with the problem relatively identified, you must then ask why people are engaging in this behavior. Answer: money, for the most part. Therefore, follow the money and identify the financial incentives driving this behavior. This leads you pretty quickly to a solution most people would reject out-of-hand: turn off the financial incentive that is driving the enshittification of the web. Which is to say, kill the ad-economy.

Or at least better regulate it while also levying punitive damages that are significant enough to both dissuade bad actors and encourage entities to view data breaches (or the potential therein) and "leakage"[0] as something that should actually be effectively secured against. After all, there are some upsides to the ad-economy that, without it, would present some hard challenges (e.g., how many people are willing to pay for search? what happens to the vibrant sphere of creators of all stripes that are incentivized by the ad-economy? etc.).

Personally, I can't imagine this would actually happen. Pushback from monied interests aside, most people have given up on the idea of data privacy or personal ownership of their data, if they ever even cared in the first place. So, in the absence of willingness to do something about the incentive for this malign behavior, we're left with few good options.

0: https://news.ycombinator.com/item?id=43716704 (see comments on all the various ways people's data is being leaked/leached/tracked/etc)

caelinsutch 4/20/2025|||
CAPTCHAs are also quickly becoming irrelevant / not enough. Fingerprint-based approaches seem to be the only realistic way forward in the cat-and-mouse game.
CalRobert 4/20/2025|||
I hate this, but I suspect a login-only deanonymised web (made simple with Chrome and WEI!) is the future. Firefox users can go to hell.
spookie 4/21/2025|||
I'm still surprised by people every day, after all these years. This is one of those times. Crazy how anyone would ever want a single point of identification for everything you do.
CalRobert 4/23/2025||
I don't want this - It's the exact opposite of what I want.
ArinaS 4/20/2025||||
We won't.
CalRobert 4/23/2025|||
To elaborate (if anyone sees this) I use Firefox on Linux. I don't LIKE this future! I just think it's where the web is headed.
eastbound 4/20/2025||
But people don’t interact with your website anymore; they ask an AI. So the AI crawler is a real user.

I say we ask Google Analytics to count an AI crawler as a real view. Let’s see who’s most popular.

zahlman 4/19/2025||
> I am now of the opinion that every form of web-scraping should be considered abusive behaviour and web servers should block all of them. If you think your web-scraping is acceptable behaviour, you can thank these shady companies and the “AI” hype for moving you to the bad corner.

I imagine that e.g. Youtube would be happy to agree with this. Not that it would turn them against AI generally.

Centigonal 4/20/2025||
yeah, but you can't, that's the problem. Plenty of service operators would like to block every scraper that doesn't obey their robots.txt, but there's no good way to do that without blocking human traffic too (Anubis et al are okay, but they are half-measures).

On a separate note, I believe open web scraping has been a massive benefit to the internet on net, and almost entirely positive pre-2021. Web scraping & crawling enables search engines, services like the Internet Archive, walled-garden-busting (like Invidious, yt-dlp, and Nitter), mashups (Spotube, IFTTT, and Plaid would have been impossible to bootstrap without web scraping), and all kinds of interesting data science projects (e.g. scraping COVID-19 stats from local health departments to patch together a picture of viral spread for epidemiologists).

udev4096 4/20/2025|||
We should have a way to verify the user agents of valid and useful scrapers such as the Internet Archive. Some kind of cryptographic signature of their user agents that any reverse proxy can validate seems like a good start.
nottorp 4/20/2025||
Self signed, I hope.

Or do you want a central authority that decides who can do new search engines?

udev4096 4/20/2025||
Using DANE is probably the best idea even though it's still not mainstream
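
As a sketch of the lookup side with dnspython (the _crawler-key TXT label and base64 key format are assumptions, not a standard; DANE proper would use TLSA records validated via DNSSEC):

  # Sketch: fetch a crawler operator's verification key from DNS. The
  # "_crawler-key" label and base64-in-TXT format are assumptions, not a
  # standard; real DANE would use TLSA records validated via DNSSEC.
  import base64
  import dns.resolver
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

  def fetch_crawler_key(domain: str) -> Ed25519PublicKey:
      answers = dns.resolver.resolve(f"_crawler-key.{domain}", "TXT")
      txt = b"".join(answers[0].strings).decode()
      return Ed25519PublicKey.from_public_bytes(base64.b64decode(txt))

  # key = fetch_crawler_key("crawler.example.com")
  # key.verify(signature, signed_claim)   # raises InvalidSignature if forged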
lelanthran 4/20/2025|||
> Plenty of service operators would like to block every scraper that doesn't obey their robots.txt, but there's no good way to do that without blocking human traffic too (Anubis et al are okay, but they are half-measures)

Why are Anubis-type mitigations a half-measure?

Centigonal 4/20/2025||
Anubis, go-away, etc are great, don't get me wrong -- but what Anubis does is impose a cost on every query. The website operator is hoping that the compute will have a rate-limiting effect on scrapers while minimally impacting the user experience. It's almost like chemotherapy, in that you're poisoning everyone in the hope that the aggressive bad actors will be more severely affected than the less aggressive good actors. Even the Anubis readme calls it a nuclear option. In practice it appears to work pretty well, which is great!

It's a half-measure because:

1. You're slowing down scrapers, not blocking them. They will still scrape your site content in violation of robots.txt.

2. Scrapers with more compute than IP proxies will not be significantly bottlenecked by this.

3. This may lead to an arms race where AI companies respond by beefing up their scraping infrastructure, necessitating more difficult PoW challenges, and so on. The end result of this hypothetical would be a more inconvenient and inefficient internet for everyone, including human users.

To be clear: I think Anubis is a great tool for website operators, and one of the best self-hostable options available today. However, it's a workaround for the core problem that we can't reliably distinguish traffic from badly behaving AI scrapers from legitimate user traffic.

BlueTemplar 4/19/2025||
Yeah, also this means the death of archival efforts like the Internet Archive.
jeroenhd 4/19/2025||
Welcome scrapers (IA, maybe Google and Bing) can publish their IP addresses and get whitelisted. Websites that want to prevent being on the Internet Archive can pretty much just ask for their website to be excluded (even retroactively).

[Cloudflare](https://developers.cloudflare.com/cache/troubleshooting/alwa...) tags the internet archive as operating from 207.241.224.0/20 and 208.70.24.0/21 so disabling the bot-prevention framework on connections from there should be enough.
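
As a sketch, an allowlist check in front of the bot challenge could be as simple as this (the prefixes are the ones cited above and may change over time):

  # Sketch: skip the bot challenge for the Internet Archive's published
  # ranges (the prefixes cited above; they may change over time).
  from ipaddress import ip_address, ip_network

  ARCHIVE_RANGES = [ip_network("207.241.224.0/20"), ip_network("208.70.24.0/21")]

  def is_welcome_scraper(client_ip: str) -> bool:
      addr = ip_address(client_ip)
      return any(addr in net for net in ARCHIVE_RANGES)

  print(is_welcome_scraper("207.241.229.118"))  # True: inside 207.241.224.0/20
  print(is_welcome_scraper("203.0.113.7"))      # False: challenge as usual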

realusername 4/20/2025|||
That's basically asking to close the market in favor of the current actors.

New actors have the right to emerge.

jeroenhd 4/20/2025|||
They have the right to try to convince me to let them scrape me. Most of the time they're thinly veiled data traders. I haven't seen any new company try to scrape my stuff since maybe Kagi.

Kagi is welcome to scrape from their IP addresses. Other bots that behave are fine too (Huawei and various other Chinese bots don't and I've had to put an IP block on those).

0dayz 4/20/2025|||
No they don't.

There's no rule that you have to let anyone in who claims to be a web crawler.

realusername 4/20/2025|||
So who decides that you can be one? Right now it's Cloudflare, a literal monopoly...

The truth is that I sympathize with the people trying to use mobile connections to bypass such a cartel.

What Cloudflare is doing now is worse than the web crawlers themselves and the legality of blocking crawlers with a monopoly is dubious at best.

areyourllySorry 4/20/2025||||
which is why they will stop claiming to be one.
chii 4/20/2025|||
so what happened to competition fostering a better outcome for all then?
areyourllySorry 4/20/2025||||
a large chunk of internet archive's snapshots are from archiveteam, where "warriors" bring their own ips (and they crawl respectfully!). save page now is important too, but you don't realise what is useful until you lose it.
trinsic2 4/20/2025|||
This sounds like it would be a good idea. Create a whitelist of IPs and block the rest.
Quarrel 4/20/2025||
FWIW, Trend Micro wrote up a decent piece on this space in 2023.

It is still a pretty good lay-of-the-land.

https://www.trendmicro.com/vinfo/us/security/news/vulnerabil...

aucisson_masque 4/19/2025||
It's interesting but so far there is no definitive proof it's happening.

People are jumping to conclusions a bit fast over here. Yes, technically it's possible, but this kind of behavior would be relatively easy to spot because the app would have to make direct connections to the website it wants to scrape.

Your calculator app for instance connecting to CNN.com ...

iOS has an App Privacy Report where one can check what connections are made by an app, how often, the last one, etc.

Android by Google doesn't have such a useful feature, of course, but you can run a third-party firewall like PCAPdroid, which I highly recommend.

macOS (Little Snitch).

Windows (Fort Firewall).

Not everyone runs these apps, obviously, only the most nerdy like myself, but we're also the kind of people who would report an app using our devices to build what is, in fact, a zombie or bot network.

I'm not saying it's necessarily false but imo it remains a theory until proven otherwise.

jshier 4/20/2025||
> iOS has an App Privacy Report where one can check what connections are made by an app, how often, the last one, etc.

Privacy reports do not include that information. They include broad areas of information the app claims to gather. There is zero connection between those claimed areas and what the app actually does unless app review notices something that doesn't match up. But none of that information is updated dynamically, and it has never actually included the domains the app connects to. You may be confusing it with the old domain declarations for less secure HTTP connections. Once the connections met the system standards you no longer needed to declare it.

zargon 4/20/2025||
I wasn't aware of this feature. But apparently it does include that information. I just enabled it and can see the domains that apps connect to. https://support.apple.com/en-us/102188
hoc 4/20/2025||
Pretty neat, actually. Thanks for looking up that link.
Galanwe 4/20/2025|||
There is already a lot of proof. Just ask for a sales pitch from companies selling these data and they will gladly explain everything to you.

Go to a data conference like Neudata and you will see. You can have scraped data from user devices, real-time locations, credit card transactions, Google Analytics data, etc.

throwaway519 4/20/2025|||
Given this is a thing even in browser plugins, and that so very few people analyse their firewalls, I'd not discount it at all. Much of the world's users have no clue, and app stores are notoriously bad at reacting even with publicised malware, e.g. 'free' VPNs in the iOS App Store.
abaymado 4/19/2025|||
> iOS has an App Privacy Report where one can check what connections are made by an app, how often, the last one, etc.

How often is the average calculator app user checking their Privacy Report? My guess: not often!

gruez 4/20/2025||
All it takes is one person to find out and raise the alarm. The average user doesn't read the source code behind OpenSSL or whatever either; that doesn't mean there are no gains in open-sourcing it.
dewey 4/20/2025|||
The average user is also not reading these raised “alarms”. And if an app has a bad name, another one will show up with a different name on the same day.
aucisson_masque 4/20/2025||
You're on a tech forum; you must have seen one of the many posts about apps, on either Android or iPhone, that act like spyware.

They happen from time to time. The last one was not more than two weeks ago, when it was shown that many apps were able to read the list of all other apps installed on an Android device and that Google refused to fix that.

Do you really believe that an app used to make your device part of a bot network wouldn't be posted over here?

dewey 4/20/2025||
"You're on a tech forum", that's exactly the point. The "average user" is not on a tech forum though, the average user opens the app store of their platform, types "calculator" and installs the first one that's free.
nottorp 4/20/2025|||
The real solution is to add a permission for network access, with the default set to deny.
CharlesW 4/19/2025|||
Botnets as a Service are absolutely happening, but as you allude to, the scope of the abuse is very different on iOS than, say, Windows.
andelink 4/20/2025||
This is a hilariously optimistic, naive, disconnected-from-reality take. What sort of "proof" would be sufficient for you? TFA of course includes data from the author's own server logs^, but it also references real SDKs and businesses selling this exact product. You can view the pricing page yourself, right next to stats on how many IPs are available for you to exploit. What else do you need to see?

^ edit: my mistake, the server logs I mentioned were from the author's prior blog post on this topic, linked to at the top of TFA: https://jan.wildeboer.net/2025/02/Blocking-Stealthy-Botnets/

jeroenhd 4/19/2025||
> So there is a (IMHO) shady market out there that gives app developers on iOS, Android, MacOS and Windows money for including a library into their apps that sells users network bandwidth

AKA "why do Cloudflare and Google make me fill out these CAPTCHAs all day"

I don't know why Play Protect/MS Defender/whatever Apple has for antivirus don't classify apps that embed such malware as such. It's ridiculous that this is allowed to go on when detection is so easy. I don't know a more obvious example of a trojan than an SDK library making a user's device part of a botnet.

dx4100 4/20/2025||
Cloudflare and Google use CAPTCHAs to sell web scrapers? I don't get your point. I was under the impression the data is used to train models.
aloha2436 4/20/2025|||
The implication is that the users that are being constantly presented with CAPTCHAs are experiencing that because they are unwittingly proxying scrapers through their devices via malicious apps they've installed.
pentae 4/20/2025||
.. or that other people on their network / shared public IP have installed
evgpbfhnr 4/20/2025||
or just that they don't run Windows/macOS with Chrome like everyone else and it's "suspicious". I get Cloudflare CAPTCHAs all the time with Firefox on Linux... (and I'm pretty sure there's no such app on my home network!)
Doxin 4/27/2025||
FWIW I run firefox on linux too, and I don't have any trouble with cloudflare captchas. I get them every now and then but definitely not all the time.
jeroenhd 4/20/2025||||
When a random device on your network gets infected with crap like this, your network becomes a bot egress point, and anti bot networks respond appropriately. Cloudflare, Akamai, even Google will start showing CAPTCHAs for every website they protect when your network starts hitting random servers with scrapers or DDoS attacks.

This is even worse with CG-NAT if you don't have IPv6 to solve the CG-NAT problem.

I don't think the data they collect is used to train anything these days. Cloudflare is using AI generated images for CAPTCHAs and Google's actual CAPTCHAs are easier for bots than humans at this point (it's the passive monitoring that makes it still work a little bit).

cuu508 4/20/2025|||
Trojans in your mobile apps ruin your IP's reputation which comes back to you in the form of frequent, annoying CAPTCHAs.
areyourllySorry 4/20/2025|||
it's not technically malware, you agreed to it when you accepted the terms of service :^)
L-four 4/20/2025||
It's malware; it does something malicious.
Liftyee 4/19/2025||
I don't know if I should be surprised about what's described in this article, given the current state of the world. Certainly I didn't know about it before, and I agree with the article's conclusion.

Personally, I think the "network sharing" software bundled with apps should fall into the category of potentially unwanted applications along with adware and spyware. All of the above "tag along" with something the user DID want to install, and quietly misuse the user's resources. Proxies like this definitely have an impact for metered/slow connections - I'm tempted to start Wireshark'ing my devices now to look for suspicious activity.

There should be a public repository of apps known to have these shady behaviours. Having done some light web scraping for archival/automation before, I think it's a pity that it'll become collateral damage in the anti-AI-botfarm fight.

akoboldfrying 4/20/2025||
I agree, but the harm done to the users is only one part of the total harm. I think it's quite plausible that many users wouldn't mind some small amount of their bandwidth being used, if it meant being able to use a handy browser extension that they would otherwise have to pay actual dollars for -- but the harm done to those running the servers remains.
zzo38computer 4/20/2025|||
I agree, this should be called spyware, and malware. There are many other kinds of software that also should be, but netcat and ncat (probably) aren't malware.
karmanGO 4/19/2025||
Has anyone tried to compile a list of software that uses these libraries? It would be great to know what apps to avoid
mzajc 4/19/2025||
In the case of Android, εxodus has one[1], though I couldn't find the malware library listed in TFA. Aurora Store[2], a FOSS Google Play Store client, also integrates it.

[1] https://reports.exodus-privacy.eu.org/en/trackers/ [2] https://f-droid.org/packages/com.aurora.store/

takluyver 4/19/2025||
That seems to be looking at tracking and data collection libraries, though, for things like advertising and crash reporting. I don't see any mention of the kind of 'network sharing' libraries that this article is about. Have I missed it?
arewethereyeta 4/19/2025|||
No, but here's the thing: having been in the industry for many years, I know they are required to mention it in the ToS when using these SDKs. A crawler pulling app ToSes and parsing them could be a thing. List or not, it won't be too useful outside this tech community.
lelanthran 4/20/2025|||
> Has anyone tried to compile a list of software that uses these libraries? It would be great to know what apps to avoid

I wouldn't mind reading a comprehensive report on SOTA with regard to bot-blocking.

Sure, there's Anubis (although someone elsethread called it a half-measure, and I'd like to know why), there are CAPTCHAs, there's relying on a monopoly (Cloudflare, etc.) that probably also wants to run its own bots at some point, but what else is there?

il-b 4/20/2025|||
A good portion of free VPN apps sell their traffic. This was a thing even before the AI bot explosion.
api 4/19/2025||
This is nasty in other ways too. What happens when someone uses these B2P residential proxies to commit crimes that get traced back to you?

Anything incorporating anything like this is malware.

reconnecting 4/19/2025|
Many years ago, cybercriminals used to hack computers to use them as residential proxies; now they purchase them online as a service.

In most cases they are used for conducting real financial crimes, but the police investigators are also aware that there is a very low chance that sophisticated fraud is committed directly from a residential IP address.

kastden 4/19/2025|
Are there any lists of known C&C servers for these services that can be added to Pi-hole/etc.?
udev4096 4/20/2025|
You can use one of the list from here: https://github.com/hagezi/dns-blocklists