Posted by dvrp 1/14/2026
I suspect having a few different teams competing (for funding) to provide mirrors would rapidly reduce the hardware cost too.
The density + power dissipation numbers quoted are extremely poor compared to enterprise storage. Hardware costs for the enterprise systems are also well below AWS (even assuming a short 5 year depreciation cycle on the enterprise boxes). Neither this article nor the vendors publish enough pricing information to do a thorough total cost of ownership analysis, but I can imagine someone the size of IA would not be paying normal margins to their vendors.
https://help.archive.org/help/archive-bittorrents/
https://github.com/jjjake/internetarchive
https://archive.org/services/docs/api/internetarchive/cli.ht...
u/stavros wrote a design doc for a system (codename "Elephant") that would scale this up: https://news.ycombinator.com/item?id=45559219
(no affiliation, I am just a rando; if you are a library, museum, or similar institution, ask IA to drop some racks at your colo for replication, and as always, don't forget to donate to IA when able to and be kind to their infrastructure)
https://www.reddit.com/r/torrents/comments/vc0v08/question_a...
The solution is to use one of the several IA downloader scripts on GitHub, which download content via the collection's file list. I don't like direct downloading since I know that costs IA the most, but torrents really are an option for some collections.
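A minimal sketch of that workflow using the official `internetarchive` Python library (linked above). The item identifier and file names here are hypothetical, and the small helper just mirrors the library's `glob_pattern` filtering locally so the selection logic is visible without hitting IA's servers:

```python
from fnmatch import fnmatch

def select_files(file_names, pattern):
    """Mirror the glob_pattern filter used by internetarchive's
    Item.download(), so we can see which files a pattern would fetch."""
    return [name for name in file_names if fnmatch(name, pattern)]

# With the real library (pip install internetarchive) this is roughly:
#   from internetarchive import get_item
#   item = get_item("some-identifier")               # hypothetical item
#   item.download(glob_pattern="*_archive.torrent")  # grab only the torrent
files = ["item_archive.torrent", "disc1.iso", "item_meta.xml"]
print(select_files(files, "*_archive.torrent"))
```

Downloading just the `_archive.torrent` file and then seeding over BitTorrent keeps the heavy transfer off IA's web servers.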
Turns out there are a lot of 500GB–2TB collections of ROMs/ISOs for video game consoles up through the 7th and 8th generations available on the IA...
It sounds like they put this mechanism in place after incrementally regenerating large torrents caused massive slowdowns for them; they haven't finished building something to fix it automatically, but they will fix individual torrents on demand for now.
[1] - https://www.reddit.com/r/theinternetarchive/comments/1ij8go9...
(like PHP code except it is binary data--it could be done on the fly)
There are lessons to be learned in that. For example, for that population, bandwidth efficiency and information leakage control invite solutions that are suboptimal for an organization that would build market share on licensing deals and growth maximization.
Without an overriding commercial growth directive you also align development incentives differently.
Users upload their encrypted data to miners, along with a negotiated fee for a duration of storage, say 90 days. They take specific hashes of the complete data, and some randomized sub-hashes of internal chunks. Periodically an agent requests these chunks, hashes them, and rewards a fraction of the payment if the hash is correct.
That's a basic sketch, more details would have to be settled. But "miners" would be free to delete data if payment was no longer available on a chain. Or additionally, they could be paid by downloaders instead of uploaders for hoarding more obscure chunks that aren't widely available.
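A toy version of that challenge/response sketch, under the assumption of fixed-size chunks and SHA-256 as the hash; the chunk size and names are illustrative, not part of any real protocol:

```python
import hashlib

CHUNK = 1024  # assumed fixed chunk size in bytes

def chunk_hashes(data: bytes):
    """Uploader's side: hash every internal chunk before storing."""
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

def answer_challenge(miner_data: bytes, index: int) -> str:
    """Miner's side: prove possession of chunk `index` by hashing it."""
    return hashlib.sha256(miner_data[index * CHUNK:(index + 1) * CHUNK]).hexdigest()

# Ten distinct chunks stand in for the encrypted upload.
data = b"".join(bytes([c]) * CHUNK for c in range(10))
expected = chunk_hashes(data)  # kept by the verifying agent

# An honest miner passes a random spot-check; one that dropped data fails.
print(answer_challenge(data, 7) == expected[7])          # True
print(answer_challenge(data[:CHUNK], 7) == expected[7])  # False
```

In a real system the agent would pick the challenged index unpredictably each period, so a miner cannot keep only the chunks it expects to be asked about.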
Not everyone who watches hentai is a perv
Anyhow, Tracy would put a gallon sized ziplock bag into her purse, and at the restaurant shovel half a dozen plates worth of food into it. Then she'd work the afternoon eating out of her purse like it's a bowl, just sitting there on the desk.
That way they would provide some more value back to the community as a mirror?
[1] It looks like this might exist at some level, e.g. https://github.com/hartator/wayback-machine-downloader, but I've been trying to use this for a couple of weeks and every day I try I get a HTTP 5xx error or "connection refused."
I just tried waybackpy and I'm getting errors with it too when I try to reproduce their basic demo operation:
>>> from waybackpy import WaybackMachineSaveAPI
>>> url = "https://nuclearweaponarchive.org"
>>> user_agent = "Mozilla/5.0 (Windows NT 5.1; rv:40.0) Gecko/20100101 Firefox/40.0"
>>> save_api = WaybackMachineSaveAPI(url, user_agent)
>>> save_api.save()
Traceback (most recent call last):
File "<python-input-4>", line 1, in <module>
save_api.save()
~~~~~~~~~~~~~^^
File "/Users/xxx/nuclearweapons-archive/venv/lib/python3.13/site-packages/waybackpy/save_api.py", line 210, in save
self.get_save_request_headers()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/Users/xxx/nuclearweapons-archive/venv/lib/python3.13/site-packages/waybackpy/save_api.py", line 99, in get_save_request_headers
raise TooManyRequestsError(
...<4 lines>...
)
waybackpy.exceptions.TooManyRequestsError: Can not save 'https://nuclearweaponarchive.org'. Save request refused by the server. Save Page Now limits saving 15 URLs per minutes. Try waiting for 5 minutes and then try again.

$25 million a year is not remotely a lot for a non-profit doing any kind of work at scale. Wikimedia's budget is about seven times that. My local Goodwill chapter has an annual budget greater than that.
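One way to work with, rather than around, the rate limit in that `TooManyRequestsError` traceback is exponential backoff. This generic helper is a sketch; the waybackpy usage in the comment is an assumption that pacing alone fixes it, which may not hold if the server is simply overloaded:

```python
import time

def retry(fn, retriable, attempts=4, base_delay=1.0, sleep=time.sleep):
    """Call fn(), sleeping base_delay * 2**attempt between retriable failures."""
    for attempt in range(attempts):
        try:
            return fn()
        except retriable:
            if attempt == attempts - 1:
                raise  # out of attempts: let the caller see the error
            sleep(base_delay * (2 ** attempt))

# Hypothetical use against Save Page Now's 15-URLs-per-minute limit:
#   from waybackpy import WaybackMachineSaveAPI
#   from waybackpy.exceptions import TooManyRequestsError
#   api = WaybackMachineSaveAPI(url, user_agent)
#   retry(api.save, TooManyRequestsError, base_delay=300)  # 5-minute first wait
```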
First, this applies whether it's IA or any other large non-profit/charity: once you are in the double- or triple-digit multi-million bracket, you are no longer a non-profit/charity. You are in effect a business with non-profit status.
Whether IA or any other large entity, when you get to that size, you don't benefit from the "oh they are a poor non-profit" mindset IMHO.
To be able to spend $25-30M a year, you clearly have to have a solid revenue stream both immediate and in the pipeline, that's Finances 101. Therefore you are in a privileged and enviable position that small non-profits can only dream of.
Second, I would be curious to know how much of that is of their own doing.
By that I mean: it sure is cute to be located in the former Christian Science church on Funston Avenue in San Francisco’s Richmond District.
But they could most likely save a lot of money if they were located in a carrier-neutral facility.
For example, instead of paying for expensive external fiber lines (no doubt multiple, due to redundancy), they would have large amounts of capacity available through simple cross-connects.
Similarly with energy: are they benefiting from the same economies of scale that a carrier-neutral facility does?
I am not saying the way they are doing it is wrong. I'm just genuinely curious to know what premium they are paying for doing it like they are.
And a lot of non-profits would be very very surprised to hear that once you cross the threshold of $9,999,999 costs, you are a business.
Nope.
As for the second half of my post: anyone who has been seriously involved with large carrier-neutral facilities will likely agree with me.
It is a fact that IA will be incurring a premium to DIY, and as I quite clearly spelt out, I am NOT trying to say they are wrong; I am just genuinely curious what that premium is.
Regarding my comment about large non-profits: this is from personal experience. Once they get to a certain size, non-profits do switch to a business mentality. You might not like that fact, but it is a fact. They will more often than not have management boards who are "competitively remunerated". They will almost always actively manage their spare cash (of which they will have a large surplus) in investment portfolios. Things will be budgeted and cost-centered just like in larger businesses. They will have in-house legal teams, or external teams on retainer, to write up philanthropic contracts and aggressively chase after donations people leave them in wills. And so on.
You absolutely cannot place a large non-profit in the same mindset as your local community mom & pop non-profit that operates hand to mouth on a shoestring.
That is why I discourage people from donating to large non-profits. You might feel good donating $100, but in reality it's a sum that wouldn't even be a rounding error on their financial reports. And in the majority of cases, most of your donation is more likely to contribute to management expenses than to the actual cause.
Large non-profits are more interested in large corporate philanthropic donations, preferably multi-year agreements. They have more than enough money for the immediate future (<=12–18 months), they want large chunks of future money in the pipeline and that's what the large philanthropic agreements give them.
Edit: Then again, I recently heard a podcast that talked about the relatively good at-rest stability of SATA hard disk drives stored outdoors. >smile<
Cache in this case was the hard drives. If I recall correctly, we were using SAM-FS, which worked fairly well for the purpose even though it was slow as dirt: we could effectively mount the tape drive on Solaris servers and access the file system transparently.
Things have gotten better. I’m not sure if there were better affordable options in the late 1990s, though. I went from Alexa/IA to AltaVista, which solved the problem of storing web crawl data by being owned by DEC and installing dozens of refrigerator sized Alpha servers. Not an option open to Alexa/IA.
unless tape, and the infrastructure to support it, is dramatically cheaper than disk,
This turns out to be the case, with the cost difference growing as the archive size scales. Once you hit petabyte scale, it's not even close. However, most large-scale tape deployments also have disk involved, so it's usually not one or the other.

I guess slotting disks into a storage shelf is similar to loading a tape-changer robot. I can't imagine the backplane slots on a disk array being rated for a significant lifetime number of insertions/removals.
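A back-of-the-envelope sketch of why that crossover happens. Every price here is an illustrative assumption (tape media is cheap per TB, but the library robot and drives are a large fixed cost), not a vendor quote:

```python
DISK_PER_TB = 15.0      # assumed raw HDD cost, $/TB (illustrative)
TAPE_PER_TB = 5.0       # assumed LTO media cost, $/TB (illustrative)
TAPE_FIXED = 50_000.0   # assumed library robot + drives, $ (illustrative)

def archive_cost(tb: float):
    """Return (disk_cost, tape_cost) in dollars for tb terabytes."""
    return tb * DISK_PER_TB, TAPE_FIXED + tb * TAPE_PER_TB

for tb in (100, 5_000, 100_000):  # 100 TB, 5 PB, 100 PB
    disk, tape = archive_cost(tb)
    winner = "disk" if disk < tape else "tape"
    print(f"{tb:>7} TB: disk ${disk:>12,.0f}  tape ${tape:>12,.0f}  -> {winner}")
```

Under these made-up numbers, the fixed library cost dominates small archives, while at petabyte scale the per-TB media gap dominates, which matches the point about the gap growing with archive size.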
You also don't want your true backups online at all - that's the whole point.
Much more recently, I worked at a medium-large SaaS company but if you listened to my coworkers you'd think we were Google (there is a point where optimism starts being delusion, and a couple of my coworkers were past it.)
Then one day I found the telemetry pages for Wikipedia. I am hoping some of those charts were per hour, not per second; otherwise they are dealing with mind-numbing amounts of traffic.
It just reads like a clunky, low-quality article.
https://www.nytimes.com/2025/12/03/magazine/chatbot-writing-...
That said, their use may raise suspicion of AI, but they are _not_ proof of AI. I don't want to live in a world where people with large vocabularies are not taken seriously. Such an anti-intellectual stance is extremely dangerous.
It has nothing to do with "large vocabularies". I know who the people with large vocabularies were that originally caused the delving thing too, and they weren't American. (Mostly they were Nigerian.) I'm confused what you think specific kinds of metaphors involving sounds have to do with large vocabularies though.
> I've seen it (and metaphors using the other words you noted) used in fiction for my entire life
And the point is that this article isn't fiction. Or not supposed to be anyway.
https://www.nytimes.com/2025/12/12/us/high-school-english-te...
Somewhat contradictorily, I don't think you can ignore fiction when discussing technical writing, since technical writing (especially online) has become far more casual (and influenced by conversation, pop culture, and yes, even fiction) than it ever was before. So while, as I noted above, younger people are reading less these days, people are also less strict about how formal technical writing needs to be, so they may very well include words and expressions not commonly seen in that style of writing in the past.
I'm not arguing that these things can't be indicators of AI generation. I'm just arguing that they can't be proof of AI generation. And that argument only gets stronger as time goes on and more people are (sadly) influenced by things AI has generated.
"Here, amidst the repurposed neoclassical columns and wooden pews of a building constructed to worship a different kind of permanence, lies the physical manifestation of the "virtual" world. We tend to think of the internet as an ethereal cloud, a place without geography or mass. But in this building, the internet has weight. It has heat. It requires electricity, maintenance, and a constant battle against the second law of thermodynamics. As of late 2025, this machine—collectively known as the Wayback Machine—has archived over one trillion web pages.1 It holds 99 petabytes of unique data, a number that expands to over 212 petabytes when accounting for backups and redundancy.3"
can you help my small brain by pointing out where in this paragraph they talk about deduplication?