Things that are not cozy:
1) There's no way to monitor your monthly spend per host, credit left on the account, etc., apart from logging into your account in a browser and manually keeping a spreadsheet. There's no web API for it. You get an email warning when you have about 7 days of credit left. That's it.
2) Nothing is "a precious few megabytes" anymore. What seems like a negligible monthly spend at first can quickly creep up on you, and soon you're spending highly non-trivial amounts. Which you might not notice, due to 1), unless you are diligent in your accounting.
3) tarsnap restores are slow. Really, really slow. A full restore can take days if you have non-trivial amounts of data (and make sure you have enough credit in your account to pay for that server-to-client bandwidth!). My understanding is that throughput is directly related to your latency to the AWS datacenter where tarsnap is hosted. Outside of North America you can be looking at nearly dial-up speeds even on a gigabit link.
Again, a problem that can surprise you at the most inconvenient time. Incremental backups in a daily cronjob tend to transfer very small amounts of data, so you won't notice the slowness until you try to do a full restore. And you generally don't test that very often because you pay for server-to-client transfers.
There are some workarounds for 3) and there's a FAQ about it, but look at the mailing list and you'll see that it's something that surprises people again and again.
Amazon has Pre-Pay in a semi-open beta.
CloudFront has 1 TB/month of free egress, knocking a large chunk off a restore's cost. (Note: you should have encrypted your data yourself, and/or rely on the fact that S3 authorization/access control still works over CloudFront.)
At what seems to be <$2/TB per month (~$1/TB-month for Glacier Deep Archive, plus 9¢/GB for the metadata kept in S3 frequent access), no other solution comes close. The big issue is the lump cost of a restore, which is quickly worn down by being >$5/TiB/month cheaper than anybody else.
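A back-of-the-envelope check of the monthly figure, using the rates quoted above. The 1 GB-of-metadata-per-TB ratio is a made-up illustrative assumption, not a measured number:

```python
# Monthly cost per TB using the figures quoted above.
deep_archive_per_tb = 1.00   # $/TB-month, Glacier Deep Archive
metadata_per_gb = 0.09       # $/GB-month, metadata kept in S3
metadata_gb_per_tb = 1       # assumed metadata overhead (illustrative)

monthly_cost_per_tb = deep_archive_per_tb + metadata_per_gb * metadata_gb_per_tb
print(f"${monthly_cost_per_tb:.2f}/TB-month")  # $1.09/TB-month
```

Even with a generous metadata allowance, the total stays comfortably under $2/TB-month.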
Tarsnap, in contrast, has an explicit first-class ability to prevent a compromised client from damaging old backups.
It’s pretty simple to enable versioning and object lock on your S3 bucket, but it is another step if you’re using restic. Sure, if you just want all of that taken care of for you, you can use tarsnap, but you’re paying a 5x+ premium for it.
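For reference, a minimal sketch of what that extra step looks like. The bucket name and the 30-day COMPLIANCE retention are illustrative placeholders, and note that on real S3 object lock must be enabled when the bucket is created:

```python
# Sketch of the payloads for enabling versioning and a default object
# lock on an S3 bucket. Versioning is a prerequisite for object lock.
versioning_config = {"Status": "Enabled"}

object_lock_config = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}
    },
}

# With boto3 this would look roughly like:
#   s3 = boto3.client("s3")
#   s3.put_bucket_versioning(Bucket="backup-bucket",
#                            VersioningConfiguration=versioning_config)
#   s3.put_object_lock_configuration(Bucket="backup-bucket",
#                                    ObjectLockConfiguration=object_lock_config)
```

In COMPLIANCE mode not even the root account can shorten the retention window, which is the property you actually want against a compromised client.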
The other nice thing about restic is that since it’s just the client-side interface, it allows others to provide managed storage. BorgBase.com is a storage backend supported by restic that offers append-only backups, and it's cheaper than tarsnap.
https://restic.readthedocs.io/en/stable/030_preparing_a_new_...
I would like to see an explicit discussion of what permissions are needed for what operation. I would also like to see a clearly specified model in which backups can be created in a bucket with less than full permissions and, even after active attack by an agent with those same permissions, one can enumerate all valid backups in the bucket and be guaranteed to be able to correctly restore any backup as long as one can figure out which backup one wants to restore.
Instead there are random guides on medium.com describing a configuration that may or may not have the desired effect.
If you don’t understand S3 or don’t want to learn, then that’s fine, and you can pay the premium to tarsnap for simplifying it for you. But that’s your choice, not an issue with restic.
If you think differently, have you submitted a PR to restic’s docs to add the information you think should be there?
I think people are frequently trapped in some way of thinking (I'm not sure exactly what) that doesn't allow them to conceive of storage as anything other than block-based. They repeatedly try to reduce S3 to LBAs, or POSIX permissions (not even modern ACL-type permissions), or some other comparison that falls apart quickly.
The best I've come up with is "an object is a burned CD-R." Even that falls apart, though.
For that matter, suppose an attacker modifies an object and replaces it with corrupt or malicious contents, and I detect it, and the previous version still exists. Can the restic client, as written, actually manage the process of restoring it? I do not want to need to patch the client as part of my recovery plan.
(Compare to Tarsnap. By all accounts, if you back up, your data is there. But there are more than enough reports of people who are unable to usefully recover the data because the client is unbelievably slow. The restore tool needs to do what the user needs it to do in order for the backup to be genuinely useful.)
Tarsnap's deduplication works on the archive level, not on the particular files etc within the archive. Someone can set up a write-only Tarsnap key and trust the deduplication to work. A compromised machine with a write-only Tarsnap key can't delete Tarsnap archive blobs, it can only keep writing new archive blobs to try to bleed your account dry (which, ironically, the low sync rate helps protect against - not a defense for it, just a funny coincidence).
restic by contrast does do its dedupe at the file level, and what's more it seems to handle its own locks within its own files. Upon starting a backup, I observe restic first creates a lock and uploads it to my S3 compatible backend - my general purpose backups actually use Backblaze B2, not AWS S3 proper, caveat emptor. Then restic later attempts to delete that lock and syncs that change too to my S3 backend. That would require a restic key to have both write access and some kind of delete access to the S3 backend, at a minimum, which is not ideal for ransomware protection.
Many S3 backends including B2 have some kind of bucket-level object lock which prevent the modification/deletion of objects within that bucket for, say, their first 30 days. But this doesn't save us from ransomware either, because restic's own synced lock gets that 30 day protection too.
I can see why one would think you can't get around this without restic itself having something to say about it. Gemini tells me that S3 proper does let you set delete permissions at a granular enough level that you can tell it to only allow delete on locks/, with something like
# possible hallucination.
# someone good at s3 please verify
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDeleteLocksOnly",
      "Effect": "Allow",
      "Action": "s3:DeleteObject",
      "Resource": "arn:aws:s3:::backup-bucket/locks/*"
    }
  ]
}
But I have not tested this myself, and this isn't necessarily true across S3-compatible providers. I don't know how to get this level of granularity in Backblaze, for example, and that's unfortunate because B2 is about a quarter the cost of S3 for hot storage.
The cleanest solution would probably be some way for restic to handle locks locally, so that locks never need to hit the S3 backend in the first place. I imagine restic's developers are already aware of that, so this seems likely to be a much harder problem to solve than it first appears.
Another option may be to use a dedicated, restic-aware provider like BorgBase. It sounds like they handle their own disks, so they probably already have some kind of workaround in place for this. Of course, as others have mentioned, you may not get as many nines out of BorgBase as you would out of one of the more established general-purpose providers.
P.S.: Thank you both immensely for this debate, it's helped me advance the state of my own understanding a little further.
restic, and my own computers and storage, and the occasional rented device (VPS or similar, typically)
I find that the hassle of setting up my own stuff is still preferable to having to worry about managing bills, subscriptions, and third parties just changing their policies.
Restic + rclone is a very nice combo. Works really well.
My main backups are on rsync.net, though.
I'm carefully monitoring plakar in this space, wondering if anyone has experience with it and could share?
It looks like much of this, for both Colin and us, could be solved by moving away from AWS.
Using something like restic or borgbackup+rclone is pretty much the same experience as tarsnap but a fraction of the price.
$3000 per TB-year is accurate to my knowledge, and yes, it is at least one, and probably two, orders of magnitude more than what you can get with more general-purpose systems. Backblaze B2 is $72 per TB-year; AWS Glacier is $12 per TB-year, I believe; purchasing two 20 TB Seagate drives at $300 apiece, mirroring them, and replacing them every 3 years gives you about $10 per TB-year (potentially - most of us don't have 20 TB to back up in our personal lives). Those are the best prices I've been able to find with some looking [2].
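The drive arithmetic checks out; a quick sketch (the $6/TB-month B2 rate is inferred from the $72/TB-year figure, and the drive scenario assumes 20 usable TB after mirroring):

```python
# Sanity-check of the per-TB-year figures quoted above.
b2_per_tb_year = 6 * 12                     # $6/TB-month * 12 months = $72
drive_cost = 2 * 300                        # two mirrored 20 TB drives
drive_per_tb_year = drive_cost / (20 * 3)   # 20 usable TB over a 3-year lifespan
print(b2_per_tb_year, drive_per_tb_year)    # 72 10.0
```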
To me, when I was building out the digital resiliency audit, the pricing and model just seemed to tell me that tarsnap was for very specific kinds of critical data backups, and was not a great fit for general purpose stuff. Like a lot of other people here I also have a general-purpose restic based 3-2-1 backup going for the ~150 GB in /home I back up. [3] My use of tarsnap is partly a cheap hedge for the handful of bytes of data I genuinely cannot afford to lose against issues with restic, Backblaze B2, systemd, etc.
[1]: https://hiandrewquinn.github.io/tarsnap-calculator/
[2]: https://andrew-quinn.me/digital-resiliency-2025/#postscript-...
[3]: https://andrew-quinn.me/digital-resiliency-2025/#general-bac...
All the granular calculations (picodollars) on storage used plus time are fine. But tarsnap was always very expensive for larger amounts of data, especially data that cannot be well deduplicated.
> Tarsnap uses a prepaid model based on actual usage:
> Storage: 250 picodollars / byte-month of encoded data ($0.25 / GB-month)
> Bandwidth: 250 picodollars / byte of encoded data ($0.25 / GB)
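The picodollar rate and the per-GB rate are the same number, just a unit conversion:

```python
# 250 picodollars/byte * 10^9 bytes/GB = $0.25/GB.
picodollars_per_byte = 250
dollars_per_gb = picodollars_per_byte * 1e-12 * 1e9
print(dollars_per_gb)  # 0.25
```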
I use Backblaze B2 myself for most of my general-purpose backup needs. It's actually $6/TB-month, I believe.
Tarsnap fills but one niche in my overall system. It's a very important niche for which I haven't found any other providers who do anything similar (keyfiles, prepaid, borderline anonymous etc), but it's not where I store the vast majority of my stuff.
One use case: I don't like the idea of having any accounts at all which I log into without the aid of a password manager. That creates a bootstrapping problem - how am I supposed to log into Google Drive to get my Google Drive password? A prepaid keyfile-based model is one particularly robust way of solving this. You stick your e.g. 100 kB password database in there, print out and shred the keyfile, stick the printout in a fireproof safe, and be virtually certain that whatever you put in Tarsnap has been untouched however many years you come back to it later. Print it on archival paper with some silica gel packets and it might survive for millennia in your weird subterranean vampire family castle.
"The business won't survive that long." I'm not so sure. Its ongoing costs appear minimal, and it generates eye-watering amounts of float. $5 paid today is >$200 fifty years from now when compounded at 8% real interest. That very fact makes it much more likely that Tarsnap actually will survive for those 50 years, which should make us more likely to trust it, which... You see where this is going. This is one of those things where aggressively pricing too close to the bare-metal costs might actually be a bad thing for a very important subset of users. One might even make the argument that, if the margins are as good as I'm supposing they are, then depending on the goals of the founder, Tarsnap is more likely to outlive S3 than S3 is to outlive Tarsnap.
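The float arithmetic quoted above, worked out:

```python
# $5 compounded at 8% real interest for 50 years.
future_value = 5 * 1.08 ** 50
print(round(future_value, 2))  # ~234.5, i.e. comfortably > $200
```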
But again: Primarily a hobby.
https://support.google.com/accounts/answer/1187538?sjid=3244...
print those and your password, stick the printout in a fireproof safe
Caution may be justified when it comes to doing this for something with as wide a surface area as a Google account. For me, if I'm going to have to compromise on 2FA somewhere anyway, I might as well go full hog and get an honest to goodness keyfile.
[1]: https://andrew-quinn.me/digital-resiliency-2025/#wait-what-a...
Maybe it's good for storing stuff that's illegal to possess?
If there were a simple but "solid" GUI backup tool with (true) PAYG I'd migrate away from Tarsnap, but there isn't one.
And Restic is good quality software.
You might be tempted to think: it's a popular service, it can't be that bad.
But, it really can be, and if you've not tried it yourself, you'll only find out when you need it. Which could be way too late.
I'm backing up about 8TiB of data nightly using BorgBackup[0] + InterServer[1] and pay $240/yr.
This gives me differential encrypted rotating backups that are 100% mine and do not lock me into any specific storage vendor.