Posted by tradertef 13 hours ago

I run multiple $10K MRR companies on a $20/month tech stack (stevehanov.ca)
703 points | 406 comments
hackingonempty 11 hours ago|
> The enterprise mindset dictates that you need an out-of-process database server. But the truth is, a local SQLite file communicating over the C-interface or memory is orders of magnitude faster than making a TCP network hop to a remote Postgres server.

I don't want to diss SQLite because it is awesome and more than adequate for many/most web apps but you can connect to Postgres (or any DB really) on localhost over a Unix domain socket and avoid nearly all of the overhead.

It's not much harder to use than SQLite, you get all of the Postgres features, it's easier to run reports or whatever on the live db from a different box, and much easier if it comes time to set up a read replica, HA, or run the DB on a different box from the app.

I don't think running Postgres on the same box as your app is the same class of optimistic over provisioning as setting up a kubernetes cluster.
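The Unix-socket point above is just a connection-string change: libpq treats a `host` value that begins with a slash as the directory containing the socket rather than a hostname. A sketch (the socket path is the common Debian/Ubuntu default and `myapp` is a placeholder, not universal):

```
# TCP to localhost
psql "host=127.0.0.1 port=5432 dbname=myapp"

# Unix domain socket: host is the directory containing the socket
psql "host=/var/run/postgresql dbname=myapp"
```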

andersmurphy 9 hours ago||
Sqlite smokes postgres on the same machine even with domain sockets [1]. This is before you get into using multiple sqlite databases.

What features does postgres offer over sqlite in the context of running on a single machine with a monolithic app? Application functions [2] mean you can extend it however you need with the same language you use to build your application. It also has a much better backup and replication story thanks to litestream [3].

- [1] https://andersmurphy.com/2025/12/02/100000-tps-over-a-billio...

- [2] https://sqlite.org/appfunc.html

- [3] https://litestream.io/

The main problem with sqlite is the defaults are not great and you should really use it with separate read and write connections where the application manages the write queue rather than letting sqlite handle it.
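For illustration, a minimal sketch of the non-default setup being described, using Python's stdlib `sqlite3` (the specific pragma values are common recommendations, not from the comment): WAL mode, a busy timeout, and one dedicated write connection with any number of read-only connections.

```python
import sqlite3

def open_writer(path: str) -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode=WAL")    # readers no longer block the writer
    conn.execute("PRAGMA synchronous=NORMAL")  # safe under WAL, much faster than FULL
    conn.execute("PRAGMA busy_timeout=5000")   # wait on contention instead of erroring
    return conn

def open_reader(path: str) -> sqlite3.Connection:
    # Read-only connections: open as many as you like
    conn = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
    conn.execute("PRAGMA busy_timeout=5000")
    return conn

# The application funnels all writes through this one connection
# (e.g. behind a queue or a lock), never through the readers.
writer = open_writer("app.db")
with writer:
    writer.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
    writer.execute("INSERT OR REPLACE INTO kv VALUES ('a', '1')")

reader = open_reader("app.db")
print(reader.execute("SELECT v FROM kv WHERE k = 'a'").fetchone()[0])  # prints 1
```

The write queue lives in the application (a lock or channel around `writer`), which is the "application manages the write queue" part of the comment.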

andriy_koval 2 hours ago|||
> Sqlite smokes postgres on the same machine even with domain sockets [1]

for inserts only, into a single table with no indexes.

Also, I didn't get why sqlite was allowed to do batching and pgsql was not.

andersmurphy 1 hour ago||
> for inserts only into singe table with

Actually, there are no inserts in this example; each transaction is 2 updates within a logical transaction that can be rolled back (savepoint). So in raw terms you are talking 200k updates per second and 600k reads per second (as there's a 75%/25% read/write mix in that example). Also worth keeping in mind that updates are slower than inserts.

> no indexes.

The tables have an index on the primary key with a billion rows. More indexes would add write amplification which would affect both databases negatively (likely PG more).

> Also, I didn't get why sqlite was allowed to do batching and pgsql was not.

Interactive transactions [1] are very hard to batch over a network. To get the same effect you'd have to limit PG to a single connection (defeating the point of MVCC).

- [1] An interactive transaction is a transaction where you intermingle database queries and application logic (running on the application).
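A sketch of such an interactive transaction in SQLite, where the application logic between queries costs only a function call (the schema and amounts are made up; the shape mirrors the "2 updates inside a savepoint" described earlier):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manage transactions manually
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)")

def transfer(amount: int) -> bool:
    conn.execute("BEGIN IMMEDIATE")  # take the write lock up front
    conn.execute("SAVEPOINT xfer")   # logical transaction that can be rolled back
    (balance,) = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()
    if balance < amount:             # application logic intermingled with queries
        conn.execute("ROLLBACK TO xfer")
        conn.execute("COMMIT")
        return False
    conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = 1", (amount,))
    conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = 2", (amount,))
    conn.execute("RELEASE xfer")
    conn.execute("COMMIT")
    return True

print(transfer(40))   # True
print(transfer(500))  # False: rolled back, balances untouched
```

Over a network, every statement in `transfer` would be a round trip, which is why this pattern is so hard to batch against a remote server.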

andriy_koval 56 minutes ago||
Thank you for the clarification, I was wrong in my previous comment.

> - [1] An interactive transaction is a transaction where you intermingle database queries and application logic (running on the application).

could you give a specific example of why you think SQLite can do batching and PG cannot?

maccard 7 hours ago||||
Thing is though - either of those options is still multiple orders of magnitude faster than running on a remote host. Either will work, either will scale way farther than you reasonably expect it to.
eduction 1 hour ago||||
> What features postgres offers over sqlite in the context of running on a single machine with a monolithic app

The same thing SQL itself buys you: flexibility for unforeseen use cases and growth.

Your SQLite benchmark is based on having just one write connection for SQLite but all eight writable connections for Postgres. Even in the context of a single app, not everyone wants to be tied down that way, particularly when thinking about how it might evolve.

If we know our app would not need to evolve we could really maximize performance and use a bespoke database instead of an rdbms.

It seems a little aggressive for you to jump on a comment about how it’s reasonable to run Postgres sometimes with “SQLite smokes it in performance.” That’s true, when you can accept its serious constraints.

As a wise man once said, “Postgres is great and there's nothing wrong with using it!”

locknitpicker 9 hours ago||||
> Sqlite smokes postgres on the same machine even with domain sockets [1].

SQLite on the same machine is akin to calling fwrite. That's fine. This is also a system constraint as it forces a one-database-per-instance design, with no data shared across nodes. This is fine if you're putting together a site for your neighborhood's mom and pop shop, but once you need to handle a request baseline beyond a few hundred TPS and you need to serve traffic beyond your local region then you have no alternative other than to have more than one instance of your service running in parallel. You can continue to shoehorn your one-database-per-service pattern onto the design, but you're now compelled to find "clever" strategies to sync state across nodes.

Those who know better than to do "clever" simply slap in a Postgres node and call it a day.

andersmurphy 8 hours ago|||
> SQLite on the same machine is akin to calling fwrite.

Actually 35% faster than fwrite [1].

> This is also a system constraint as it forces a one-database-per-instance design

You can scale incredibly far on a single node and have much better uptime than github or anthropic. At this rate, maybe even AWS/cloudflare.

> you need to serve traffic beyond your local region

Postgres still has a single node that can write. So most of the time you end up region sharding anyway. Sharding SQLite is straightforward.

> This is fine if you're putting together a site for your neighborhood's mom and pop shop, but once you need to handle a request baseline beyond a few hundreds TPS

It's actually pretty good for running a real time multiplayer app with a billion datapoints on a $5 VPS [2]. There's nothing clever going on here, all the state is on the server and the backend is fast.

> but you're now compelled to find "clever" strategies to sync state across nodes.

That's the neat part: you don't. Because, for most things that are not uplink limited (being a CDN, Netflix, Dropbox), a single node is all you need.

- [1] https://sqlite.org/fasterthanfs.html

- [2] https://checkboxes.andersmurphy.com
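The "sharding SQLite is straightforward" point above usually amounts to hashing a tenant or user key to one of N database files. A hypothetical sketch (shard count and schema are made up):

```python
import sqlite3
import zlib

N_SHARDS = 4  # hypothetical shard count

def shard_path(user_id: str) -> str:
    # crc32 is stable across runs, so a user always maps to the same file
    return f"shard_{zlib.crc32(user_id.encode()) % N_SHARDS}.db"

def conn_for(user_id: str) -> sqlite3.Connection:
    conn = sqlite3.connect(shard_path(user_id))
    conn.execute("PRAGMA journal_mode=WAL")
    return conn

# Each user's rows always land in the same file, so writes to
# different shards never contend with each other.
c = conn_for("alice")
with c:
    c.execute("CREATE TABLE IF NOT EXISTS events (user TEXT, payload TEXT)")
    c.execute("INSERT INTO events VALUES ('alice', 'login')")
```

Region sharding is the same idea with the key being the user's region instead of a hash.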

shimman 2 hours ago|||
Maybe an "out there" question, but any tech book suggestions you'd recommend that can teach an average dev how to build highly performant software with minimal systems?

I feel like the advice from people with your experience is worth way way way way more than what you'd hear from big tech. Like you said yourself, big tech tends to recommend extremely complicated systems that only seem worth maintaining if you have a trillion dollar monopoly behind them.

wookmaster 6 hours ago||||
How do you manage HA?
andersmurphy 6 hours ago|||
Backups, litestream gives you streaming replication to the second.

Deployment, caddy holds open incoming connections whilst your app drains the current request queue and restarts. This is all sub second and imperceptible. You can do fancier things than this with two versions of the app running on the same box if that's your thing. In my case I can also hot patch the running app as it's the JVM.

Server hard drive failing etc you have a few options:

1. Spin up a new server/VPS and litestream the backup (the application automatically does this on start).

2. If your data is truly colossal have a warm backup VPS with a snapshot of the data so litestream has to stream less data.

Pretty easy to have 3 to 4 9s of availability this way (which is more than github, anthropic etc).
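For a sense of the setup effort being described: a minimal litestream configuration is a few lines of YAML (the paths and bucket name below are made up):

```yaml
# /etc/litestream.yml
dbs:
  - path: /var/lib/app/app.db
    replicas:
      - url: s3://my-backup-bucket/app.db

# run with:     litestream replicate
# restore with: litestream restore -o /var/lib/app/app.db s3://my-backup-bucket/app.db
```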

rienbdj 5 hours ago|||
My understanding is litestream can lose data if a crash occurs before the backup replicates to object storage. Doesn't this make it an unfair comparison to Postgres on RDS, for example?
andersmurphy 5 hours ago|||
Last I checked RDS uploads transaction logs for DB instances to Amazon S3 every five minutes. Litestream by default does it every second (you can go sub second with litestream if you want).
sudodevnull 5 hours ago|||
your understanding is very wrong. please read the docs or better yet the actual code.
locknitpicker 5 hours ago|||
> Backups, litestream gives you streaming replication to the second.

You seem terribly confused. Backups don't buy you high availability. At best, they buy you disaster recovery. If your node goes down in flames, your users don't continue to get service because you have an external HD with last week's db snapshots.

andersmurphy 4 hours ago||
If anything backups are the key to high availability.

Streaming replication lets you spin up new nodes quickly with sub second data loss in the event of anything happening to your server. It makes having a warm standby/failover trivial (if your dataset is large enough to warrant it).

If your backups are a week old snapshots, you have bigger problems to worry about than HA.

rovr138 6 hours ago|||
No offense, but you wait. Like everyone's been doing for years on the internet and still does.

- When AWS/GCP goes down, how do most handle HA?

- When a database server goes down, how do most handle HA?

- When Cloudflare goes down, how do most handle HA?

The downtime here is: the server crashed, routing failed, or there's some other issue with the host. You wait.

One may run pingdom or something to alert you.

locknitpicker 5 hours ago||
> When AWS/GCP goes down, how do most handle HA?

This is a disingenuous scenario. SQLite doesn't buy you uptime if you deploy your app to AWS/GCP, and you can just as easily deploy a proper RDBMS such as postgres to a small provider/self-host.

Do you actually have any concrete scenario that supports your belief?

runako 4 hours ago||
> SQLite doesn't buy you uptime if you deploy your app to AWS/GCP

This is...not true of many hyperscaler outages? Frequently, outages will leave individual VMs running and affect only higher-order services typically used in more complex architectures. Folks running SQLite on an EC2 instance often will not be affected.

And obviously, don't use us-east-1. This One Simple Trick can improve your HA story.

locknitpicker 5 hours ago|||
> You can scale incredibly far on a single node

Nonsense. You can't outrun physics. The latency across the Atlantic is already ~100ms, and from the US to Asia Pacific can be ~300ms. If you are interested in performance and you need to shave off ~200ms in latency, you deploy an instance closer to your users. It makes absolutely no sense to frame the rationale around performance if your systems architecture imposes a massive performance penalty in networking just to shave a couple of ms in roundtrips to a data store. Absurd.

klooney 4 hours ago|||
You need regional state, or you're still back hauling to the db with all the lag.
andersmurphy 4 hours ago|||
That only solves read latency not write latency. Unless you don't care about consistency.
tl 7 hours ago||||
https://antonz.org/sqlite-is-not-a-toy-database/ — 240K inserts per second on a single machine in 2021. The problem you describe is real, but the TPS ceiling is wrong by three orders of magnitude on modern hardware.
pdhborges 3 hours ago||
Do you know why it is a toy? Because in a real prod environment, after inserting 240k rows per second for a while, you have to deal with the fact that schema evolution is required. Good luck migrating those huge tables with SQLite's ALTER TABLE implementation
shimman 2 hours ago|||
This doesn't seem like a toy but you know... realizing different systems will have different constraints.

Not everyone needs monopolistic tech to do their work. There's probably less than 10,000 companies on earth that truly need to write 240k rows/second. For everyone else, we can focus on better things.

devmor 3 hours ago|||
Try doing that on a “real” DB with hundreds of millions of rows too. Anything more than adding a column is a massive risk, especially once you’ve started sharding.
pdhborges 2 hours ago||
Yes, it might be risky. But most schema evolution changes can be done with no or minimal downtime, even if you have to do them in multiple steps. When is a simple ALTER going to be totally unacceptable if you are using SQLite?
rpdillon 9 hours ago||||
I wonder what percentage of services run on the Internet exceed a few hundred transactions per second.
icedchai 8 hours ago|||
I’ve seen multimillion dollar “enterprise” projects get nowhere close to that. Of course, they all run on scalable, cloud native infrastructure costing at least a few grand a month.
egwor 9 hours ago|||
I think the better question to ask is what services peak at a few hundred transactions per second?
darkwater 6 hours ago|||
I mean, your "This is fine for" is almost literally the whole point of TFA, that you can go a long way, MRR-wise, with a simpler architecture.
noahbp 6 hours ago|||
FYI, the color gradient on your website is an easy tell that it was vibe coded: https://prg.sh/ramblings/Why-Your-AI-Keeps-Building-the-Same...
andersmurphy 6 hours ago||
A blog that's 11 years old and uses a minimalist CSS framework https://picocss.com ?

It's a static blog that renders markdown... there's literally nothing to code, let alone vibe code.

eurleif 11 hours ago|||
Looks like the overhead is not insignificant:

    Running 100,000 `SELECT 1` queries:
    PostgreSQL (localhost): 2.77 seconds
    SQLite (in-memory): 0.07 seconds
(https://gist.github.com/leifkb/1ad16a741fd061216f074aedf1eca...)
piker 10 hours ago|||
I love them both too but that might not be the best metric unless you’re planning to run lots of little read queries. If you’re doing CRUD, simulating that workflow may favor Postgres given the transactional read/write work that needs to take place across multiple concurrent connections.
locknitpicker 9 hours ago||
> I love them both too but that might not be the best metric unless you’re planning to run lots of little read queries.

Exactly. Back in the real world, anyone who is faced with that sort of use case will simply add a memory cache and not bother with the persistence layer.

piker 7 hours ago||
Not sure that’s always right either though. For example Mapbox used to use an SQLite database as the disk cache for map tile info. You cannot possibly store that amount of data in memory, so it’s a great use case.
bob1029 10 hours ago||||
This is mostly about thread communication. With SQLite you can guarantee no context switching. Postgres running on the same box gets you close but not all the way. It's still in a different process.
andersmurphy 8 hours ago||
This. Run an app on the same box as PG and you can easily be plagued by out of memory etc (as there's memory contention between the two processes).
madduci 9 hours ago||||
Most important is that the local SQLite gets proper backups, so a restore goes without issues
pdhborges 8 hours ago||
It gets proper backups if you back it up the right way: https://sqlite.org/backup.html
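The "right way" linked above is SQLite's online backup API, which Python's stdlib `sqlite3` exposes as `Connection.backup` (the filenames here are made up):

```python
import sqlite3

# A source database (in-memory here; a live file works the same way)
src = sqlite3.connect(":memory:")
with src:
    src.execute("CREATE TABLE t (x)")
    src.execute("INSERT INTO t VALUES (1)")

# The backup API copies a consistent snapshot even while other
# connections keep writing, unlike naively copying the file mid-write.
dst = sqlite3.connect("backup.db")
src.backup(dst)

print(dst.execute("SELECT count(*) FROM t").fetchone()[0])  # prints 1
```

Copying the `.db` file with `cp` while a write is in flight can capture a torn, unusable database, which is the failure mode the backup API avoids.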
locknitpicker 9 hours ago||||
A total performance delta of <3s on ~300k transactions is indeed the definition of irrelevant.

Also:

> PostgreSQL (localhost): (...) SQLite (in-memory):

This is a rather silly example. What do you expect to happen to your data when your node restarts?

Your example makes as much sense as comparing Valkey with Postgres and proceeding to proclaim that the performance difference is not insignificant.

iLoveOncall 10 hours ago||||
Why are you comparing PostgreSQL to an in-memory SQLite instead of a file-based one? Wow, memory is faster than disk, who would have thought?
eurleif 10 hours ago||
Because it doesn't make a difference, because `SELECT 1` doesn't need to touch the database:

    Running 100,000 `SELECT 1` queries:
    PostgreSQL (localhost): 2.71 seconds
    SQLite (in-memory): 0.07 seconds
    SQLite (tempfile): 0.07 seconds
(https://gist.github.com/leifkb/d8778422d450d9a3f103ed43258cc...)
oldsecondhand 9 hours ago|||
Why are you doing meaningless microbenchmarks?
saturn_vk 6 hours ago||
Are you claiming that this does not show the speed difference between socket vs in process communication?
j45 14 minutes ago||||
Queries for small SaaS are usually in the thousands of records, if not hundreds.
locknitpicker 9 hours ago||||
> Because it doesn't make a difference, because `SELECT 1` doesn't need to touch the database:

I hope you understand that your claim boils down to stating that SQLite is faster at doing nothing at all, which is a silly case to make.

eurleif 9 hours ago||
The original claim being discussed is about the overhead of an in-process database vs. a database server in a separate process, not about whether SQLite or PostgreSQL have a faster database engine.
nchmy 3 hours ago|||
How about pg on Unix socket?
eurleif 56 minutes ago||

    Running 100,000 `SELECT 1` queries:
    PostgreSQL (localhost): 2.84 seconds
    PostgreSQL (Unix socket): 1.93 seconds
    SQLite (in-memory): 0.07 seconds
    SQLite (tempfile): 0.06 seconds
(https://gist.github.com/leifkb/b940b8cdd8e0432cc58670bbc0c33...)
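The SQLite half of a benchmark like this is easy to reproduce in a few lines (a sketch, not the gist's actual code; absolute numbers will vary by machine):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

start = time.perf_counter()
for _ in range(100_000):
    cur.execute("SELECT 1").fetchone()
elapsed = time.perf_counter() - start

# Each round trip is just a C function call: no socket, no process hop
print(f"SQLite (in-memory): {elapsed:.2f} seconds")
```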
vixalien 9 hours ago||||
Would be nice to see PGLite[1] compared too

1: https://pglite.dev/

stavros 9 hours ago||||
It is insignificant if you're doing 100k queries per day, and you gain a lot for your 3 extra seconds a day.
Izmaki 9 hours ago|||
What a useful "my hello-world script is faster than your hello-world script" example.
pipeninja 21 minutes ago|||
You can't simply copy/paste a Postgres database though...also you'd be surprised how fast SQLite can be...I've used SQLite for projects where I just couldn't get the performance elsewhere. For example, I had a names database with over 100 million rows in it for converting names to diminutives (e.g. David to Dave) and the inverse...after I precomputed a metric ton of indices it went like a rocket. Sure the file was quite big but oh boy was it quick.
usernametaken29 10 hours ago|||
I have used SQLite with extensions in extreme throughput scenarios. We're talking running millions of documents per second through it in order to do disambiguation. I won't say this wouldn't have been possible with a remote server, but it would have been a significant technical challenge. Instead we packed up the database on S3, and each instance got a fresh copy and hammered away at the task. SQLite is the time tested alternative for when you need performance, not features
jampekka 11 hours ago|||
> It's not much harder to use than SQLite, you get all of the Postgres features, it's easier to run reports or whatever on the live db from a different box, and much easier if it comes time to setup a read replica, HA, or run the DB on a different box from the app.

Isn't this idea to spend a bit more effort and overhead to get YAGNI features exactly what TFA argues against?

eikenberry 56 minutes ago|||
> It's not much harder to use than SQLite, you get all of the Postgres features [..]

More features is a net negative if you don't need those features. Ideally you want your DB to support exactly what you need and nothing more. Not typically realistic but the closer you can get the better.

leptons 13 minutes ago||
A feature you don't think you need today, might be one you actually need tomorrow. It would be short-sighted to choose some tech based only on what you need today. If the extra features don't cost you anything, I can't see that as a "net negative".
himata4113 1 hour ago|||
As someone who set up a k3s cluster for a single-user project, I feel called out.

The thing is, once you learn the technology, everything else seems like more work than the "easy way".

jbverschoor 10 hours ago|||
I've been doing that for decades.. People seem to simply not know about unix architecture.

What I like about sqlite is that it's simply one file

dxxvi 6 hours ago||
But ... when you use the WAL mode, you have 3 files :-)
weego 9 hours ago|||
That's just swapping another enterprise focused concern into the mix. Your database connection latency is absolutely not a concerning part of your system.
9rx 22 minutes ago||
It's not a significant concern because we've learned the hacks to work around it, but it is pretty freeing to not have to put hacks into your app.
Jolter 11 hours ago|||
I mean, you’re not wrong about the facts, but it’s also pretty trivial to migrate the data from SQLite into a separate Postgres server later, if it turns out you do need those features after all. But most of the time, you don’t.
pdhborges 10 hours ago||
I bet that takes more time than the 5 extra minutes you'd take to set up Postgres on the same box upfront.
SpaceNoodled 3 hours ago||
To export a database? Probably even faster. And that's ignoring the difference in performance.
pdhborges 2 hours ago||
So you are migrating from Sqlite to Postgres because you need it. What is the state of your product when you need to do this migration? Is your product non trivial? Are you now dependent on particular performance characteristics of Sqlite? Do you now need to keep your service running 24/7? Accounting for all of that takes way more than 5 minutes. The only way to beat that is if you still have a toy product and you can just export the database and import it and pray that it all works as a migration strategy.
winrid 3 hours ago|||
you also get a much better query execution engine, so if you need to run reports or analytics they will be faster
dizhn 10 hours ago|||
Author's own 'auth' project works with sqlite and postgres.
direwolf20 9 hours ago|||
IIRC TCP/IP through localhost actually benchmarked faster than Unix sockets because it was optimized harder. Might've been fixed now. Unix sockets give you the advantage of authentication based on the user ID of whoever's connecting.

My experience with sqlite for server-based apps has been that as your app grows, you almost always eventually need something bigger than sqlite and need to migrate anyway. For a server-based app, where minimizing deployment complexity isn't an extremely important concern, and with mixed reads and writes, it's rarely a bad idea to use Postgres or MariaDB from the start. Yes there are niche scenarios where sqlite on the server might be better, but they're niche.

lichenwarp 9 hours ago||
ORDERS OF MAGNITUDE NEWS
senko 10 hours ago||
If this sounds like basic advice, consider there are a lot of people out there that believe they have to start with serverless, kubernetes, fleets of servers, planet-scale databases, multi-zone high-availability setups, and many other "best practices".

Saying "you can just run things on a cheap VPS" sounds amateurish: people are immediately out with "Yeah but scaling", "Yeah but high availability", "Yeah but backups", "Yeah but now you have to maintain it" arguments, that are basically regurgitated sales pitches for various cloud platforms. It's learned helplessness.

Lalabadie 6 minutes ago||
More and more, I'm seeing this issue with agent-based workflows as well. The training set is full (in quantity and in proportion) of codebases that are organized for very large teams, so that's what most prompted architectures lead to.

In my case I'm seeing it a lot on the front-end side. My clients end up with single-page apps that install Shadcn, Tailwind, React, React Router, Axios, Zod, React Form and Vite, all to center some input elements and perform a few in-browser API calls. It's a huge maintenance burden even before they start getting value out of it.

These large setups are often a correct answer, but not the right one for the situation.

operatingthetan 2 hours ago|||
When I was a consultant we would plan out 25-piece cloud deployments for little pie-in-the-sky apps that would never see more than 200 users. Everyone has been trained that 'cloud' means a lot of expensive moving parts and doesn't stop to plan their deployments beyond that.
echelon 1 hour ago||
Digital ocean has Kubernetes ffs.

It's all of five minutes to write a deployment yaml and ingress and have literally anything on the web for a handful of dollars a month.

I've written rust services doing 5k QPS on DO's cheapest kube setup.

It's not rocket science.

Serverless node buns with vite reacts are more complicated than this.

Ten lines of static, repeatable, versioned yaml config vs a web based click by click deploy installer with JavaScript build pipelines and magical well wishes that the pathing and vendor specific config are correct.

And don't tell me VPS FTP PHP or sshing into a box to special snowflake your own process runner are better than simple vanilla managed kube.

You can be live on the web from zero in 5 minutes with Digital Ocean kube, and that's counting their onboarding.

senko 34 minutes ago||
> It's not rocket science.

Neither is "apt install caddy".

GorbachevyChase 3 hours ago|||
Don’t forget that people involved in information technology procurement will pay very large sums of the company’s money to not have to understand anything.
kandros 8 hours ago|||
“Cloud-native natives” had so many free plans that they had no need to understand what a basic app really needs.
jayd16 2 hours ago|||
Hmm backups seems like an important one.
andersmurphy 21 minutes ago|||
Litestream [1] is quick to set up and has point in time backup to the second.

- [1] https://litestream.io

mamcx 1 hour ago||||
Yes, and it is super easy.

I do it like this: a cron to run the backup, then rsync to https://www.rsync.net, then an after script that checks it ran and posts the analysis to my telegram.

That's it.

Scaled 1 hour ago||
Another good option is Restic, since snapshots let you go back in time. That is useful in case you accidentally delete/break something and you're not quite fast enough to restore from backup before the next cron runs.
cj 2 hours ago||||
“Guys, we need to postpone our beta launch! We need another week to implement a backup strategy with point in time recovery!”

You don’t need backups until you have customers.

jayd16 2 hours ago||
So go live without testing the backup in the beta at all?
cj 2 hours ago||
Yes! Why build a backup process before you know you have data worth backing up?
tremon 1 hour ago|||
The data recovery process needs to be validated too, preferably before customer data actually needs to be recovered.
jayd16 2 hours ago|||
Why go live if you don't have a reasonable expectation of users?

Worrying about HA when you don't have customers that need it is one thing, but I wouldn't want to be in a place where I have to put a banner on the website asking users to please make a new account because we had an oopsie.

McGlockenshire 1 hour ago|||
And also incredibly trivial to fix. Most VPS providers include their own backup services, and for the rest there's rsnapshot and some other cheaper VPS somewhere else to keep it "off site."

Too many have forgotten what it means to administrate a single system. You can do a lot with very simple tooling.

throw-the-towel 6 hours ago|||
And now big tech often doesn't even have the high availability to show for all that complexity.
lamasery 5 hours ago|||
The better availability and scalability of “the cloud” always relied on so many things being done and maintained just right by just the right people that I don’t think it’s ever been broadly true.

You get such a large performance malus and increase in complexity right from the start with The Cloud that it starts at a serious deficit, and only eventually maybe overcomes that to be overall beneficial with the right workload, people, and processes. Most companies are lacking a minimum of two of those to justify “the cloud”.

And that’s without even considering the cost.

What I think it actually is, is a way for companies that can’t competently (I mean at an organizational/managerial level) maintain and adequately make-available computing resources, to pay someone else to do it. They’re so bad at that, that they’re willing to pay large costs in money, performance, and maybe uptime to get it.

faangguyindia 3 hours ago|||
Remember if you ever feel disappointed, the king of scale Google playstore updates stats once a day
CodesInChaos 33 minutes ago||
Not just stats. Configuration changes take around a day to take effect as well. Figuring out how to do authentication and permissions was such a pain. A half-assed integration with google cloud doesn't quite behave like the normal google cloud. Vague error messages. And every time you changed something you couldn't be certain whether your new setting was correct until you had waited roughly a day.
ramraj07 10 hours ago|||
Apparently the phrase cargo cult software engineering is not common anymore. Explains these things perfectly.
rcbdev 9 hours ago|||
I end up explaining this term to every junior developer that doesn't know it sooner or later, the same way I explain bike shedding to all PMs that don't know it... often sooner, rather than later.

It seems to really help if you can put a term to it.

throwatdem12311 7 hours ago|||
Heh, I was gonna say cargo cult might mean something different in today's programming landscape, but then I thought about it for a second and it actually reinforces the meaning.
InfraScaler 9 hours ago||
I don't know what to say. People keep saying these engineers exist and here I am not having seen a single one, and I follow many indie hacker communities.
dwedge 9 hours ago|||
A devops coworker found my blog and asked me how I host it - is it Kubernetes? I told him it's a dedicated server and he seemed amazed. And this was just a blog. It's real
manquer 1 hour ago|||
I've heard the same story many times before.

Devops engineers who did not know the basics of cable management or even what a cage nut is, amazed to see a small office running 3 used dell servers bought dirt cheap, shocked when it sounded like an air raid when they booted up, thinking hot swapping was just magic.

It is always the case - earlier, in the 80s-90s, programmers were shaking their heads when people stopped learning assembly and trusted the compilers fully.

This is nothing new and hardly shocking: new skills are learnt only if valuable, otherwise the layer below seems like magic.

InfraScaler 9 hours ago|||
Does your coworker run a blog on k8s?
dwedge 9 hours ago||
None of them self host anything at all. It's like that skill was totally skipped. But they advise and consult on infra
Hnrobert42 8 hours ago||
Well, by the time you are hiring a dedicated infra role, you should be past the single VPS stage.
dwedge 8 hours ago|||
My point is that none of these coworkers have ever been at that stage. He was surprised about me hosting something because he seems to think hosting is expensive and for companies. Straight in at the top end of k8s and microservices
wookmaster 6 hours ago||
There's plenty of people that got a CS degree and went to work, and this is only a job for them; they have no interest outside of work. Unfortunately I'm not one of those people, so I get off work troubleshooting issues just to troubleshoot issues at home lol. Though there aren't that many - just my choice to self host cameras through HomeKit sometimes falls apart somehow - but I'm also squeezing every KB of RAM out of that beelink I can.
dwedge 5 hours ago||
Don't get me wrong I don't think a homelab is necessary, but I think people who have only done this in a big corporate environment are doing themselves a disservice - either a small company or a homelab can fix that itch, but like you say a lot of people don't have the interest
ryandrake 2 hours ago||
It's like a developer who went straight from knowing nothing about programming to JavaScript and never looked back. They missed C, they missed assembly, they missed cycle counting, they missed knowing what your memory footprint is at all times in your application, they missed keeping your inner loops tight and in the cache... It's not just "oh this person doesn't have a nerdy hobby." These are real skill holes in [many] developers' backgrounds, just like knowing how to host something on bare metal+OS is a real skill hole for some devops people.
deaux 7 hours ago||||
I've worked at a startup that could've trivially run on a single VPS and kept things simple, yet had a dedicated infra guy using a full k8s setup.
Zetaphor 2 hours ago|||
I once interviewed for a small print shop that was proudly throwing out every AWS product name when describing their stack. They serve a few hundred customers and their previous system worked for decades entirely over email and a web form. I decided I wasn't interested around the point where he explained how they're migrating to lambdas
ryandrake 2 hours ago||
LOL, I'm laughing and I wish it was because this was funny rather than terrifying.
skeeter2020 6 hours ago||||
hey - devs aren't the only ones who fall into the premature optimization trap! Everyone from the CTO envisioning the scale of their future startup down to the IT intern is influenced by this, plus it's in the best interest of a dedicated infra guy to have a lot of dedicated infra. If you don't manage people, k8s can become your kingdom and its size a badge of importance.
deaux 4 hours ago||
In this case I think it was a bit of CTO envisioning scale, then a bit of CTO genuinely overestimating what is needed, plus a good amount of CTO just being the average nerdy dev who likes the idea of shiny toys and cool sounding stuff - "we're running on k8s!".

A year or so after I left they ran out of money. They would've lasted longer if the infra guy would've just stayed the backend guy and helped get projects done more quickly instead of shiny k8s setups for projects with a dozen end-users per day. Recently I saw that the CTO has started a new startup - and ironically the only guy who he took with him onto the new team looks to have been the infra guy!

I don't blame infra guy, he genuinely believed he was doing the right thing.

InfraScaler 5 hours ago|||
How else are you going to put k8s on your CV? :-P
Dumbledumb 9 hours ago|||
Because I think precisely the indie hacker community is not as keen to default to the big-tech stacks, because those are neither indie, nor hack-y :)
KronisLV 10 hours ago||
> I use Linode or DigitalOcean. Pay no more than $5 to $10 a month. 1GB of RAM sounds terrifying to modern web developers, but it is plenty if you know what you are doing.

If you get one dedicated server for multiple separate projects, you can still keep the costs down but relax those constraints.

For example, look at the Hetzner server auction: https://www.hetzner.com/sb/

I pay about 40 EUR a month for this:

  Disk: 736G / 7.3T (11%)
  CPU: Intel Core i7-7700 @ 8x 4.2GHz [42.0°C]
  RAM: 18004MiB / 64088MiB
I put Proxmox on it and can have as many VMs as the IO pressure of the OSes will permit: https://www.proxmox.com/en/ (I cared mostly about storage so got HDDs in RAID 0, others might just get a server with SSDs)

You could have 15 VMs each with 4 GB of RAM and it would still come out to around 2.66 EUR per month per VM. It's just way more cost efficient at any sort of scale (number of projects) when compared to regular VPSes, and as long as you don't put any trash on it, Proxmox itself is fairly stable, being a single point of failure aside.

Of course, with refurbished gear you'd want backups, but you really need those anyways.

Aside from that, Hetzner and Contabo (opinions vary about that one though) are going to be more affordable even when it comes to regular VPS hosting. I think Scaleway also had those small Stardust instances if you want something really cheap, but they go out of stock pretty quickly as well.

nchmy 3 hours ago||
Agreed. Though, now that hetzner has increased pricing, OVH is quite competitively priced and has some newer hardware available.
doubleorseven 10 minutes ago||
every time i want to put something in my dishwasher i pray to god it's not full and clean. same with OVH, prayer-wise.
utopiah 4 hours ago|||
Why VMs over containers?
KronisLV 1 hour ago||
Mostly to have stronger separation, I'm sure the person who prefers VM-per-project also has their own reasons.

I just have a few large VMs, each a different environment with slightly different ways of treating them - the prod ones get more due diligence and care, whereas in all of the dev ones (including where I host Gitea, Woodpecker CI, Nextcloud, Kanboard, Uptime Kuma etc.) I mess around with the configuration and do restarts more often. I personally used to run a Docker Swarm cluster, but now just use Docker Compose with Ansible directly, still multiple stacks per each of those servers. Dead simple.

So my setup ended up being:

  * VPS / VMs - an environment, since don't really need replication/distributed systems at my scale
  * container stack (Compose/Swarm) - a project, with all its dependencies, though ingress is a shared web server container per environment
  * single container - the applications I build, my own are built on top of a common Ubuntu LTS base more often than not, external ones (like Nextcloud and tbh most DBs) are just run directly
Works very well, plus containers allow me to easily have consistent configuration management, networking, resource limits and persistent storage.
compounding_it 9 hours ago|||
What do you do about IPv4? Do you also use a routing VM to manage all that?

It's very interesting how people rent large machines and run a hypervisor on them. I'm wondering if VPS licenses have any clauses preventing this at commercial scale.

mbesto 1 hour ago|||
Why not just Nginx Proxy Manager? It solves both the proxy issue and TLS/SSL.

https://nginxproxymanager.com/

deniska 4 hours ago||||
I help my dad run a Proxmox setup on a server he got from a local Craigslist analog and put in a colocation datacenter. It only uses a single public IP. All VMs are in a "virtual intranet", and the host itself acts like a router (giving local IP addresses to VMs via dnsmasq, routing VM internet access via NAT, forwarding specific outside ports to specific VMs). For example, ports 80 and 443 are given to a dedicated "nginx VM" which then routes each request to a specific VM depending on the hostname.
KronisLV 7 hours ago||||
Hetzner has some docs: https://docs.hetzner.com/robot/dedicated-server/ip/additiona...

Since I only needed about 3 VMs (though each being a bit beefier, running containers on them, a web server sitting in front of those with vhosts as ingress), I could give each VM its own IPv4 address and it didn’t end up being too expensive for my use case. Would be a bit different for someone who wants many small VMs.

hkpack 6 hours ago|||
There are security benefits of not having public IPs on every VM.

I assign a few VMs public IPs and use them as ingress / SSL termination / load balancers for my workloads running on VMs with only internal IPs.

I personally use kvm with libvirt and manage all these with Ansible.

DeathArrow 1 hour ago||
Wouldn't it be easier and more efficient to just run Docker containers?
sbarre 1 hour ago||
It depends on what you're doing. Proxmox gives you the flexibility to figure it out as you go.

If you have a plan from the start and you know what you'll need and you're pretty confident it won't change, then sure.

If you want a box that you can slice and dice however you want (VMs, containers, etc) then something like Proxmox might be worth it.

elwebmaster 7 minutes ago||
Can OP write another article focusing on the revenue side - how to actually bring in $10K MRR? Forget about the tech stack; AI can solve that.
gobdovan 11 hours ago||
Nice list! I'd say the SQLite with WAL is the biggest money saver mentioned.

One note: you can absolutely use Python or Node just as well as Go. Hetzner offers machines with 4GB RAM, 2 CPUs, and 10TB of network traffic (then $1/TB egress) for $5.

Two disclaimers for VPS:

If you're using a dedicated server instead of a cloud server, just don't forget to back up your DB to a Storage Box often ($3/mo for 1TB, use rsync). It's good practice either way, but cloud instances seem more resilient to hardware faults. Also avoid their object store.

You are responsible for security. I've seen good devs skip basic SSH hardening and get infected by bots in under an hour. My go-to move when I spin up servers is a two-stage Terraform setup: first I set up SSH with only my IP allowed, then I set up Tailscale and shut down the public SSH entrypoint completely.

Take care and have fun!

jon-wood 7 hours ago||
Personally for backups I’d avoid using a product provided by the same company as the VM I’m backing up. You should be defending against the individual VM suffering corruption of some kind, needing to roll back to a previous version because of an error you made, and finally your VM provider taking a dislike to you (rationally or otherwise) and shutting down your account.

If you’re backing up to a third party losing your account isn’t a disaster, bring up a VM somewhere else, restore from backups, redirect DNS and you’re up and running again. If the backups are on a disk you can’t access anymore then a minor issue has just escalated to an existential threat to your company.

Personally I use Backblaze B2 for my offsite backups because they’re ridiculously cheap, but other options exist and Restic will write to all of them near identically.

joelthelion 7 hours ago|||
> You are responsible for security. I've seen good devs skip basic SSH hardening and get infected by bots in under an hour. My go-to move when I spin up servers is a two-stage Terraform setup: first I set up SSH with only my IP allowed, then I set up Tailscale and shut down the public SSH entrypoint completely.

Note that you don't need all of that to keep your SSH server secure. Just having a good password (ideally on a non-root account) is more than enough.

chillfox 5 hours ago|||
Disable password auth and go with key based, it's easier and more secure.
gobdovan 6 hours ago|||
I'd call it unnecessary exposure. Under both modern threat models and classic cybernetic models (check out the law of requisite variety), removing as much attack surface as possible is optimal. Disabling password auth in SSH in particular is infosec 101 these days. No need to worry about brute-force attacks, credential stuffing, or simple human error, which was the cause of all the attacks I've seen directly.

It's easy to add a small bit of config to Terraform to make your setup at least key-based.

t_mahmood 11 hours ago|||
About security, a wall-of-shame story:

Once I had a PostgreSQL db with the default password on a new VPS, and forgot to disable password-based login, on a server with no domain. It got hacked within a day and was being used as a bot server. And that was 10 years ago.

On a recently deployed server, I was getting SSH login attempts within an hour, and it didn't have a domain either. Fortunately, I've learned my lesson, and turned off password-based login as soon as the server was up and running.

Similar attempts once bogged my desktop down to a halt.

Having a machine open to the world is now very scary. Thank God services like Tailscale exist.

dwedge 9 hours ago||
Nothing would happen, ssh is designed to be open to the world. Using tailscale or a vpn to hide your IP is fine, but using tailscale ssh maybe not.
t_mahmood 3 hours ago||
Well, continuous attempts definitely bogged down my desktop pretty badly. Also, getting OOM on a 64GB machine multiple times a day is quite annoying.

And with one simple mistake, we're screwed.

ericpauley 1 hour ago||
If sshd is OOMing on 64GB something else is going on…
dwedge 9 hours ago|||
I need more info about devs getting infected over ssh in less than an hour. Unless they had a comically weak root password or left VNC I don't believe it at all
gobdovan 6 hours ago||
Yes, <1h was a weak root password. All attacks I've seen directly were always user error. The point is effectively removing attack surfaces rather than enhancing security in needlessly exposed internet-facing protocols.
dwedge 5 hours ago||
It must have been comically weak, like "root", "password" or something like that
selcuka 11 hours ago|||
> Nice list! I'd say the SQLite with WAL is the biggest money saver mentioned.

Funny you said that. I migrated an old Django web site to a slightly more modern architecture (Docker Compose with uvicorn instead of bare-metal uWSGI) the other day, and while doing that I noticed that it doesn't need PostgreSQL at all. The old server had it already installed, so it was the lazy choice.

I just dumped all data and loaded it into an SQLite database with WAL and it's much easier to maintain and back up now.

gobdovan 11 hours ago||
Yep, it literally is a one-file backup. And at runtime it's so much faster for apps where write serialisation is acceptable.
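And you don't even have to stop the app to take that backup: SQLite ships an online backup API that snapshots a live WAL database. A minimal sketch using Python's built-in sqlite3 module (file names are illustrative):

```python
import sqlite3

# A live application database in WAL mode.
src = sqlite3.connect("app.db")
src.execute("PRAGMA journal_mode=WAL")
src.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
src.execute("INSERT INTO users (name) VALUES ('alice')")
src.commit()

# Online backup: copies a consistent snapshot into backup.db,
# even while other connections keep writing to app.db.
dst = sqlite3.connect("backup.db")
src.backup(dst)

print(dst.execute("SELECT count(*) FROM users").fetchone())  # (1,) on a fresh database
dst.close()
src.close()
```

For off-site copies you'd then rsync (or Litestream) the single backup file.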
bilinguliar 2 hours ago|||
Sqlite + Litestream for backups.
InfraScaler 9 hours ago|||
Does WAL really offer multiple concurrent writers? I know little about DBs and I've done a couple of Google searches and people say it allows concurrent reads while a write is happening, but no concurrent writers?

Not everybody says so... So, can anyone explain what's the right way to think about WAL?

gobdovan 9 hours ago|||
No, it does not allow concurrent writes (with some exceptions if you get into it [0]). You should generally use it only if write serialisation is acceptable. Reads and writes are concurrent except for the commit stage of writes, which SQLite tries to keep short but is workload- and storage-dependent.

Now this is a more controversial take and you should always benchmark against your own traffic projections, but:

consider that if you don't have a ton of indexes, the raw throughput of SQLite is so good that for many access patterns you'd already have to shard a Postgres instance before SQLite's single-writer limitation became the bottleneck.

[0] https://www.sqlite.org/src/doc/begin-concurrent/doc/begin_co...
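For concreteness, a sketch of the "application manages the write queue" pattern in Python's built-in sqlite3 - WAL mode plus an app-level lock so there is exactly one writer (the file name, pragma values, and key-value schema are all illustrative):

```python
import sqlite3
import threading

DB = "wal_demo.db"

def connect():
    conn = sqlite3.connect(DB, check_same_thread=False)
    conn.execute("PRAGMA journal_mode=WAL")    # readers no longer block the writer
    conn.execute("PRAGMA busy_timeout=5000")   # wait on lock contention instead of erroring
    conn.execute("PRAGMA synchronous=NORMAL")  # common WAL-mode durability/speed tradeoff
    return conn

writer = connect()
write_lock = threading.Lock()  # the app serializes writes, not SQLite

def write(sql, params=()):
    with write_lock, writer:   # one writer at a time; commits on exit
        writer.execute(sql, params)

reader = connect()  # reads run concurrently with the writer under WAL

write("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
write("INSERT OR REPLACE INTO kv VALUES (?, ?)", ("a", "1"))
print(reader.execute("SELECT v FROM kv WHERE k = 'a'").fetchone())  # ('1',)
```

In a real app you'd typically route all writes through a single queue or connection pool of size one, which is exactly what the lock stands in for here.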

pixelesque 9 hours ago|||
No it doesn't - it allows a single writer and concurrent READs at the same time.
InfraScaler 9 hours ago||
Thanks! Even I run SQLite in "production" (is it production if you have no visitors?) with WAL mode enabled, but I had to work around concurrent writes, so I was really confused. I may have misunderstood the comments.
yomismoaqui 9 hours ago||
Writes are super fast in SQLite even if they are not concurrent.

If you were seeing errors due to concurrent writes, you probably need to adjust busy_timeout.
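A tiny sketch of what that looks like in Python (file name and timeout value are arbitrary):

```python
import sqlite3

# Connection a: the writer. isolation_level=None = autocommit; manage txns by hand.
a = sqlite3.connect("demo.db", isolation_level=None)
a.execute("PRAGMA journal_mode=WAL")
a.execute("CREATE TABLE IF NOT EXISTS t (x)")

# Connection b: without busy_timeout, hitting a locked database raises
# "database is locked" immediately; with it, SQLite retries internally.
b = sqlite3.connect("demo.db", isolation_level=None)
b.execute("PRAGMA busy_timeout=5000")  # retry for up to 5000 ms

a.execute("BEGIN IMMEDIATE")           # a takes the write lock
a.execute("INSERT INTO t VALUES (1)")
a.execute("COMMIT")                    # ...and releases it

# Had b attempted this while a still held the lock, busy_timeout would
# make it wait for the lock instead of erroring out right away.
b.execute("INSERT INTO t VALUES (2)")
print(b.execute("SELECT count(*) FROM t").fetchone())  # (2,) on a fresh database
```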

InfraScaler 5 hours ago||
Thanks I'll have a look. For now I just had a sane retry strategy. Not that I have any traffic, mind you :-)))
egwor 8 hours ago|||
First step is to get ssh setup correctly, and second step is to enable a firewall to block incoming connections on everything except the key ports (ssh but on a different port/web/ssl). This immediately eliminates a swathe of issues!
bornfreddy 7 hours ago||
Also use fail2ban. If nothing else, it decreases the amount of junk in your logs.
asymmetric 10 hours ago|||
> Also avoid their object store.

Curious as to why you say this. I’m using litestream to backup to Hetzner object storage, and it’s been working well so far.

I guess it's probably more expensive than just a storage box?

Not sure but I also don’t have to set up cron jobs and the like.

gobdovan 9 hours ago||
Historical reliability and compatibility. They claimed to be S3 compatible, but required deprecated S3 SDKs, and advanced S3 features are unimplemented (at least they document it [0]). There were constant timeouts on object creation and updates, very slow speeds, and overall instability. Even now, if you check out r/hetzner on Reddit, you'll see it's a reliability nightmare (but take it with a grain of salt; nobody reports a lack of problems). Not as relevant for DB backups, but billing is dumb: even if you upload a 1KB file, they charge you for 64KB.

At least with Storage Box you know it's just a dumb storage box. And you can SSH, SFTP, Samba and rsync to it reliably.

[0] https://docs.hetzner.com/storage/object-storage/supported-ac...

nurgalive 8 hours ago||
When creating a VPS on Hetzner, it lets you configure key-only auth by default.
jimnotgym 5 hours ago||
From memory this is the case on DO as well
leaves83829 1 hour ago||
I quite like the websequencediagram. Looks like a cool product!

He's mainly talking about the tech implementation, which is the easy part.

The hard part of creating a business is finding a problem valuable enough to solve and reaching the users who need that problem solved. That's where the real value is.

geetee 1 hour ago|
This is the most frustrating problem I have. I do my 40 hours per week, play with my kids, relax with the wife, and play some video games. I don't really have any other problems besides not enough time in the day. And yet when I learn about some domain specific problem, it is blindingly obvious.
f311a 11 hours ago||
There are zero reasons to limit yourself to 1GB of RAM. By paying $20 instead of $5 you can get at least 8GB of RAM. You can use it for caches or a database that supports concurrent writes. The $15 difference won't make any financial difference if you are trying to run a small business.

Thinking about how to fit everything on a $5 VPS does not help your business.

jampekka 11 hours ago||
$15 is not exactly zero, is it? If you don't need more than 1GB, why pay anything for more than 1GB?

I recall running LAMP stacks on something like 128MB about 20 years ago and not really having problems with memory. Most current website backends are not really much more complicated than they were back then if you don't haul in bloat.

bdelmas 10 hours ago|||
It is. With 10k MRR it represents 0.15% of the revenue. Having the whole backend costing that much for a company selling web apps is like it’s costing zero.
jvuygbbkuurx 9 hours ago|||
You probably don't make 10k MRR on day one. If you make many small apps, it can make sense to learn how to run things lean, to have 4x longer runway per app.
mlyle 3 hours ago|||
The runway is going to be your time and attention span, not $10/mo.

I don't know what you value your time or opportunity cost as... but the $10/mo doesn't need to save very many minutes of your time deferring dealing with a resource constraint or add too much reliability to pay off.

If resource limitations end up upsetting one end user, that costs more than $10.

jampekka 2 hours ago||
This assumes you have to spend any time or attention worrying. 1GB is plenty of memory for backend type stuff.

And most VPSs allow increasing memory with a click of a button and a reboot.

r0fl 4 hours ago|||
Overspending for the sake of overspending is not smart in life or business.
elAhmo 9 hours ago||||
Saving 15 USD on 10k+ USD MRR is ridiculous.
cbdevidal 8 hours ago|||
Saving 15 USD on 0 USD MRR while still building the business is priceless. Virtually infinite runway.
jeremyjh 1 hour ago||
Only if your time is worthless and someone else is paying your living expenses.
compounding_it 9 hours ago|||
Given how much revenue depends on the experience of a web app and its loading times, I'd be happy to pay $100 a month on that revenue if it means I don't have to sacrifice a second of additional loading time, no matter how clever my optimizations would have to be.
kijin 7 hours ago|||
That 1 second of loading time probably has more to do with heavy frontends and third-party scripts, than the backend server's capacity.

$100 is peanuts to most businesses, of course. But even so, I'd rather spend it on fixing an actual bottleneck.

r0fl 4 hours ago|||
Not all businesses depend on milliseconds being shaved off the loading times

For example: Ticketmaster makes a ton of money and their site is complete dogshit.

kaliqt 10 hours ago|||
There’s a happy medium and $5 for 1GB RAM just isn’t it.
cbdevidal 8 hours ago|||
Be sure to inform the author of the article who is currently making money on his 1GB VPS that he hasn’t found a happy medium
lijok 10 hours ago|||
Not a very strong argument now is it?
pas 10 hours ago||
if the project already has positive revenue then arguably the ability to capture new users is worth a lot, which requires acceptable performance even when a big traffic surge is happening (like a HN hug of attention)

if the scalability is in the number of "zero cost" projects to start, then 5 vs 15 is a 3x factor.

100ms 10 hours ago|||
NVME read latency is around 100usec, a SQLite3 database in the low terabytes needs somewhere between 3-5 random IOs per point lookup, so you're talking worst case for an already meaningful amount of data about 0.5ms per cold lookup. Say your app is complex and makes 10 of these per request, 5 ms. That leaves you serving 200 requests/sec before ever needing any kind of cache.

That's 17 million hits per day in about 3.9 MiB/sec sustained disk IO, before factoring in the parallelism that almost any bargain bucket NVME drive already offers (allowing you to at least 4x these numbers). But already you're talking about quadrupling the infrastructure spend before serving a single request, which is the entire point of the article.
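A quick back-of-envelope in Python, under assumed parameters (100 us per random read, the worst case of 5 IOs per lookup, 10 lookups per request, 4 KiB pages, and interior B-tree pages cached so roughly one leaf page per lookup actually hits the disk):

```python
# Back-of-envelope check of the latency/throughput claims above.
NVME_READ_S = 100e-6        # 100 us per random NVMe read
IOS_PER_LOOKUP = 5          # worst case of "3-5" random IOs per point lookup
LOOKUPS_PER_REQUEST = 10
PAGE_BYTES = 4096           # default-ish SQLite page size

latency_per_request = LOOKUPS_PER_REQUEST * IOS_PER_LOOKUP * NVME_READ_S  # 5 ms
requests_per_sec = 1 / latency_per_request                                # 200
hits_per_day = requests_per_sec * 86_400                                  # ~17M

# Sustained disk bandwidth if ~1 leaf page per lookup misses the cache.
mib_per_sec = requests_per_sec * LOOKUPS_PER_REQUEST * PAGE_BYTES / 2**20

print(f"{requests_per_sec:.0f} req/s, {hits_per_day/1e6:.1f}M hits/day, "
      f"{mib_per_sec:.1f} MiB/s")
# -> 200 req/s, 17.3M hits/day, 7.8 MiB/s
```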

f311a 9 hours ago|||
You won't get such numbers on a $5 VPS; the SSDs used there are network-attached and shared between users.
100ms 9 hours ago||
Not quite $5, but a $6.71 Hetzner VPS

    # ioping -R /dev/sda

    --- /dev/sda (block device 38.1 GiB) ioping statistics ---
    22.7 k requests completed in 2.96 s, 88.8 MiB read, 7.68 k iops, 30.0 MiB/s
    generated 22.7 k requests in 3.00 s, 88.8 MiB, 7.58 k iops, 29.6 MiB/s
    min/avg/max/mdev = 72.2 us / 130.2 us / 2.53 ms / 75.6 us
100ms 8 hours ago|||
Rereading this, I have no idea where 3.9 MiB/sec came from, that 200 requests/sec would be closer to 8 MiB/sec
nlitened 9 hours ago|||
> There are zero reasons to limit yourself to 1GB of RAM

There is a good reason: teaching yourself not to over-engineer, over-provision, or overthink, and instead to focus on generating business value to customers and getting more paying customers. I think it’s what many engineers are keen to overlook behind fun technical details.

locknitpicker 9 hours ago||
> There is a good reason: teaching yourself not to over-engineer, over-provision, or overthink, (...)

This is specious reasoning. You don't prevent anything by adding artificial constraints. To put things in perspective, Hetzner's cheapest vCPU plan comes with 4GB of RAM.

sgarland 7 hours ago||
If I give you a box with 1 GiB of RAM, you are literally forced to either optimize your code to run in it, or accept the slowdown from paging. How is this specious?
filleduchaos 1 hour ago|||
Why not a box with 128MB of RAM then?
locknitpicker 5 hours ago|||
> If I give you a box with 1 GiB of RAM, you are literally forced to either optimize your code to run in it, or accept the slowdown from paging. How is this specious?

It is specious reasoning. Self-imposing arbitrary constraints doesn't make you write good, performant code. At most it makes your apps run slower because they will needlessly hit your self-imposed arbitrary constraints.

If you put any value on performant code, you just write performance-oriented code, regardless of your constraints. It's silly to pile on absurd constraints and expect performance to be an outcome. It's like going to the gym to work out with a hand tied behind your back and expecting this silly constraint to somehow improve the outcome of your workout. Complete nonsense.

And to drive the point home, this whole concern is even more perplexing as you are somehow targeting computational resources that fall below free tiers of some cloud providers. Sheer lunacy.

sgarland 4 hours ago|||
Constraints provide feedback. Real-world example from my job: we have no real financial constraints for dev teams. If their poor schema or query design results in SLO breaches, and they opt to upsize their DB instead of spending the effort to fix the root problem, that is accepted. They have no incentive to do otherwise, because there are no constraints.

I think your analogy is flawed; a more apt one would be training with deliberately reduced oxygen levels, which trains your body to perform with fewer resources. Once you lift that constraint, you’ll perform better.

You’re correct that you can write performant code without being required to do so, but in practice, that is a rare trait.

ufocia 4 hours ago|||
The gym analogy fails. Isolation exercises are almost exactly what you described. They target individual muscles to maximize hypertrophy, i.e. "improve the outcome of your workout."
littlecranky67 10 hours ago|||
I think we have to re-think and re-evaluate RAM usage on modern systems that use swapping with CPU-assisted page compression and fast, modern NVMe drives.

The Macbook Neo with 8GB RAM is a showcase of this: people underestimated its capabilities before launch due to the low amount of RAM, yet after release all the reviewers pointed to a larger set of capabilities than people had predicted, without any issues.

f311a 9 hours ago|||
$5 VPS disks are nowhere near a MacBook's; they are shared between users and often network-attached. They don't sit close to the CPU.
sgt 9 hours ago||||
Also, macOS is generally exceptional at caching and making efficient use of the fast solid state chips.
ufocia 4 hours ago|||
Memory compression sounds like going back to DOS days. I think we're better off with writing tighter more performant code with no YAGNI. Alas, vibe coding will probably not get us there anytime soon.
jlokier 19 minutes ago||
Apple laptop CPUs have hardware memory compression and exceptionally high memory bandwidth for a CPU, and with their latest devices, very high storage bandwidth for a consumer SSD, so the equation is very different from the old DOS days.
AussieWog93 9 hours ago|||
Or better yet, go with a euro provider like Hetzner and get 8GB of RAM for $10 or so. :)

Even their $5 plan gives 4GB.

walthamstow 7 hours ago|||
I've been using Linode for years and just yesterday went to use Hetzner for a new VPS, and they wanted my home address and passport. No thanks.
arcanemachiner 8 hours ago|||
They also have servers in the US (east and west coast).
AussieWog93 7 hours ago||
I don't think they offer their cheapest options (CX*) outside of Germany/Finland though. Singapore and USA are a bit pricier.
layer8 5 hours ago|||
The reason would be YAGNI. Apparently 1GB doesn’t constitute an actual limit for OP’s use case. I’m sure he’ll upgrade if and when the need arises.
TiredOfLife 10 hours ago|||
Hetzner, OVH and others offer 4-8gb and 2-4 cores for the same ~5$
afro88 10 hours ago|||
It doesn't look like they think about how to make it fit though. They just use a known good go template
pier25 6 hours ago|||
Where can you get 8GB for $20?
ethbr1 6 hours ago|||
> There are zero reasons to limit yourself to 1GB of RAM. By paying $20 instead of $5 you can get at least 8gb of RAM.

In my head, I call this the 'doubling algorithm'.

If there's anything that's both relatively cheap and useful, but where "more" (either in quality or quantity) has additional utility, 2x it.

Then 2x it again.

Repeat until either: the price change becomes noticeable or utility stops being gained.

Tl;dr -- saving order-of single dollars is rarely worth the tradeoffs.

wackget 4 hours ago||
> "There are zero reasons to limit yourself to 1GB of RAM"

> Immediately proposes alternative which is literally 4x the cost.

Leomuck 2 hours ago||
I do agree that the overall tendency towards cloud has made things much more complicated and expensive than they need to be in many cases. Cloud has its place, but so do simple server instances. Many projects won't reach any kind of scale that would exceed the capabilities of a medium-sized VPS. We're running a page with 600k users at work that could easily fit on a 30€ VPS. Instead, we moved to AWS and are now paying 800€ for it. No benefits whatsoever.

So yea, stick with what worked for decades if you don't see a reason not to. Also, I remember reading that StackOverflow runs on a bunch of super powerful root servers?

elias1233 1 hour ago||
With the Oracle Cloud Free Tier you can do this for a whopping $0/month. They give you a 4 core ARM CPU and 24 GB RAM for free, plus 200 GB storage.
taffydavid 9 hours ago|
> I bought a GitHub Copilot subscription in 2023, plugged it into standard VS Code, and never left. I tried Cursor and the other fancy forks when they briefly surpassed it with agentic coding, but Copilot Chat always catches up.

> Here is the trick that you might have missed: somehow, Microsoft is able to charge per request, not per token. And a "request" is simply what I type into the chat box. Even if the agent spends the next 30 minutes chewing through my entire codebase, mapping dependencies, and changing hundreds of files, I still pay roughly $0.04.

> The optimal strategy is simple: write brutally detailed prompts with strict success criteria (which is best practice anyway), tell the agent to "keep going until all errors are fixed," hit enter, and go make a coffee while Satya Nadella subsidizes your compute costs.

Wow. I'll definitely be investigating this!

satvikpendem 5 hours ago||
People get banned for abusing this per-request strategy, so be careful. This guy was running super long prompts per request and was somehow surprised that he got banned.

https://old.reddit.com/r/GithubCopilot/comments/1r0wimi/if_y...

estetlinus 8 hours ago|||
The author refers to gpt 4o and sonnet 3.5 as SOTA. I’d take the AI tips with a grain of salt tbh. But I’d love it if it’s true
pontussw 4 hours ago||
It works with all models; some have a cost multiplier, like Opus 4.6 "charges" 3 requests per prompt, but it's still only for the prompts you send yourself - even if it works on the issue for hours. GPT-5.4 has no multiplier, i.e. costs $0.04 per prompt.

Worth noting however that they are starting to introduce rate limits lately so you might struggle to run multiple concurrent sessions, though this is very inconsistent for me. Some days I can run 3-4 sessions concurrently all day, other times I get rate limited if I run one non-stop..

taffydavid 9 hours ago||
Thanks for the downvote kind stranger. Not sure what I said to qualify
jodrellblank 4 hours ago||
You copypasted three paragraphs from the article and you contributed "wow".
More comments...