Posted by tradertef 21 hours ago

I run multiple $10K MRR companies on a $20/month tech stack (stevehanov.ca)
819 points | 466 comments | page 6
mperham 12 hours ago|
SQLite? Luxury! My servers use CSV files for the database. (It’s actually true)
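Tongue-in-cheek or not, a CSV file really can serve as a crude append-only datastore. A minimal Python sketch of the idea — the file name and schema below are invented for illustration, not anything from the comment:

```python
import csv
import os

DB_FILE = "users.csv"           # hypothetical datastore file
FIELDS = ["id", "name", "mrr"]  # hypothetical schema

if os.path.exists(DB_FILE):     # start fresh for the demo
    os.remove(DB_FILE)

def insert(row):
    """Append one record, writing a header row if the file is new."""
    is_new = not os.path.exists(DB_FILE)
    with open(DB_FILE, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow(row)

def query(predicate):
    """Full-table scan: the only query plan a CSV supports."""
    with open(DB_FILE, newline="") as f:
        return [r for r in csv.DictReader(f) if predicate(r)]

insert({"id": "1", "name": "acme", "mrr": "10000"})
insert({"id": "2", "name": "globex", "mrr": "500"})
big = query(lambda r: int(r["mrr"]) >= 10000)
```

The obvious trade-offs versus SQLite: no concurrent writers, no indexes, no types (everything comes back as a string), and every query is a full scan.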
ewams 5 hours ago||
How do you handle billing / payment processing?
stavros 16 hours ago||
Forget about the tech stack, how do I get multiple $10k MRR companies?
peter_retief 13 hours ago||
AWS is not good value for money. I do have a DO account, which is great, but my development is mostly hosted locally with tunnels from Cloudflare; it is remarkable how far you (I) can get with that setup.
lamasery 13 hours ago|
Last I saw, AWS has way better peering agreements than DO. We had lots of problems with terrible throughput and dropped packets for various clients (in several cities in North America, not just overseas or in the middle of nowhere) that vanished instantly on switching to AWS (including the overseas clients that were also having problems).

Unfortunately, this isn’t something that shows up on spec sheets when you’re choosing a service. :-/

Capricorn2481 12 hours ago||
Well where does it show up? This is the first I've heard of this. Any source?
lamasery 9 hours ago||
The source is that we used it, and that's what we saw: ~20% of clients on three continents (about half in North America) consistently had terrible connectivity to DO (not none, but really bad), and we spent a lot of time trying to fix it. It vanished through nothing but shifting that workload to AWS. It was clearly DO's peering network.

You probably won't see this unless both the following are true for your situation:

1) You have a workload that makes the issue noticeable. Long-lived connections and large transfer sizes make it more likely you'll notice. Loading 20 KB of static HTML over the connection likely won't seem to have any problems (unless you run repeated trials and network-analysis tools). Of course, modern websites can be pretty large...

2) Your users are long-term enough, and in close enough communication with you, that these issues can even be noticed in the first place. It also helps if they're technical. If you're not hearing the story from the other end of the line, all you see is a slow connection; it could be anything, and plenty of causes sit closer to the client's end.

So all e.g. an e-commerce site might see is a somewhat higher bounce rate than necessary (due to some fraction of their users experiencing the site like it's on a somewhat-jittery ISDN line) without even knowing they're leaving money on the table because they likely have no way of even being aware of the problem.

[EDIT] Yes, we tried shifting things around on DO's side in all kinds of ways to fix this. I'm quite sure it wasn't bad luck with our hardware draw, or a problem confined to one of their datacenters. It was something past the edge of their network.
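The kind of problem described above rarely shows up in a single request; it shows up in the tail of repeated trials. A toy sketch of that analysis in Python — the timing samples are invented for illustration (in practice you would collect them with curl, mtr, or similar tools):

```python
import statistics

def summarize(samples_ms):
    """Summarize repeated round-trip measurements to one endpoint."""
    samples = sorted(samples_ms)
    n = len(samples)
    p50 = samples[n // 2]
    p95 = samples[int(n * 0.95) - 1]
    return {
        "p50_ms": p50,
        "p95_ms": p95,
        "jitter_ms": statistics.stdev(samples),
        # A p95 far above the median is the signature of a lossy path:
        # most requests look fine, but the tail is terrible.
        "suspect_path": p95 > 3 * p50,
    }

# Invented samples: a healthy path vs. one with intermittent packet loss.
healthy = [42, 40, 44, 41, 43, 45, 42, 41, 40, 44]
lossy   = [45, 43, 980, 44, 1200, 42, 41, 870, 44, 43]

print(summarize(healthy)["suspect_path"])  # False
print(summarize(lossy)["suspect_path"])    # True
```

The point of the second sample set: the median is nearly identical to the healthy path, which is exactly why a single spot-check (or a 20 KB page load) won't reveal anything.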

ilikestarcraft 14 hours ago||
One thing that I noticed was the mention of Claude 3.5 Sonnet or GPT-4o as cutting-edge models when the blog was written 25 days ago. This sadly makes me suspect that it was written by an LLM rather than a person...
niedbalski 12 hours ago||
Truth has been spoken.
44za12 19 hours ago||
I read it as an article in defence of boring tech with a fancier/clickbaity title.

Here’s the more honest one I wrote a while back:

https://aazar.me/posts/in-defense-of-boring-technology

dvfjsdhgfv 18 hours ago|
While I agree with your points, this one could be more nuanced:

> Infrastructure: Bare Server > Containers > Kubernetes

The problem with recommending a bare server first is that bare metal fails. Usually every couple of years a component dies: a PSU, a controller, a drive. A bare-metal server is also more expensive than a VPS.

Paradoxically, a k3s distro with 3 small nodes and a load balancer at Hetzner may cost you less than a bare metal server and will definitely give you much better availability in the long run, albeit with less performance for the same money.
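As a back-of-the-envelope check on that cost claim — every price below is a hypothetical placeholder, not an actual Hetzner or dedicated-hosting quote:

```python
# Hypothetical monthly prices in EUR -- stand-ins for illustration,
# not real quotes from any provider.
SMALL_VPS = 5.0        # one small cloud node
LOAD_BALANCER = 6.0    # managed load balancer
BARE_METAL = 40.0      # entry-level dedicated server

k3s_cluster = 3 * SMALL_VPS + LOAD_BALANCER
dedicated = BARE_METAL

# The cluster can also tolerate one node failing; the single server cannot.
print(f"k3s cluster: EUR {k3s_cluster:.2f}/mo, bare metal: EUR {dedicated:.2f}/mo")
```

The structural point survives whatever the real prices are: three small nodes plus a load balancer is a sum of small line items, and availability comes from the node count, not from any single machine's hardware.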

sgarland 15 hours ago||
In 5 years of running 3x Dell R620s 24/7 - which were already 9 years old when I got them - I had two sticks of RAM have ECC errors, and one PSU fail. The RAM technically didn’t have to be replaced, but I chose to. The PSU of course had a hot spare, so the system switched over and informed me without issue.

IME, hardware is much more reliable than people think.

zmmmmm 19 hours ago||
Can anybody validate this GitHub Copilot trick for accessing Opus 4.6? Sounds too good to be true.
specproc 18 hours ago||
I'm not what I'd call a heavy user, but I've also mainly been using Copilot in VS Code on the basic sub.

You do get Opus 4.6, and it's really affordable. I usually go over my limits, but I've yet to spend more than 5 USD on the surcharges.

Not seen a reason to switch, but YMMV depending on what you're doing and how you work.

brushfoot 17 hours ago|||
Longtime happy Copilot user here. It's true.

The pricing is so good that it's the only way I do agentic coding now. I've never spent more than $40 in a month on Opus, and I give it large specs to work on. I usually spend $20 or so.

nesk_ 18 hours ago||
It is true, it's the official pricing of GitHub Copilot.
rzzzt 17 hours ago||
Why is GitHub sticking to per-request pricing when other providers have switched to per-token pricing for the high-performing models?
Supermancho 5 hours ago|||
More likely it's a loss leader for market capture. Not unusual for MSFT: Xbox One, razor-and-blades, etc.
sumedh 14 hours ago|||
Maybe MS wants people to use Copilot.
jjjggggggg 14 hours ago|
Where do you get your eh-trade.ca stock price data? Given the licensing fees, that seems like one of the greater challenges of bootstrapping anything with market data.