
Posted by bumbledraven 15 hours ago

I am building a cloud (crawshaw.io)
849 points | 432 comments | page 5
tlb 11 hours ago|
I think clouds pay a huge abstraction penalty to allow tiny VMs. I guess it helps with onboarding and $10 personal VPNs. But I have never needed a fraction of a computer. I want to rent some number of full computers of various sizes, consisting of CPU, memory, and flash disk. Hetzner is closer than AWS, and I think/hope that’s what Crawshaw is aiming for.
phrotoma 9 hours ago||
Allow? I understood tiny VMs to be something (at least AWS) added to try to squeeze more utilization out of idle hardware.
tlb 8 hours ago||
I understand the appeal from AWS's perspective. Customer A pays for a 32 vCPU VM, which they run on 32-core hardware. Then they can also squeeze in customer B's 1 vCPU instance running a blog, and no one notices. Free money!

But I don't want to be either of those customers. It means the whole system has an extra layer of abstraction, so they can juggle VMs around. It's why you need slow EBS instead of just getting a flash drive in the same case as the CPU, with 0.01x the latency.

ButlerianJihad 9 hours ago||
The key to renting a fraction of a computer is scaling up. If I can rent 1/8th of a computer, I can also rent 3/8ths and 1/2 and then go to a full computer, if that capacity is necessary.

The key to scaling up is to have big-enough hardware on the backend. If Hetzner is renting out bare metal instances then they can only rent out the sizes that they have. If a cloud provider invests in really big single systems, they can offer fractions of those systems to multiple tenants, some of whom scale up to use the entire system, and some who don't. I think that is a win-win.

A fractional VM is also a fungible VM. If the tenant calls to spin up a certain size VM, then the backend can find suitable hardware for it from a menu of sizes. Smaller VMs can slot in anywhere there is room, not just on a designated bare-metal system.

A cloud provider is always going to want to maximize their rack space, wattage/heat, and resource usage. So they will invest in high-density systems at every chance. On the other hand, cloud tenants will have diverse needs, including some fraction of those big computers.
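The "smaller VMs can slot in anywhere there is room" argument is essentially bin packing. As a toy illustration (the 64-vCPU host size and the first-fit policy are assumptions for the sketch, not any provider's real scheduler):

```python
# Toy first-fit placement of fractional VMs onto big hosts.
# Host capacity and first-fit policy are illustrative assumptions.

def place_vms(vm_sizes, host_capacity=64):
    """Assign each VM (sized in vCPUs) to the first host with room,
    opening a new host when none fits. Returns the host index chosen
    for each VM and the free vCPUs left on each host."""
    hosts = []       # free vCPUs remaining on each open host
    placement = []   # host index assigned to each VM, in order
    for vm in vm_sizes:
        for i, free in enumerate(hosts):
            if vm <= free:
                hosts[i] -= vm
                placement.append(i)
                break
        else:  # no existing host had room: open a new one
            hosts.append(host_capacity - vm)
            placement.append(len(hosts) - 1)
    return placement, hosts

# A mix of small tenants plus one tenant taking a full machine:
placement, hosts = place_vms([8, 1, 32, 1, 64, 16])
# The small VMs all pack onto host 0; the 64-vCPU tenant
# gets host 1 to itself.
```

The point of the sketch is the fungibility claim: the 1- and 8-vCPU VMs land wherever there is slack, while the full-size tenant consumes a whole host, and both come out of the same pool.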

tee-es-gee 11 hours ago||
I will follow this one for sure. There are a few more companies with the extremely ambitious goal of "a better AWS", and I am interested in the various strategies they take to approach that goal incrementally.

A service offering VMs for $20 is a long way from AWS, but I see how it makes sense as a first step. AWS also started with EC2, but in a completely different environment with no competition.

dzonga 6 hours ago||
finally a cloud 'vendor' that understands that modern computers are fast.

if we go back to the principle that modern computers are really fast and SSDs are crazy fast, and we remove the extra cruft of abstractions, software will be easier to develop, and we wouldn't have people shilling 'agents' as a way to develop faster.

ultimately the bottleneck is our own thinking.

simple primitives, simpler thinking.

satnhak 8 hours ago||
AWS. Months of complex dev work to build using their CDK. Terrible disk speed. Frustrating permissions systems. Tiny deployments that take 30 minutes. Rollbacks that get stuck for hours. What you end up with is about 4 CPUs and 16 GB of RAM for $1000+ per month. No wonder Bezos could send his wife and Katy Perry on a jolly into space. The world's richest man, one IOP at a time.

For that money I can get 5 big bare metal boxes on OVH with fast SSDs, put k0s on them, fast deploy with kluctl, cloudflare tunnels for egress. Backups to a cheap S3 bucket somewhere. I'll never look at another cloud provider.

BirAdam 8 hours ago|
If you're using cloudflare tunnels, you don't even need to be on OVH. You could seriously host anywhere, like your own basement.
qaq 13 hours ago||
With LLMs there is no real dev-velocity penalty to using high-performance languages like, say, Rust. A pair of 192-core AMD EPYC boxes will have enough headroom for 99.9% of projects.
kennywinker 12 hours ago|
That’ll be true for the 0.1% of projects that were limited by the speed of their programming language. For the other 99.9% of projects, their vibe-coded Rust can fly and their database, network, or raw computation will still be the bottleneck.

(Percentages cited above are tongue-in-cheek, actual numbers are probably different)

_nhh 11 hours ago||
Just take a look at Hetzner Cloud. It's everything 99% of people need, with good pricing. Convert that UX to the terminal and you're done.
ianberdin 5 hours ago||
“Everything is shit. Believe me. We will do something better, just believe me.”

Jokes aside:

- k8s is an insane piece of software. The right tool for a big problem, not for your toys. Yes, it is crazy difficult to set up and manage. Then what?

- The cloud has bad and slow disks? BS. They have perfectly fast NVMe.

Something else? That's it.

Why am I so confident? I set up and managed Kubernetes for 2 years, so I have some experience. Do I still use it? Nope. Not the right tool for me. Ansible with some custom Linux tools fits me better.

I am also building my own cloud, though I say it less loudly: hosting for the websites on https://playcode.io. Yeah, it is hard and full of compromises: networking (yes, I want VMs in any region to talk to each other), disks and reliability, snapshots. And many bare-metal renters give only 1 Gbit/s, which is not fine, or charge much more for a 10 Gbit uplink. So it is easy to end up building something limited, unreliable, or non-scalable.

47872324 14 hours ago||
exe.dev. 111 IN A 52.35.87.134

52.35.87.134 <- Amazon Technologies Inc. (AT-88-Z)

crawshaw 8 hours ago||
Hello, author here.

Our exe.dev web UI still runs on AWS. We also have a few users left on our VM hosts there, as when we launched in December we were considering building on AWS. Now almost all customer VMs are on other bare metal providers or machines we are racking ourselves. We built our own GLB with the help of another vendor's anycast network. You can see that if you try any of the exe.xyz names generated for user VMs.

We would move exe.dev too, but we have a few compliance-sensitive customers going through it, so we need to get the compliance story right on our own hardware before we can. It is a little annoying being tied to AWS just for that, but very little of our traffic goes through them, so in practice it works.

skybrian 13 hours ago|||
Their first location (PDX) is on Amazon, I believe, and not accepting new customers. They’ve said it’s much more expensive for them than the others. Their other locations are listed here:

https://exe.dev/docs/regions

MagicMoonlight 13 hours ago|||
Well yes, because they needed high availability and flexibility and tons of features…

Hey wait a minute!

awhitty 14 hours ago||
"I am white labeling a cloud"
transitorykris 13 hours ago||
FTA “Hence the Series A: we have some computers to buy.”
import 14 hours ago||
The article doesn’t really say which fundamental problems will be solved, beyond fancy VM allocation. Nothing about hardware, networking, reliability, tooling, and such. Well, nice, good luck.
synack 10 hours ago|
Have we already forgotten about the NSA's "SSL added and removed here! :)" slide that Snowden showed us?

https://news.ycombinator.com/item?id=6641378

stingraycharles 10 hours ago|
I don’t understand the point you’re trying to make.

Cloud is bad?

synack 10 hours ago|||
Nevermind, I misread their HTTPS proxy documentation. Cloud is fine.