Posted by xngbuilds 20 hours ago
So while the content is in RAM on the Pi, a lot of the heavier lifting (TLS termination) is done elsewhere, which saves a ton of CPU load on the Pi.
On the one hand I get it, TLS is pretty heavy, and it makes sense to take advantage of a VPS or Cloudflare or however you want to do it.
But once you are spinning up a VPS, the question is ... why the Pi? The VPS in the article has less RAM, but more storage. If you're already doing TLS termination on the VPS (the most RAM intensive part), you might as well just do the whole shebang there.
I know this is all for fun, I'm just wondering -- is the Pi Zero really too slow to handle TLS, especially with an optimized TLS library? In this setup, the Pi is already directly exposed to the Internet anyway; there's no VPN in use. That ARM11 isn't "fast", but surely a 1 GHz ARM11 can handle an optimized TLS library serving some subset of TLS 1.2.
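For what it's worth, you can measure this directly on the Pi with OpenSSL's built-in benchmark (a rough sketch; actual numbers depend on your OpenSSL build and kernel):

```shell
# ChaCha20-Poly1305 suits the ARM11 (no AES hardware extensions),
# and handshake cost is dominated by the key exchange, so time both.
openssl speed -seconds 1 -evp chacha20-poly1305
openssl speed -seconds 1 ecdhp256
```

If the bulk-cipher throughput comfortably exceeds your uplink and the ECDH ops/sec beat your expected connection rate, TLS on the Pi itself is fine.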
What was supposed to be a cool achievement is rendered pointless when one of the key elements is offloaded elsewhere.
While I might argue that most people are probably hosting and running PHP on the same server, that's no longer the typical approach for custom software at this point.
Edit: No, the article mentions listening on port 80 at home. I thought they'd be SSH tunneling or something. That is unusual, but I guess for a static website it doesn't really matter.
It sorta does matter. Either the actual raspi does nothing of value or the traffic has value that should be protected.
Sure, I've heard the argument that public HTTP traffic doesn't need encryption, but if it's of any value then both parties have an interest in it being unmanipulated, uncensored, validated, or all of the above. Even if it's just preventing the ISP from injecting dumb ads.
Also, all web pages are served from RAM: modern OSes automatically cache files in the page cache on first access.
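You can watch the page cache do this on any Linux box (a sketch; the file path is just a throwaway placeholder):

```shell
# Make a ~5 MB test file, then read it twice: the first read may hit
# the disk, the second is served from the OS page cache in RAM.
dd if=/dev/zero of=/tmp/page-cache-demo bs=1M count=5 2>/dev/null
time cat /tmp/page-cache-demo > /dev/null   # possibly cold: disk
time cat /tmp/page-cache-demo > /dev/null   # warm: straight from RAM
rm /tmp/page-cache-demo
```

The second read is essentially a memcpy, which is why a static site on a Pi rarely touches the SD card after warm-up.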
I retired my 486 in ‘95 or thereabouts…
It had a second life doing stuff like delivering mail, handling IRC, serving web pages, and whatever else a few of us wanted from it. The performance was fine.
(The Pentium-ish machines stayed on desktop duty where GUIs devoured resources.)
Kind of irrelevant, since operating systems and web pages in the '90s had significantly smaller footprints; the web was mostly plain text back then. Windows XP with its GUI would run Max Payne on 128MB of RAM. You could do a lot back then, but you can't do modern stuff like that today with 128MB of RAM.
Yesterday I one-shotted several interactive pages that Qwen built out of straight HTML and JavaScript. I handed it my API (source code, not even a Swagger spec, via an MCP that Qwen wrote for me), asked for a frontend, and it delivered. One page at a time to keep context down, and I might've gotten lucky on the first draw, but after the first one I told it to make the next ones like the first.
Can't say I've had that experience with backend languages and frameworks, including writing that same API, but perhaps I'm off the beaten path with those, or perhaps there's a greater breadth of things to do vs. a narrower set of acceptance criteria? IDK.
Here I was sweating that I'd have to research and learn a current-day frontend framework. It felt like a magic wand using consumer-grade AI. HTML and plain old Javascript was plenty.
Tangent but apropos of other contemporary threads on HN, it puts a spin on supply chain threats. There's no NPM or anything, except perhaps whatever mysteries are baked into the model.
HTML, CSS, JavaScript, images.
In this case, they are static elements, which can even be cached locally to share more easily.
If someone wants a massive build system to render a static HTML page, that's on them, and their personal interpretation. Increasingly, and maybe more often than not, there is more than one way to get the same outcome.
The fact that there are hundreds of downloads for a single web page is up to the constructor of that page. Still, these things can be reasonably cached. For example, host it on the Pi, then put Cloudflare in front of it or something.
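Assuming nginx on the Pi (the thread doesn't say what server is in use; any server that can set cache headers works the same way), the "reasonably cached" part is just a couple of headers so the edge cache rarely has to come back to the Pi:

```nginx
# Hypothetical nginx snippet: let browsers and any CDN in front
# cache static assets for a month without revalidating.
location ~* \.(css|js|png|jpg|svg|ico)$ {
    expires 30d;
    add_header Cache-Control "public, immutable";
}
```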
The Pi Zero might not be for you, and it's an easy target to undermine. Which criticisms would go away if it were on a regular Pi?
Maybe you misunderstood. Which criticism did I make of the pi zero? I criticized present day SW.
I found a fun bug with it a couple years ago: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
It is still able to build software faster than it is released. It takes roughly a month to recompile the entire system :D
For the radio stuff I can just take the Pi, frontend, and a battery pack outside to test.
When I finally move to a place with proper fiber internet I'm going to be hosting several side projects on a handful of Pis.
For the ones saying the Pi can't handle TLS: that's just silly, it's trivial as well.
For the ones saying you need a VPS: how cloud-native are you people? You can just expose a port on your router (if you're brave enough) and have any dynamic DNS service point at the correct IP address.
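The dynamic-DNS half is a one-liner in cron (a sketch; the endpoint, hostname, and token are all placeholders -- every provider has its own URL format, but most look roughly like this):

```shell
#!/bin/sh
# Hypothetical dynamic-DNS updater: swap in your provider's real
# update URL and credentials. Run it from cron every few minutes.
HOST="home.example.com"
TOKEN="changeme"
curl -fsS "https://ddns.example.invalid/update?hostname=${HOST}&token=${TOKEN}" \
  || echo "update failed; check the URL/token for your provider"
```

Something like `*/5 * * * * /usr/local/bin/ddns-update.sh` in crontab keeps the hostname following your home IP.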
Running a mainstream website on an RPi Pico W is more advanced, but still not really challenging as long as the content is static.
The point of failure for all of these machines has been the SD card. They seem to last 4 years almost to the day. I suppose if I set up a RAMdisk they might last longer, but honestly, for the price of an SD card it’s not really worth my time.
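Short of a full RAMdisk, moving the write-heavy paths to tmpfs stretches card life considerably (a sketch; sizes are arbitrary and anything on tmpfs won't survive a reboot):

```
# /etc/fstab additions: keep log and temp-file churn off the SD card
tmpfs  /var/log  tmpfs  defaults,noatime,size=32m  0  0
tmpfs  /tmp      tmpfs  defaults,noatime,size=64m  0  0
```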
Today, you can run mailcow/mailu with all the options on a relatively modest vps. I'm on a cable provider that locks down residential customers and charges over 2x as much for business, so it's cheaper to use VPSes.
On RPi, I've mostly opted to use SSD + USB adapters, as they've been significantly more reliable than SD. There are lots of cases that make this configuration a breeze. That said, I've mostly been running mini PCs since COVID, when the RPi got to be more expensive all-in and slower.
OTOH, I corrupted a card by turning off the Pi in the middle of a write.
I’m scared of self hosting a mailbox.
I don't send a lot of emails from it, but the ones I do are delivered.
There are a few open-source one-command mail server deployment solutions that do all of the heavy lifting for you. Some of them might even be pretty good. The problem with those is that if you don't understand how your mail server is put together, you're completely stuck if it breaks.
There are "industrial" SD cards which should last considerably longer; you can look it up, as a few people have done their own testing. They can be slower, but that shouldn't be a blocker for an email server on a Pi.
A Pi with Ethernet can truly boot diskless via TFTP. And later Pi4 and Pi5 can even boot directly from the internet by getting their initial "boot.img" FAT partition via HTTP from anywhere. That would be diskless.
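The TFTP side of that can be sketched with dnsmasq in proxy-DHCP mode (an assumption about tooling; the subnet and TFTP root are placeholders, and your existing router keeps handing out addresses):

```
# Minimal dnsmasq config for Pi network boot (proxy mode)
port=0                            # disable DNS, DHCP/TFTP only
dhcp-range=192.168.1.255,proxy    # adjust to your LAN's broadcast
log-dhcp
enable-tftp
tftp-root=/srv/tftp               # holds the Pi's boot files
pxe-service=0,"Raspberry Pi Boot"
```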
A better way would be to boot from an NVMe SSD; Ethernet boot has a dependency on the network. What if you need to debug when the network is down, or debug errors/bugs in the network itself?
I run my micro-homelab on a Pi Zero from 2018. It's behind Cloudflare Tunnels. It runs the apps I need on DietPi within 180MB, and its uptime is ~8 months.