ssh to applicant@register.public.outband.net
The instructions at https://www.public.outband.net note that it's IPv6-only.
It is pretty pointless; nobody needs or wants a Unix shell account in this day and age. But I had fun setting it up: it started as an exercise to see what a shared multi-user Postgres install would look like and got a little out of control. My current project is getting a rack of Raspberry Pis (6 of them in a cute little case) hooked in as physical application nodes.
I do. But I do not need just any Unix shell account, I need old and weird ones! I develop and maintain a portable utility (rlwrap) that is aimed at users of older software, who are often also using older or even obsolete systems.
For years, I used Polarhome (http://www.polarhome.com/) as a "dinosaur zoo" of obsolete systems (thanks, Zoltan!). For every new release, building it on a creaky Solaris or HP-UX machine would expose a few bugs.
Because older systems are being replaced by (much more uniform) newer ones, there is a diminishing need for such extreme portability. This is also the reason that Polarhome closed in 2022.
In spite of this, testing on many different systems improves general code quality, even for users of mainstream systems like Linux, BSD, or macOS.
Of course, I could set up a couple of virtual machines, but that is a lot of hassle, especially for machines with uncommon processor architectures.
> I do. But I do not need just any Unix shell account, I need old and weird ones! I develop and maintain a portable utility (rlwrap) that is aimed at users of older software
Thank you, personally. I've used it in several contexts, not just on old systems; for example, rlwrap is recommended with Clojure (okay, perhaps that's a comparatively small audience).
a powerpc xserve (running OSX server)
a sparc box (on solaris)
an alpha box (on either VMS or Digital Unix)
a pa-risc box (hp-ux)
a modern power box (Rocky or AIX)
an itanium box (running either VMS or NT depending on what the alpha is running)
a pi cluster (plan 9)
and a commodity x86 server (running OpenBSD, FreeBSD, Debian, Hurd, Redox, SerenityOS, ReactOS, and AROS).
and make a MOAP (mother of all pubnixes). If anyone has any hardware they'd like to donate, get in contact :)
I have a Sparc, Alpha, NextStation, and SGI in my collection. I'd like to add an AIX system, ideally with PowerVM/LPAR support. I used to work at a place that built everything on AIX (this was 20+ years ago) and the virtualization functionality was pretty neat.
Unless it's a super fun hobby for you, I wouldn't plan on this being very fun after the first dozen random crashes.
Maybe in the modern age someone could make a "polarhome in a box" that offers a similar gamut of systems, but via preconfigured emulators that you can simply download and run.
Until now, I have used QEMU on Linux (qemu-system-aarch64, or user-mode QEMU registered via binfmt-misc) to emulate e.g. a Raspberry Pi running arm64. This works very well, but for e.g. Solaris or HP-UX there is the extra hurdle of getting hold of bootable media that will not freak out in the unfamiliar surroundings of a QEMU virtual machine.
I have never tried, and it is possible that I overestimate the difficulty...
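For reference, the two QEMU modes mentioned above can be sketched as follows. This is a hedged outline, not a tested recipe: the package names are Debian's, and the image, kernel, and rootfs paths are placeholders.

```shell
# Sketch only: emulating arm64 on an x86_64 Linux host. Filenames
# (vmlinuz, initrd.img, rpi.img, /mnt/rpi-rootfs) are placeholders.

# Option 1: user-mode emulation. qemu-user-static registers arm64
# executables with binfmt-misc, so you can chroot straight into an
# arm64 root filesystem and run its binaries transparently:
sudo apt install qemu-user-static binfmt-support
sudo chroot /mnt/rpi-rootfs /bin/bash

# Option 2: full-system emulation with qemu-system-aarch64,
# booting a generic "virt" machine from an extracted kernel/initrd:
qemu-system-aarch64 \
  -M virt -cpu cortex-a72 -m 2048 \
  -kernel vmlinuz -initrd initrd.img \
  -append "root=/dev/vda2 console=ttyAMA0" \
  -drive file=rpi.img,format=raw,if=virtio \
  -nographic
```

Option 1 is what makes "it just runs" cross-architecture builds possible; option 2 is what you need when the guest OS expects to own the whole machine.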
KVM (x86 and x86_64): Linux, BSD, OSX, Hurd, Haiku, MSDOS, Minix, QNX, RTEMS, Xenix, Solaris, UnixWare, Windows 95 through 11.
QEMU (for non-x86): AIX 4, Linux (m68k, arm, sparc, powerpc, mips, riscv), OSX (ppc), Solaris 8 (sparc), SunOS 4.1.4 (sparc), Windows NT 4 (mips)
SIMH (for old DEC computers): NetBSD, VMS, Ultrix, RSX-11M, RT-11
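SIMH is driven by a plain-text command file read at startup. A minimal sketch for a MicroVAX 3900 (the disk image name and host NIC are placeholder assumptions) might look like:

```
; vax.ini -- minimal SIMH MicroVAX 3900 sketch; filenames are placeholders
set cpu 64m              ; give the simulated VAX 64 MB of RAM
set rq0 ra92             ; present an RA92 disk on MSCP controller rq0
attach rq0 netbsd.dsk    ; disk image holding the installed OS
attach xq eth0           ; bridge the DEQNA/DELQA NIC to a host interface
boot cpu
```

You'd start it with something like `vax vax.ini`; the `attach` commands are what connect host files and interfaces to the simulated devices.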
Some of them can be quite finicky to get to work. Xenix was especially hard.
Solaris 11 is quite easy to get running in QEMU/KVM though. You can download the media from Oracle.
The only real hardware I routinely run has either Debian Linux, macOS, or Raspberry Pi OS on it.
This is not true at all. I have been a member of SDF for over 15 years now, and I use it all the time. Most recently, HostPapa tried to tell me my SFTP issue was on my end, and I told them that I was able to recreate the problem from both the west coast and the east coast: my home on the east coast and SDF on the west coast. Finally they listened and fixed my issue... which was on THEIR side, not mine. I like having the ability to compute from different parts of the country; it lets me do things like that.
If you are not feeling like watching a long series, I recommend checking out Macross Plus, from the director of Cowboy Bebop and Samurai Champloo.
The series is known as Robotech in the USA. The original series is not legally available in the USA to my knowledge, but should be available on Japanese Blu-rays with English subtitles or on your favorite Linux ISO sharing website. The rest of the entries are on Disney+ or the aforementioned websites.
He's an absolutely kind soul who is deeply interested in all kinds of retro projects. I wish there were more folks like him in tech generally
(Disclaimer: I'm an exhibitor. So I'd love more attendees!)
Somehow I still remembered most of the shell syntax from a book I read about it, probably in 2001. Don't ask me how... I don't know either.
Got bored in about 10 minutes but still, another box checked off!
https://www.pearsonhighered.com/assets/samplechapter/0/2/0/5...
The slowest would be the 11/725, which was a cost-reduced 11/730 that had a reduced clock speed and half of the bus slots filled with epoxy to limit expansion. The 11/725 was so slow that using it was an act of masochism; it was slower than your 11/23+.
Those models were pretty rare, though. Even though they were cheaper than an 11/750, the performance drop from the 750 to the 730 was too severe to justify even the reduced cost. If that were all, then replacing PDP-11s in industrial applications might have saved it, but the 730 was still too expensive compared to the existing PDP-11 products, and the 725's limited expansion made it less attractive than those same PDP-11s. The PDP-11 thus outlived both the 725 and the 730.
The PDP series brought us Unix and GNU, and the VAX was one of the few machine lines capable of competing with IBM. DEC was the largest terminal manufacturer (they made the VT100 and VT220; if you've ever run a terminal emulator, chances are it's emulating one of those or a machine that did). CP/M (and by extension DOS) borrowed heavily from DEC's operating systems. DEC is very well known.
Side note: here's my workflow for running Plan 9 on Windows:
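The workflow itself is not quoted above. For context, one plausible setup (an assumption, not necessarily the commenter's) is to boot a 9front image, the actively maintained Plan 9 fork, under QEMU for Windows with the WHPX accelerator:

```shell
REM Sketch only: running 9front under QEMU on Windows (cmd.exe syntax).
REM Assumes QEMU for Windows is installed and the Hyper-V platform is
REM enabled; 9front.qcow2 is a placeholder for a locally installed image.
qemu-system-x86_64 ^
  -accel whpx ^
  -m 1024 ^
  -drive file=9front.qcow2,format=qcow2 ^
  -nic user,model=virtio-net-pci ^
  -vga std
```

Without `-accel whpx` the same command still works, just much slower under pure emulation.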
I regularly visit and enjoy reading the phlogs of their members as well.
Just a question to HN: should I wait longer and try again? Or should I simply publish the vulnerabilities somewhere? If so, where? It's the first time I've found a vulnerability on my own, and I'm not sure how to deal with it.
Their plate is already quite full and they operate a whole universe of services, so cut them some slack.
It's not an ordinary service exposed to the internet trying to turn a profit. They run SDF, two Mastodon instances, a mail server, a Git server, and they're trying to salvage and keep alive a living computer museum (SDF Vintage Systems), etc. etc.
I agree with you that the social downtime is bad. People just won't use the service.
SDF welcomed everyone openly during the initial Mastodon waves, so it was all very Eternal September.
If you're joining to make a spare account to participate with SDF people, awesome! But if you want it as your identity for all of Fedi, I think that would be a bad experience. I ended up getting my own MastoHost account for a while and it was a vastly better experience, until I burned out on Fedi.
SDF is a super fun place to experiment with Gopher though. I absolutely recommend getting your own Gopherhole on SDF. It's like the old Geocities days but in ASCII. (And make sure you grab Lagrange as your GUI Gopher / Gemini client. I liked Phetch as my terminal Gopher client.)
> We've completed our first phase of database clean up, thank you for your patience. The impact on performance was heavy, but it was a necessary step. All active users and their posts, profiles, connections and media will be migrated to the new servers. Once that has been completed, any remaining data will stay online for further migration and clean up. Our instance has seen nearly 10 years of constant daily operation, but we ran into a migration wall which held us back on 4.1.x. Now that it is deprecated, we will do our best to jump to the latest version rather than migrate through. Your support and patience has been greatly appreciated.

You can't have it both ways: if it's not a big deal, then he can publish it.
If you say "Don't publish", then you acknowledge that it's a big deal.
I say to GP: "Congrats for finding a shell escape, it's always a big deal. But don't publish it... Yet".
Give them a chance to fix it. But if they don't even answer the emails, even just to say "thx, we're busy, we can't fix it right now but will do", then at some point you just publish.
It doesn't take long to answer an email saying "thanks, we'll fix it eventually".
If they can't commit to a hard timeline of less than a few days, then publish. What happens next is not your fault - it was inevitable anyway.
Edit for clarity: This is just in general, not specifically SDF or small orgs or large orgs. The internet does not care about the difference. The internet just does not care period. Nobody is going to give anyone else any breaks, and especially not a botnet.
You can do a lot with S9 Scheme and the Unix API/syscalls it supports.
But the whole thing is: if you can escape as a non-verified user, then you can mass-automate it to do DDoS etc...
Perhaps just run "bash -c 'stress --cpu 64; echo fix your shell escape'" or something like that.
Some security practices sometimes feel like someone stabbing you just to prove you could be stabbed. Then they point at the wound and say: "See? You should be more careful."
Yes, the risk is real, but creating harm to demonstrate it isn't the same as protecting people.
If I ever experienced something like that, I'd be banning the person (or limiting their resources drastically) for 60 to 90 days to bring the impact of this matter to their attention.
Anything affecting users on a system is not harmless.
Very cool how they tried to move and preserve many of the Living Computer Museum's computers before Paul Allen's sister could sell them all off. https://wiki.sdf.org/doku.php?id=vintage_systems:lcml_collec...
I remember seeing the TOAD systems when I visited in 2016, long before they closed. It's very sad that people no longer get to experience computer history in person the same way.