Posted by bitbrewer 17 hours ago
Neither is a fit: SDRAM was a Pentium/K6-era standard (PC66), and the DIMMs ran faster than a non-OC'd 486 bus, which on the common DX2 parts ran at half the clock of the CPU. The "natural fit" for a 486 would be FPM or EDO, if you wanted to be era-correct.
There were probably some off-the-wall 486 motherboards back then that supported SDR (post-1993...), but those would have been toward the very end of the 486's consumer life cycle. Hybrid boards did exist in the 486 era: some ran (or embedded) a 386 using FPM while offering an open 486 socket and the option, but not the requirement, to run EDO.
Anyway, this is someone's project, so they can do whatever the heck they want.
But a plain answer: VIA Eden boards. They still use a north/southbridge architecture, and they're from the mid-2000s.
It's just that modern Windows/Linux have discontinued the ability. Or, perhaps more precisely, the transitions were 16-to-32-bit and 32-to-64-bit, and you can't run 16-bit code on a 64-bit machine; either way it still boils down to "operating system."
By far the biggest issue, though, is that even the VIA Eden processor is significantly faster than a 486, and lots of software (especially games) from that era used no-op instruction loops for timing and timers. The result is things like The Incredible Machine's level timer running out in half a second or less.
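A minimal sketch of that era-typical pattern (hypothetical code, nothing from The Incredible Machine; the loop constant is made up, "calibrated" for one specific CPU):

    #include <stdio.h>

    /* Hypothetical delay constant: tuned once on the developer's
     * ~33MHz 486, then baked into the binary. */
    #define TICK_LOOPS 50000L

    static void delay_one_tick(void)
    {
        /* Classic busy-wait: assumes every iteration takes a fixed,
         * known amount of wall-clock time. volatile keeps the
         * compiler from deleting the otherwise-useless loop. */
        for (volatile long i = 0; i < TICK_LOOPS; i++)
            ; /* effectively a no-op */
    }

    int main(void)
    {
        for (int second = 0; second < 5; second++) {
            delay_one_tick();
            printf("in-game second %d elapsed\n", second);
        }
        return 0;
    }

Run that on a core that gets through the loop a few hundred times faster and every "second" of game time passes in a few milliseconds, which is exactly the half-second level timer described above.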
Linux isn't really relevant given the time frame.
Also, DOSBox is an emulator, whereas VMs run on the real hardware, no? I suspect a VM won't fix the "no-op loop for timing" issue: with modern processors' lowest clock being 600-800MHz before the core gets put into C6/C7, 30 years of IPC improvement, and the possibility of the CPU itself optimizing such loops away (I'm unsure for various reasons), I expect the UX of "just limit how many scheduler slices it gets" to be nasty.
They weren't even that bad considering how little power they needed.
Quite apart from the increased complexity, the most important difference is that there's a minimum speed as well as a maximum speed for modern DDR RAM, which means there's usually quite a narrow window of achievable clock rates when getting an FPGA to talk to DDR3.
I suspect that's why the author chose to use the DDR for video: It's usually easy to keep plain old SDRAM in lockstep with a soft-CPU, since you can run it at anything between 133MHz (sometimes even more) and walking pace, so there's no need to deal with messy-and-latency-inducing clock domain crossing.
Streaming video data in bursts into a dual-clock FIFO and consuming it on the pixel clock is a much more natural fit.
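As a toy illustration of why that works (a single-threaded C model with made-up numbers, not the author's actual design): bursts land on the memory clock, pixels leave on the pixel clock, and the FIFO's only job is to absorb the gap between the two.

    #include <stdio.h>

    /* All numbers are illustrative, not from the project. One loop
     * iteration = one pixel clock; a DDR burst is modeled as landing
     * all at once every BURST_PERIOD pixel clocks. */
    #define FIFO_DEPTH   512  /* FIFO capacity, in pixels          */
    #define BURST_LEN     64  /* pixels delivered per memory burst */
    #define BURST_PERIOD  50  /* pixel clocks between burst starts */

    int main(void)
    {
        int fill = FIFO_DEPTH;   /* pre-fill before scanout begins */
        int min_fill = fill;

        for (int cyc = 0; cyc < 100000; cyc++) {
            /* Memory side: accept a burst only if it fits (backpressure). */
            if (cyc % BURST_PERIOD == 0 && fill + BURST_LEN <= FIFO_DEPTH)
                fill += BURST_LEN;

            /* Pixel side: scanout consumes exactly one pixel per clock. */
            if (fill > 0)
                fill--;
            else
                puts("underrun: visible glitch on screen");

            if (fill < min_fill)
                min_fill = fill;
        }
        printf("worst-case FIFO occupancy: %d pixels\n", min_fill);
        return 0;
    }

As long as the average supply rate stays above one pixel per pixel clock and the FIFO can ride out the gaps between bursts, neither side ever needs to know the other's clock.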
And just today, I received the Intel Edison dev kit that I'd purchased on eBay.
The Galileo is a Quark X1000 SoC: a single P54C core. In-order, original 32-bit Pentium.
https://en.m.wikipedia.org/wiki/Intel_Galileo
The Edison is a modern system-on-module about the size of an SD card, but about 3x the thickness of one. It's far more capable: dual 64-bit Silvermont Atom cores, superscalar and out-of-order, plus an additional Quark core as a system monitor running an independent RTOS. There's also 4GB eMMC, 1GB RAM, WiFi, and Bluetooth on the module. It's quite a remarkable curiosity.
Ten years ago, Intel tried to catch up to ARM in tablets and smartphones, but it was already too late, and this entire segment of Intel was cancelled within a year or two.
https://en.m.wikipedia.org/wiki/Intel_Edison
Next up is building more recent Linux images for these via the Yocto Project and the now-cancelled Intel Board Support Packages (BSPs).
If you like low power tiny systems, there's a strange amount of fun to be had.
What's the smallest SoC you could design to run DOOM? What power envelope would that consume (excluding display/speakers/etc.)? At that size and (optimized) transistor count, what speeds could we realistically achieve?
What would a massively multicore version (GPU-style, with hundreds or more of these cores) run like?
Every time I see a project like this, these thoughts run through my head.
> What's the smallest SoC you could design to run DOOM?
Depending on your definition of "modern", more than you think has been done. Intel's Quark was basically a 486/Pentium hybrid, but fabbed on a fairly modern (at the time) process. While Quark is no longer available as a standalone product, a derivative is part of every modern Intel processor in the form of the Intel ME system co-processor, and it's likely that a number of other Intel products (network cards, QAT accelerators, the Arc GPUs, etc.) use them as system controllers as well. Quark essentially came into existence as a "formalization" of the multiple "micro-x86" implementations inside Intel that were being used as embedded controllers for various non-CPU products.
> What would a massively-multicore (gpu-style with multi-hundreds or more of cores) one of these run like?
This is close to what the original Xeon Phi was. Essentially 60-ish Pentium cores, with modern SMT and 512-bit vector units added. It worked ... OK? If the software development story had been better (e.g. actual first-class support in GCC) I think they could have been a much bigger success, but the need for ICC back in the ICC-costs-real-money days and initially very expensive hardware certainly held them back. At times I do miss some of their behavior.
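For flavor, here's what first-class compiler support looks like today (my own minimal AVX-512 sketch with plain GCC, nothing Phi-specific; the original Knights Corner actually used its own IMCI ISA, and AVX-512 arrived with Knights Landing):

    /* Build with: gcc -mavx512f demo.c */
    #include <immintrin.h>
    #include <stdio.h>

    int main(void)
    {
        float a[16], b[16], c[16];
        for (int i = 0; i < 16; i++) { a[i] = i; b[i] = 100 - i; }

        __m512 va = _mm512_loadu_ps(a);    /* load 16 floats at once   */
        __m512 vb = _mm512_loadu_ps(b);
        __m512 vc = _mm512_add_ps(va, vb); /* 16 adds, one instruction */
        _mm512_storeu_ps(c, vc);

        printf("c[0]=%g c[15]=%g\n", c[0], c[15]); /* both 100 */
        return 0;
    }

Sixteen single-precision lanes per instruction is the same 512-bit width those Phi vector units provided.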
Arguably a number of the RISC-V-based "AI accelerators" on the market are basically new spins on the same idea: a bunch of small cores, plus large vector/tensor units.