Posted by sandwichsphinx 5 days ago
Because of that, the act of reading the bit's value means that the data is destroyed. Therefore one of the jobs of the sense amplifier circuit - which converts the tiny voltage from the bit cell to the external voltage - is to recharge the bit.
But that stray capacitance is so small that it naturally discharges through the high, but not infinite, resistance when the transistor is 'off'. Hence you have to refresh DRAM by reading every bit frequently enough that it hasn't discharged before you get to it. In practice you only need to read every row that frequently, because there's actually a sense amplifier for each column, reading all the bit values in that row, with the column address strobe just selecting which column's bit gets output.
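To put rough numbers on "frequently enough" (standard JEDEC DDR3/DDR4 figures, not something from this thread): cells have to be refreshed within a 64 ms retention window, and with 8192 rows per refresh cycle that works out to one refresh command every 64 ms / 8192 ≈ 7.8 µs.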
In fact it took a while for CPUs or memory controllers to do it automatically, i.e. without the programmer having to explicitly code the refresh.
(I think I more or less know, but I’d rather talk about it than look it up this morning.)
One bit of DRAM is just one transistor and one capacitor. Massive density improvements; all the complexity is in the row/column circuitry at the edges of the array. And it only burns power during accesses or refreshes. If you don't need to refresh very often, you can get the power very low. If the array isn't being accessed, the refresh interval can be double-digit milliseconds, perhaps triple-digit.
Which of course leads to problems like rowhammer, where rows affected by adjacent accesses don't get additional refreshes like they should (because this has a performance cost -- any cycle spent refreshing is a cycle not spent accessing), and you end up with the RAM reading out different bits than were put in. Which is the most fundamental defect conceivable for a storage device, but the industry is too addicted to performance to tap the brakes and address correctness. Every DDR3/DDR4 chip ever manufactured is defective by design.
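For the curious, the access pattern behind rowhammer is tiny. Here's a minimal sketch in C for x86 -- the addresses, the iteration count, and the assumption that `a` and `b` map to different rows of the same bank are all illustrative, not taken from any real exploit:

    #include <emmintrin.h>   /* _mm_clflush */

    /* Repeatedly activate two rows in the same bank; the flushes force
       every read to go all the way to DRAM instead of being served from
       cache. Enough iterations can flip bits in the rows in between. */
    void hammer(volatile char *a, volatile char *b, long n) {
        for (long i = 0; i < n; i++) {
            (void)*a;                      /* activate row A */
            (void)*b;                      /* activate row B */
            _mm_clflush((const void *)a);  /* evict so the next read misses */
            _mm_clflush((const void *)b);
        }
    }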
And this task soon moved to memory controllers, or at least got done by CPUs automatically without need for explicit coding.
Back when it needed to be explicit code, what exactly was the code doing? I tried to find some example of what it might look like online but search is so muddy.
DRAM has destructive reads and is arranged in pages. When you read from a page, the entire contents of the page are read into an SRAM buffer inside the memory chip, the selected bit(s) are written out to the pins, and then the entire contents of the SRAM buffer are written back into DRAM.
For old DRAM, usually half the bits in an address selected the page, and the other half selected the word within the page (actually, often a single bit, extended to a full word by accessing multiple chips in parallel). Set your address lines so that the page address is in the low-order bits, and any linear read of length 2^(log2(DRAM chip size)/2) -- i.e., the square root of the chip size -- is sufficient to refresh all RAM. Many early computers made use of this to do the refresh as a side effect; for example, IIRC the Apple II was set up so that the circuitry updating the screen would also refresh the RAM.
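I don't have period code handy, but as a hedged sketch of what the explicit version could look like: a routine hung off a timer interrupt that reads one address per row, assuming 128 rows selected by the low-order address bits (the row count and base address here are made up for illustration):

    /* Hypothetical software refresh for an old DRAM bank. The only real
       requirement is that consecutive addresses walk the row-select bits. */
    #define ROWS 128
    static volatile unsigned char *const dram = (unsigned char *)0x8000;

    void refresh_tick(void) {      /* called periodically from a timer ISR */
        for (unsigned int row = 0; row < ROWS; row++)
            (void)dram[row];       /* the read opens the row; the sense
                                      amplifiers rewrite every bit in it */
    }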
I think the long and short of it is that DRAM is cheap. DRAM needs one transistor per data bit. Competing technologies needed far more: SRAM, for example, needed six transistors per bit.
Dennard figured out how to vastly cut down complexity, and thus cost.
What’s the likely ETA for DRAM?
DRAM uses a capacitor, and with our traditional materials those capacitors essentially hit a hard limit at around 400MHz a very long time ago. This means that if you need to sequentially read random locations from RAM, you can't do it faster than 400MHz. Our only answer here is better AI prefetchers and less-random memory access patterns in our software (the penalty for not prefetching is so great that theoretically less efficient algorithms can suddenly become more efficient if they are simply more predictable).
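A toy illustration of the predictability point (a sketch, not a benchmark): summing an array sequentially lets the hardware prefetcher run ahead and hide the DRAM latency, while chasing a linked list serializes one full memory round trip per element, because the next address isn't known until the current load returns:

    #include <stddef.h>

    /* Sequential: the address stream is predictable, so the prefetcher
       can fetch ahead and the DRAM latency is mostly hidden. */
    long sum_array(const long *a, size_t n) {
        long s = 0;
        for (size_t i = 0; i < n; i++) s += a[i];
        return s;
    }

    /* Pointer chasing: each load depends on the previous one, so every
       element costs a full round trip to memory if the list is cold. */
    struct node { long v; struct node *next; };
    long sum_list(const struct node *p) {
        long s = 0;
        for (; p; p = p->next) s += p->v;
        return s;
    }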
As to capacitor sizes, we've been at the volume limit for quite a while. When the capacitor is discharged, we must amplify the charge. That gets harder as the charge gets weaker and there's a fundamental limit to how small you can go. Right now, each capacitor has somewhere in the range of a mere 40,000 electrons holding the charge. Going lower dramatically increases the complexity of trying to tell the signal from the noise and dealing with ever-increasing quantum effects.
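For a sense of scale (my arithmetic, assuming the cell charges to roughly 1 V): 40,000 electrons is 40,000 × 1.6e-19 C ≈ 6.4 fC of charge, which at ~1 V corresponds to a cell capacitance of only ~6.4 fF.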
Packing more capacitors closer together means a smaller diameter, but keeping the same volume then means making the cylinder longer. You quickly reach a point where even dramatic increases in height (something very complicated to do in silicon) give only minuscule decreases in diameter.
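The geometry is just the cylinder-volume formula: holding V = πr²h constant means r ∝ 1/√h, so halving the diameter requires quadrupling the height. That square-root relationship is why big height increases buy only tiny diameter decreases.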
I built a little CPU in undergrad but never got around to building RAM and admit it’s still kind of a black box to me.
Bonus question: When I had an Amiga, we’d buy 50 or 60ns RAM. Any idea what that number meant, or what today’s equivalent would be?
When we moved from SDR to DDR1, latencies dropped from 20-25ns to about 15ns too, but if you run the math, we've been at 13-17ns of latency ever since.
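Back-of-envelope math for that claim (the module numbers are examples, not from this thread): DDR4-3200 CL22 runs a 1600 MHz clock, so CAS latency is 22 × 0.625 ns ≈ 13.75 ns -- squarely in that 13-17 ns band. The Amiga-era "60ns", as far as I know, was the chip's rated full access time, so the two numbers measure roughly the same round trip decades apart.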
There was briefly a product called vCage that loaded a whole secure hypervisor into cache-as-RAM, with the goal of being secure against DRAM-remanence ("cold-boot") attacks, where the DIMMs are fast-chilled to slow charge leakage and removed from the target system to dump their contents. Since the whole secure perimeter was on-die in the CPU, it could use memory encryption to treat the DRAM as untrusted.
So, yeah, you can do it. It's funky.
When SMP first came out we had one large customer that wanted to manually handle scheduling themselves. That didn’t last long.
• 2Kbit SRAM Cache Memory for 15ns Random Reads Within a Page
• Fast 4Mbit DRAM Array for 35ns Access to Any New Page
• Write Posting Register for 15ns Random Writes and Burst Writes Within a Page (Hit or Miss)
• 256-byte Wide DRAM to SRAM Bus for 7.3 Gigabytes/Sec Cache Fill
• On-chip Cache Hit/Miss Comparators Maintain Cache Coherency on Writes
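Those figures are self-consistent, incidentally: moving 256 bytes per 35 ns page access works out to 256 B / 35 ns ≈ 7.3 GB/s, which is exactly the quoted cache-fill bandwidth.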
Afaik only ever manufactured by a single vendor, Ramtron https://bitsavers.computerhistory.org/components/ramtron/_da... and only ever used in two products:
- Mylex DAC960 RAID controller
- Octek HIPPO DCA II 486-33 PC motherboard https://theretroweb.com/motherboards/s/octek-hippo-dca-ii-48...
https://jcmit.net/memoryprice.htm
Recently DDR4 RAM is available at well under $2/GB, some closer to $1/GB.
I don't think the latter (SRAM capacity remaining the same per area?) has anything to do with Dennard scaling.
Update: Today, marking the 56th anniversary...1966
Please forgive my pedantry but 58th. It was a busy year.
Going from 8MB to 32MB in the 90s is still comparable to going from 8GB to 32GB today.
One difference is just that the price isn't dropping at the same rate anymore [1], so it doesn't make as much sense to buy small and re-buy next year when bigger chips are cheaper (they won't be much cheaper).
Another is that DRAM speed is at the top of an S-curve [2], so there's not that same increase in speed year-over-year, though arguably the early 2000s were when speeds most dramatically increased.
[1] https://aiimpacts.org/trends-in-dram-price-per-gigabyte/
[2] http://blog.logicalincrements.com/2016/03/ultimate-guide-com...
This statement makes it difficult to believe you were there.
We maxed our Tandy (Radio Shack) 286 boxen, juiced to a blistering 8MHz, with 2.5MB RAM. I got blisters from stuffing RAM into daughter boards.
https://en.wikipedia.org/wiki/Expanded_memory#Expansion_boar...
Twas the early years of PC-based image scanning and processing. We only had 1 image capture board. The boot's POST duration for 2.5MB was ridiculous. So after a capture, my coworker (hi Tim Hall!) would ever so quickly yank the board out of a running box (now doing image processing) to be used in another box.
Multitasking!
As I remember it, the jump to 386 (much delayed by Micron's protectionist racket) was the next biggest step up.
8MB -> 32MB enabled entirely new categories of (consumer-level) software. It would be a game changer for performance as you were no longer swapping to (exceedingly slow) disk.
They simply are not comparable, imo. 8MB to 32MB was a night-and-day difference, and you would drool over the idea of someday being able to afford such a luxury. 8GB to 32GB was, at least until very recently, a nice-to-have for power users.
I remember a few years before that you'd zoom in on a drawing and do as much as you could without zooming back out because it would take a full minute to redraw the screen. And then another minute to zoom back in somewhere else.
In 1992 the standard desktop was still a 386 + 4MB, with high-end being a 486 + 8MB. A 1MB SIMM was $30-50. 4MB was $150 in January 1992, dropping to $100 in December 1992, and back up to $130 in December 1994.
Afaik 72-pin SIMMs were first introduced in the 1989 IBM PS/2 (55SX? a proprietary variant) and later, around 1993, in clones. You could run 1, 2, 3, or 4 SIMMs of any size independently. In December 1994 a 2MB 72-pin SIMM was $80, 4MB $150, 8MB $300, 16MB $560, 32MB $1200, 64MB $2800.
A 486DX2-66 itself was ~$300, plus $100 for a VLB motherboard; meanwhile $1100 got you a Pentium 90MHz with a PCI motherboard. In December 1994, for the price of a 486 with 32MB RAM ($1600) you could have bought a P90 with 16MB ($1660).
Now my 8GiB Apple mini feels overworked and swaps a lot (quite noticeably so, due to the spinning-rust drive), while my laptop with 32GiB never breaks a sweat.
The real jump was when I finally left behind my 8-bit home computer (upgraded to 64+256KiB RAM, of which most was usable only as a RAM disk) and got the 386 with 4MiB (soon maxed out at 8MiB). Now, that was a game changer.
Even most AAA games will still run on 8GB RAM just fine.
The starting point really does matter.
Resolutions are actually a good example of things at the low end of today's scale: 1920x1080 * 24 bits/pixel = 6,220,800 bytes. So 6 megs, just to store one 16-million-colour screen's state.
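The same arithmetic at today's high end: 3840x2160 * 3 bytes ≈ 24.9 MB per frame, and at 60 fps that's roughly 1.5 GB/s of raw pixel data before any compression.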
Normal video, not so much.
But yes, for resolution the angular pixel size is what matters and since VR covers more of your field of view you also need more pixels for the same results.
Going from the Pacific Ocean to the Seven Seas is still lots of water.
The feeling isn't even remotely the same...
In 1994, though, an 11 year old computer would already be considered vintage. In 1983 the hot new computer was the Commodore 64. In 1994 everyone was upgrading their computers with CD-ROM drives so they could play Myst.
It was definitely more of a curiosity and a toy rather than a serious computer in 1994.
And at that time I also learned how critical it was to check your RAM for errors. I reinstalled Win98 and Windows 2000 so many times before I figured this out.
I think Microsoft is making Windows slow to prepare people for when they move everything on your computer to their cloud.
tinfoil hat off.
Currently sporting this - G.Skill Trident Z5 Neo RGB Black 64GB (2x32GB) PC5-48000 (6000MHz) with a 7800x3d.
Previous kit was G.Skill Trident Z Neo RGB 32GB (2x16GB), PC4-28800 (3600MHz) DDR4, 16-16-16-36 [X2, eventually, for 64 total] with, you guessed it, the 5800x3d, from the 3900xt - my son loves it.
I've actively shopped for low-latency RAM - within reason, but I have paid good premiums, especially in DDR4 days. For DDR5, there can be surprisingly little price-wise to differentiate e.g. CL30 from CL32, so whilst it may not offer the greatest of differences, if you're already paying e.g. $350 (AUD) for a kit at CL32, the improved latency might be just $20 more at the same speed.
(I see that things have moved on a bit from last September when I did my last upgrade; now we have CL32 at higher speeds, so maybe that's the go to now.)
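To put numbers on that: at DDR5-6000 the memory clock is 3000 MHz (1/3 ns per cycle), so CL30 ≈ 10.0 ns and CL32 ≈ 10.7 ns -- about two-thirds of a nanosecond between the kits.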
Though I guess the 90s are _the_ best tech era by far, and will be for some time to come, because that's when capable and modular computing machines became a real commodity.
And it's 2024; I've owned a 4K LG OLED display for years now. Why not leverage it? Just because 1080p is 'plenty'?
Netflix 4K is 15 Mbit/s.
So unless I see people mentioning the media, I am always wary of the comparison.
Nonetheless, I do think that compression from a high-res source looks different / sharper.
As opposed to Netflix where you press a button and you're watching something.
People out there aren't much like the people who post here - when they get home from their crappy job they hate, finally make dinner because they can't afford food delivery, kick off their shoes on the old couch they bought at a yard sale and turn on their 15 year old TV they want it to just work. They don't have the interest or energy to fuck around with things that many here find fun and interesting and exciting.
This post is giving me major "why would you need Dropbox when you can rsync?" vibes: https://news.ycombinator.com/item?id=18255896
1: I'm using the term "HT setup" rather broadly, as the primary location in a residence for watching movies as a group; it includes e.g. people who don't own a TV and watch movies on their laptop sitting on a coffee table. Setups where the display covers over 40% of the FOV (where 4k definitely makes a difference) are somewhere in the top 5%.
For commercial purposes it's another story and it makes sense to consider shooting in 8K if possible, thus the option should exist.
Different use cases exist:
Record 8K text and you can zoom in and read things. Record 8K and crop without quality loss, or 'zoom' in.
Does everyone need this? Probably not, but we are on HN, not at a coffee party.
Honest question. I hope I learn something about studying minerals!
Trying to show something that is literally one pixel at 400x magnification at 1080p is no fun. Even a few more pixels helps.
On a large TV, though, it's probably an improvement over 4K for sports where you need to track a small item moving fast.