This is a typo here, right? 1mm is thicker, not thinner, than 750 micrometers. I assume 1µm was meant?
https://www.cadence.com/en_US/home/resources/white-papers/th...
It's the holy grail of having thermal conductivity somewhere between aluminum and copper, while being as electrically insulating as a ceramic. You can put the silicon die directly on it.
Problem is that the dust from it is terrifyingly toxic, but in its finished form it's "safe to handle".
Nobody, presumably :)
Why mess with BeO when there is AlN, with higher thermal conductivity, no supply limitations and no toxicity?
Edit: I've just checked, practically available AlN substrates still seem to lag behind BeO in terms of thermal conductivity.
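For rough orientation, here are ballpark room-temperature thermal conductivities from memory (not from the linked sources, so take the exact numbers with a grain of salt):

```python
# Approximate room-temperature thermal conductivities, W/(m*K).
# Ballpark textbook values from memory; practical AlN substrates vary a lot
# with purity and sintering aids, and the BeO figure is likewise approximate.
k_w_per_m_k = {
    "copper": 400,
    "AlN (theoretical, per the Wikipedia excerpt below)": 321,
    "BeO (beryllia)": 285,
    "aluminum": 237,
    "AlN (typical commercial substrate)": 170,
    "Al2O3 (alumina)": 30,
}

for name, k in sorted(k_w_per_m_k.items(), key=lambda kv: -kv[1]):
    print(f"{name:55s} ~{k}")
```

Which is consistent with the point above: theoretical AlN looks great, but the substrates you can actually buy sit well below BeO.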
""" Aluminium nitride (AlN) is a solid nitride of aluminium. It has a high thermal conductivity of up to 321 W/(m·K)[5] and is an electrical insulator. Its wurtzite phase (w-AlN) has a band gap of ~6 eV at room temperature and has a potential application in optoelectronics operating at deep ultraviolet frequencies.
...
Manufacture
AlN is synthesized by the carbothermal reduction of aluminium oxide in the presence of gaseous nitrogen or ammonia or by direct nitridation of aluminium.[22] The use of sintering aids, such as Y2O3 or CaO, and hot pressing is required to produce a dense technical-grade material.[citation needed]

Applications
Epitaxially grown thin film crystalline aluminium nitride is used for surface acoustic wave sensors (SAWs) deposited on silicon wafers because of AlN's piezoelectric properties. Recent advancements in material science have permitted the deposition of piezoelectric AlN films on polymeric substrates, thus enabling the development of flexible SAW devices.[23] One application is an RF filter, widely used in mobile phones,[24] which is called a thin-film bulk acoustic resonator (FBAR). This is a MEMS device that uses aluminium nitride sandwiched between two metal layers.[25] """
Speculation: its present use suggests that at commercially viable quantities it might be challenging to use as a thermal interface compound. I had also never previously considered the capacitive properties of packaging components, and realize of course that's required. Use of AlN as a heat conductor is so far outside of my expertise...
Could a materials expert elaborate how viable / expensive this compound is for the rest of us?
Because aluminum nitride is not as good as beryllia, packages with beryllia have survived for some special applications, like military, aerospace or transistors for high-power radio transmitters.
Those packages are not dangerous, unless someone attempts to grind them, but their high price (caused by the difficult manufacturing techniques required to avoid health risks, and also by the rarity of beryllium) discourages their use in any other domains.
Doesn't that mean it would be problematic for electronics recycling?
I can't help but wonder, where exactly is that heat supposed to go on the underside of the chip? Modern CPUs practically float atop a bed of nails.
A toroidal shape would allow more interconnects to be interspersed throughout the design, as well as more heat-transfer points alongside the data-transfer interconnects.
Something like chiplet design where each logical section is a complete core or even an SOC with a robust interconnect to the next and previous section.
If that were feasible, you could build it onto a hollow tube structure so that heat could be piped out from all sides once you sandwich the chip in a wraparound cooler.
I guess the idea is more scifi than anything, though. I doubt anyone other than ARM or RISC-V would ever even consider the idea until some other competitor proves the value.
Rip out all the special purpose bits that make it non-uniform, and thus hard to route.
Rip out all of the long lines and switching fabric that optimizes for delays, and replace it all with only short lines to the neighboring cells. This greatly reduces switching energy.
Also have the data needed for every compute step already loaded into the cells, eliminating the memory/compute bottleneck.
Then add a latch on every cell, so that you can eliminate race conditions, and the need to worry about timing down to the picosecond.
This results in a uniform grid of look-up tables (LUTs) that get clocked in two phases, like the colors of a chessboard. Each cell thus has stable inputs, as they all come from the other phase, which is latched.
I call it BitGrid.
I'd give it a 50/50 chance of working out in the real world. If it does, it'll mean cheap PetaFlops for everyone.
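To make that concrete, here is a minimal toy sketch of the two-phase LUT-grid idea as I read it (my own interpretation, not necessarily the actual BitGrid design; the 4-input/1-output cell and the toroidal wrap are assumptions):

```python
# Toy two-phase LUT fabric: each cell is assumed to be a 4-input, 1-output LUT
# fed by its N/E/S/W neighbours (toroidal wrap-around for simplicity).
# Cells update by checkerboard colour, so a cell only ever reads outputs
# latched in the opposite phase - no combinational races to analyse.
import random

W, H = 8, 8  # arbitrary grid size for the demo

# One 16-entry truth table per cell: 4 neighbour bits index a single output bit.
luts = [[random.randrange(1 << 16) for _ in range(W)] for _ in range(H)]
state = [[0] * W for _ in range(H)]  # latched output of every cell


def neighbour_bits(x, y):
    """Latched outputs of the N, E, S, W neighbours."""
    return (
        state[(y - 1) % H][x],
        state[y][(x + 1) % W],
        state[(y + 1) % H][x],
        state[y][(x - 1) % W],
    )


def half_step(phase):
    """Update only cells whose checkerboard colour matches `phase` (0 or 1)."""
    nxt = [row[:] for row in state]
    for y in range(H):
        for x in range(W):
            if (x + y) % 2 != phase:
                continue
            n, e, s, w = neighbour_bits(x, y)
            idx = (n << 3) | (e << 2) | (s << 1) | w
            nxt[y][x] = (luts[y][x] >> idx) & 1
    state[:] = nxt


# One full clock = both phases; inputs always come from the previously
# latched phase, so timing closure is trivial by construction.
for _ in range(10):
    half_step(0)
    half_step(1)
print(*("".join(map(str, row)) for row in state), sep="\n")
```

Everything a real design would need (multi-bit cell outputs, I/O at the edges, a way to load the truth tables) is left out; the point is just the chessboard two-phase latching.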
But neural networks are non-Von Neumann, and we 'program' them using backprop. This can also be applied to cellular automata.
Yes, gas centrifuge appears to be a leading method.
'The purification starts with “simple” isotopic purification of silicon. The major breakthrough was converting this Si to silane (SiH4), which is then further refined to remove other impurities. The ultra-pure silane can then be fed into a standard epitaxy machine for deposition onto a 300-mm wafer.'
https://www.eejournal.com/article/silicon-purification-for-q...
A rocket and a sandblaster at the same time.
"Chemistry of the Main Group Elements - 7.10: Semiconductor Grade Silicon"
https://chem.libretexts.org/Bookshelves/Inorganic_Chemistry/...
Of course cost would have to be acceptable.
Isotopically pure diamond, now there's something to look at.
https://en.wikipedia.org/wiki/Isotopically_pure_diamond
"The 12C isotopically pure, (or in practice 15-fold enrichment of isotopic number, 12 over 13 for carbon) diamond gives a 50% higher thermal conductivity than the already high value of 900-2000 W/(m·K) for a normal diamond, which contains the natural isotopic mixture of 98.9% 12C and 1.1% 13C. This is useful for heat sinks for the semiconductor industry."
https://spectrum.ieee.org/silicon-quantum-computing-purified...
I wonder if we could actually use all that heat for something useful.
First I'm hearing of this. Last I checked, air coolers had basically reached parity with any lower-end water cooled setup.
My guess is manufacturers don't want to tell people they should air cool if it requires listing specific models. It's easy to just say they recommend water cooling since basically all water coolers will provide adequate performance.
In my case, two fans on the CPU pointing towards the rear exhaust fan, plus six fans of 120 mm or larger pushing air through the case, will _hopefully_ remain sufficient.
That said, I think liquid cooling has reached critical mass. AIOs are commonplace.
I think it would be (uh) cool to have an extra-huge external reservoir and fan (think a motorcycle or car radiator, plus maybe a tank) that could be nearly silent and cool the CPU and GPU.
Even though I think it is very likely that a $40 cooler like the one you mentioned would work well enough, when I build a new computer with a top-model AMD Ryzen CPU, which dissipates up to 200 W in steady-state conditions, I will certainly buy a Noctua cooler for it. A computer with an Intel Arrow Lake S CPU would be even more demanding, as those can dissipate well over 250 W in steady-state conditions.
The reason is that by now I have experience with many Noctua coolers that have worked for 10 years or more, even 24/7, with perfect reliability, low noise and low temperatures.
I am not willing to take the risk of experimenting with a replacement, so for my peace of mind I prefer proven solutions, both for coolers and for power supply units (for the latter I use Seasonic).
Noctua knows that many customers think like this, so they charge accordingly.
But Noctua fans are not only reliable, they are also really quiet.
Your ears are worth it.
It's still gonna be louder than a water cooler, if that's your primary concern. Otherwise, other air coolers are only marginally louder and just as effective at half the price (if not less).
Specifically the pumps. Those things have an obnoxious high-pitched whine that I personally find unbearable, especially during low/idle workloads.
It's possible that the actual dB level is lower, but the frequency and sound characteristics matter. A lot.
*Technically the truth
Sometimes the solution is worse than the problem. My favorite example is the TRS-80 Model II and its descendants, with the combination of the fan and disk drives so loud that users experienced physical discomfort. <https://archive.org/details/80-microcomputing-magazine-1983-...>
- Inner voice: "You don't miss the old PC noises, you just miss those times".
- Shut up!
But this only simulates keyboard and mouse click sounds. In any case, you wrote "whenever you start a game or app" (my emphasis). The Model II's fan and drive noises are 100% present from start to finish, with the combination enough to drive users insane (or at least make them not want to use the $5,000-10,000 computer).
The Model 12 and 16 improved on the design, sporting Tandon "Thinline" 8" drives that ran on DC and spun down when not in use, leaving fan noise that was quite tolerable.
Even worse than the TDP was the fact that the 90 nm Pentium 4 had huge leakage current, so its idle power consumption was about half of the maximum power consumption, e.g. in the range 50 to 60 W for the CPU alone.
Moreover, at that time (2004) the cooler makers were not prepared for such a jump in the idle power consumption and maximum power consumption, so the only coolers available for Pentium 4 were extremely noisy when used with 90 nm Pentium 4 CPUs.
I remember that at the company where I worked we had a great number of older Pentium 4 CPUs, which were acceptable, and then we got a few upgrades with the new Prescott Pentium 4. The noise, even when the computers were completely idle, was tremendous. We could not stand it, so we returned the computers to the vendor.
A current AMD CCD is ~70 mm² and can dissipate around 120 W or so on that area. E.g. the 9700X has one CCD and up to a 142 W PPT; roughly 20 W goes to the IOD and ~120 W into the CCD.
edit: (1) this account/IP range is limited to a handful of comments per day, so I cannot reply directly, having exhausted my allotment of HN comments for today; (2) I do not understand what you take offense at, because I did not "change [my] original argument": you claimed a P4 die is much smaller, I gave a counterexample, and then made the example more specific in response to your comment (by adding the "E.g. ..." bit with an example of a SKU and how the power would approximately split up).
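For what it's worth, a quick back-of-the-envelope on areal power density (the 9700X split is the one given above; the Prescott die size and power are rough figures from memory, so treat them as assumptions, not data):

```python
# Rough areal power density comparison. The 9700X numbers are the split given
# above (~120 W into a ~70 mm^2 CCD); the Prescott figures (~112 mm^2 die,
# ~115 W max) are approximate values from memory, not authoritative.
parts = {
    "Ryzen 9700X CCD": (120.0, 70.0),      # (watts, die area in mm^2)
    "Pentium 4 Prescott": (115.0, 112.0),
}

for name, (watts, area_mm2) in parts.items():
    print(f"{name}: ~{watts / area_mm2:.2f} W/mm^2")
```

Even with generous error bars on the Prescott numbers, the modern CCD comes out around 1.7 W/mm² versus roughly 1 W/mm² for the P4, which was the point of the comparison.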
Some coolers today still look like that but they're on chips drawing 35W or so while idling at <2W.