Posted by runningmike 3 days ago
> Once threads actually run concurrently, libraries (which?) that never needed locking (contradiction?) could (will they or won't they?) start hitting race conditions in surprising (go on, surprise me) places.
If you're short on time: the paper reads a bit dry, but falls within the norm for academic writing. The GitHub repo shows work over several months in 2024 (leading up to the release of 3.13) and a rush from Dec 2025 to Jan 2026, probably to wrap things up for the release of this paper. All commits on the repo are from the author, but I didn't look through the code to check for any Copilot intervention.
Race-to-idle used to be the best path before multicore. Now it's trickier to determine how to clock the device, especially in battery-powered cases. This is why all modern CPU manufacturers are looking into heterogeneous compute (efficiency vs. performance cores).
Put differently, I don't think we should be killing ourselves over this at the software level. If you are actually concerned about the impact on raw energy consumption, you should move your workloads from AMD/Intel to ARM/Apple. Everything else would be noise compared to that.
So if you want maximum energy efficiency, you should choose your CPU carefully, but a prejudice like believing that ARM-based CPUs are always better is guaranteed to lead to incorrect decisions.
The Apple CPUs have exceptional, unmatched energy efficiency in single-threaded applications. But their energy efficiency in multi-threaded applications is no better than that of Intel/AMD CPUs made with the same TSMC CMOS fabrication process, so Apple can have only a temporary advantage, during the window when they are the first to use a process to which competitors do not yet have access.
Outside of personal computers, the energy efficiency that matters is that of multi-threaded applications, so there Apple has nothing special to offer.
For applications that use vector or matrix operations and may need some specific features, it is common to see a 4x to 10x better performance, or even more, when passing from a badly-designed ISA to a well-designed one, e.g. from Intel AVX to Intel AVX-512.
Moreover, there are ISAs that are guilty of various blunders which reduce performance several times over. For instance, if an ISA does not have rotation instructions, an application whose performance depends heavily on such operations may run up to 3x slower than on an ISA that has them.
Even greater slowdowns happen on ISAs that lack good means for detecting various errors, e.g. when running on RISC-V a program that must be reliable and therefore has to check every integer operation for overflow.