
Posted by Amorymeltzer 3 days ago

Boring is what we wanted (512pixels.net)
437 points | 267 comments
KolibriFly 2 days ago|
So we can't demand both stability and constant reinvention
stego-tech 2 days ago||
For those of us immersed in hardware fandom, the cycle is neither new nor disappointing - if anything, a lot of us relish the “boring” times, because it means we can finally squeeze performance out of our investment without fretting about arbitrary replacement timelines or major improvements in technology leading to gargantuan gains. It’s nice, quiet, and lets us enjoy the fruits of our labors and hobbies.

That being said, I do kind of head-tilt at the folks screaming that this sort of “boring” cycle of hardware isn’t sustainable, that somehow, someone must create the next major improvement to justify all new spend or otherwise this is a worthless exercise. In reality, it’s always been the opposite: Moore’s Law wasn’t infinitely scalable, and anyone who suffered through the Pentium 4 era was painfully aware of its limitations. Sure, we can find other areas to scale (like going from clock speed to core counts, and core counts to core types), but Moore’s Law is not infallible or infinite; eventually, a plateau will be reached that cannot be overcome without serious R&D or a fundamental sea-change in the marketplace (like moving from x86 to ARM), often a combination of both.

Apple, at least, has the unenviable position of being among the first to address this challenge: how do you sell more products when power or efficiency gains are increasingly thin, year over year? Their approach has been to leverage services for recurring revenue and gradually slow down product refreshes over time, while tempering expectations of massive gains for those product lines seeing yearly refreshes. I suspect that will be the norm for a lot of companies going forward, hence the drive to close walled gardens everywhere and lock in customers (see also the Android sideloading discourse).

The hardware cycle at present is fairly boring, and I quite like it. My M1 iPad Pro and M1 Pro MacBook Pro dutifully serve me well, and I have no need to replace either until they break.

gotschi_ 2 days ago||
I just want the old MacBook Air M1 design back :(
noname120 2 days ago||
What don’t you like about the new design?
DeltaCoast 2 days ago||
That’s what I am holding on to.
philipwhiuk 3 days ago||
What happened to the M3 GPU to give it a drop in score?
adastra22 3 days ago||
Same reason Asahi Linux only supports up to the M2: they completely redid the system architecture.
wmf 3 days ago||
When you make a major architecture change (e.g. dynamic caching) there's always one or two workloads that get slower.
fullofdev 3 days ago||
Agree! Very happy with the M4 performance.
LarsDu88 3 days ago||
Reading this on my brand new M5 Mac :)
kristianp 3 days ago||
This seems like a straw man. Are reviewers really calling the M5 boring?
bigyabai 3 days ago||
We want Apple to compete. When they stopped signing CUDA drivers, I thought it was because Apple had a competitive GPGPU solution that wasn't SPIR-V in a trenchcoat. Here we are 10 years later with SPIR-V in a trenchcoat. The lack of vision is pathetic and has undoubtedly cost Apple trillions in the past half-decade alone.

If you think this is a boring architecture, more power to you. It's not boring enough for me.

tel 3 days ago|
Genuine question, how does SPIR-V compare with CUDA? Why is SPIR-V in a trench coat less desirable? What is it about Metal that makes it SPIR-V in a trench coat (assuming that's what you meant)?
XorNot 3 days ago||
At this stage of the game what people want is CUDA. I just bought a new GPU and the only requirement I had was "must run reasonably modern CUDA".
p-o 3 days ago||
There might be a subset of people, such as yourself, who treat CUDA as a hard requirement when buying a GPU. But I think it's fair to say that Vulkan/SPIR-V has a _lot_ of investment and momentum currently outside of the US AI bubble.

Valve is spending a lot of resources and AFAIK so are all the AI companies in the Asian market.

There are plenty of people who want an open-source alternative that breaks the monopoly that Nvidia has over CUDA.

tracker1 3 days ago|||
I think the new AMD R9700 looks pretty exciting for the price. Basically a power-tweaked RX 9070 with 32GB of VRAM and pro drivers. Wish it was an option 6-7 months ago when I put my new desktop together.
adastra22 3 days ago|||
Great. I’m with you there. There is no way that’s describing Apple though.

They’re not open source, for sure. But even setting that aside, they don’t offer anything like CUDA for their system. Nobody is taking an honest stab at this.

nl 3 days ago||
Triton is a very compelling alternative to CUDA for many applications. Many people know it from the outstanding Unsloth kernels.

https://triton-lang.org/main/python-api/triton.language.html

Mojo has support for Apple Silicon kernels: https://forum.modular.com/t/apple-silicon-gpu-support-in-moj...
