Posted by necubi 10/23/2024

Arm is canceling Qualcomm's chip design license(www.bloomberg.com)
622 points | 450 comments | page 3
lincpa 10/23/2024|
[dead]
daeros 10/23/2024||
I hope Qualcomm wins and wins its countersuits too.
daeros 10/23/2024|
[flagged]
dang 10/23/2024|||
We've banned this account for repeatedly breaking HN's guidelines and ignoring our request to stop.

If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.

f1shy 10/23/2024|||
Apple sheep?
daeros 10/23/2024||
[flagged]
black_13 10/23/2024||
[dead]
snvzz 10/23/2024||
Bloomberg disappoints by failing to mention RISC-V at all in the entire article.

They have to be doing this deliberately, as it's hard to explain otherwise.

duxup 10/23/2024|
If the comments in here are correct, RISC-V is really not an option at this time due to performance.
snvzz 10/23/2024||
>due to performance

That would require pretending Ventana Veyron V2, Tenstorrent Ascalon/Alastor, SiFive P870, Akeana 5000-series and others do not exist or do not yet have any customers.

Pretending, because they actually exist, have customers, and are thus bound to show up in actual products anytime now.

duxup 10/23/2024||
I don’t feel like you addressed the performance issue.

I don’t think anyone said they don’t exist.

talldayo 10/23/2024||
Could be the best thing that's ever happened for RISC-V!
bhouston 10/23/2024||
Most people do not realize how slow RISC-V is right now. Yes, it will definitely get better, but it will take some time given how far behind it is.

Like 30x slower than a top of the line Apple Mx series CPU. Maybe there is a high performing RISC-V chip out there but I haven't yet run into one.

RISC-V benchmarks: https://browser.geekbench.com/search?q=RISC-V. Compare to an Apple M4 benchmark: https://browser.geekbench.com/v6/cpu/8224953

That said, RISC-V is good for embedded applications where raw performance isn't a factor. I think no other markets are yet accessible to RISC-V chips until their performance massively improves.

qiqitori 10/23/2024|||
There is a chip out there that contains both an ARM and a RISC-V core, the RP2350. It's reasonable to assume that the ARM part and RISC-V part are manufactured in the same process. There are some benchmarks pitting the two against each other on e.g. this page: https://forums.raspberrypi.com/viewtopic.php?t=375268

For a generic logic workload like Fibo(24), the performance is essentially the same (quote from above page):

    Average Runtime = 0.020015 Pico 2
    Average Runtime = 0.019015 Pico 2 RiscV
Note that neither core on the RP2350 comes with advanced features like SIMD.
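The quoted benchmark is just naive recursive Fibonacci; a minimal sketch of that kind of "generic logic workload" (in Python here rather than the C used on the Pico 2, purely for illustration):

```python
import time

def fib(n):
    # Naive recursion: exercises calls, branches, and integer ALU ops only,
    # with no SIMD or FPU involvement -- which is why the ARM and RISC-V
    # cores on the RP2350 land essentially tied on it.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

start = time.perf_counter()
result = fib(24)  # -> 46368
print(result, f"{time.perf_counter() - start:.6f}s")
```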
bhouston 10/23/2024|||
It is true you can find slow ARM chips. But you cannot find fast RISC-V chips.
talldayo 10/23/2024||
I wager that statement would be turned on its head if we restricted the comparison to chips of similar transistor density. Fast ARM chips do exist, as ARMv8 designs fabbed on 5nm TSMC with noncompliant SIMD implementations. If there were RISC-V chips in the same vein as Ampere or Nvidia's Grace CPU, I don't see any reason why they couldn't be more competitive than an ARM chip that's forced to adhere directly to the ISA spec.

RISC-V hedged its bet by creating different design specs for smaller edge applications and larger multicore configurations. Right now ARM is going through a rift where the last vestiges of ARMv6/7 support are holding out for the few remaining 32-bit customers. But all the progress is happening on the bloating ARMv9 spec that benefits nobody but Apple and Nvidia. For all of ARM's success in the embedded world, it would seem like they're at an impasse between low-power and high-power solutions. RISC-V could do both better at the same time.

05 10/23/2024|||
It's a Cortex-M33, a 32-bit microcontroller core with no virtual memory. Are we really comparing microcontrollers to modern aarch64 processors?
delfinom 10/23/2024||
Yes? Because nobody has released a RISC-V MPU comparable to what you perceive as "modern" arm64 MPUs.

RISC-V is simply an ISA, not a core. The ISA affects some of the core architecture, but the rest is implementor-specific. High-end cores will take time to reach market. Companies with big guns like Qualcomm could most likely pump one out if they wanted to, and will most likely do so in the future, since they are pumping over $1 billion into the effort.

kimixa 10/23/2024||
How you design a core is very different based on if you're targeting ultra-low-power tiny microcontroller designs vs high performance and high power laptop/desktop-tier designs.

And it's not been proven that RISC-V is a good match for the second group (yet).

Remember it's sometimes very non-obvious what quirks of an ISA might be difficult until you actually try to implement it - one of the reasons ARM had a pretty much "clean sheet" rewrite in ARMv8 is things like the condition codes turned out to be difficult to manage in wide superscalar designs with speculative execution - which is exactly the sort of thing required to meet the "laptop-tier" design performance requirements.

It may be they've avoided all those pitfalls, but we don't really know until it's been done.

hajile 10/23/2024||||
We're not quite there yet. A bunch of mission critical stuff like SIMD were only added in the last 2-3 years. As it takes 4-5 years to design/ship high-performance chips, we still have a ways to go.

Ventana Veyron looks interesting. Tenstorrent's upcoming 8-wide design should perform well.

Qualcomm pitched making a bunch of changes to RISC-V that would move it closer to ARM64 and make porting easier, so I think it's an understatement to say that they are considering the idea. If ISA doesn't matter, why pay tons of money for the ISA?

shash 10/23/2024|||
There were two competing SIMD specs, and personally I'm glad that RVV won out over PSIMD. It's an easier programmer's view and fewer instructions to implement.
snvzz 10/24/2024||
RVV was not going to lose this one. RVV's roots run deeper than RISC-V's.

RISC-V was created because UC Berkeley's vector processor needed a scalar ISA, and the incumbents were not suitable. Then, it uncovered pre-existing interest in open ISAs, as companies started showing up with the desire for a frozen spec.

Legend is that MIPS quoted UC Berkeley some silly amount. We can thank them for RISC-V. Ironically, they ended up embracing RISC-V as well.

bhouston 10/23/2024|||
It is much more than just SIMD.

I think RISC-V chips in the wild do not do things like pipelining, out-of-order, register renaming, multiple int/float/logic units, speculation, branch predictors, smart caching.

I think all existing RISC-V chips in the wild right now are just simplistic in-order processors.

hajile 10/23/2024|||
You are wildly mistaken here.

Back in 2016, BOOMv1 (Berkeley Out-of-Order Machine) had pipelining, register renaming, 3-wide dispatch, a branch predictor, caches, etc. A quick Google search seems to indicate that it was started in 2011 and had taped out 11 times by 2016 (with actual production apparently being done on IBM 45nm).

They are on BOOMv3 now.

shash 10/23/2024||||
Almost all in-order processors will do pipelining, so that's there. Many are even multi-issue. Andes has an out-of-order core [1] and so does SiFive [2] (though I don't know of many actual chips using these).

[1] https://www.andestech.com/en/2024/01/05/andes-announces-gene... [2] https://www.sifive.com/cores/performance-p650-670

avvvv 10/23/2024|||
Could patents be one of the reason why? Genuine question.
IshKebab 10/23/2024||||
You're confusing the ISA with the chip. Current RISC-V chips are slower than high performance ARM ones, but that's because you don't start by designing high performance chips! You start with small embedded cores and work your way up.

Exactly the same thing happened with ARM. It started in embedded, then phones, and finally laptops and servers. ARM was never slow, they just hadn't worked up to highly complex high performance designs yet.

wmf 10/23/2024||
>you don't start by designing high performance chips! You start with small embedded cores and work your way up.

I disagree. For example, the first PowerPC was pretty fast and went into flagship products immediately. Itanium also went directly for the high end market (it failed for unrelated reasons). RISC-V would be much better off if some beastly chips like Rivos were released early on.

snvzz 10/23/2024|||
The high end requires specifications that were not available until RVA22 and Vector 1.0 were ratified. First chips implementing these are starting to show up as seen in e.g. MILK-V Jupiter, which is one of the newest development boards in the market.

With the ISA developed in the open, the base specs microcontrollers can target would naturally tend to be ratified first, and thus microcontrollers would show up first. RVA22+V were ratified November 2021.

With the ISA developed inside working groups involving several parties, some slowness would be unavoidable, as they all need to agree on how to move forward. Hence the years-long gap between ratification of the privileged and unprivileged specs (2019) and RVA22+V.

RVA23 has just been ratified. This spec is on par with x86-64 and ARMv9, feature-wise. Yet hardware using it will of course in turn take years to appear as well.

bhouston 10/23/2024||
Isn't the problem the lack of advanced features for executing the current ISA with speed? I thought RISC-V chips seen in the wild do not do pipelining, out-of-order, register renaming, multiple int/float/logic units, speculation, branch predictors, multi-tier caching, etc. The lack of speed isn't really related to a few missing vector instructions.
monocasa 10/23/2024||
There's a lot of cores that do all of that.

Most cores are pipelined; it is RISC after all.

There are quite a few superscalar cores, even a c906 is superscalar.

The c910/c920 is an OoO, renaming core, with speculation.

What they're lacking is area and power. A ROB with six entries is not going to compete with a ROB of six hundred entries.

IshKebab 10/23/2024|||
The first PowerPC chip was introduced in 1992! Itanium was in 2001, wasn't a from-scratch design and was famously a disaster!

Not really comparable.

tightbookkeeper 10/23/2024|||
Is it slower by design or just because its implementations have not been aggressively optimized?
bhouston 10/23/2024||
I believe it is just lacking aggressive optimization. ARM is basically RISC as well, so it isn't an architectural limitation.
tightbookkeeper 10/23/2024||
(Naive question) then is it really a big hurdle for a company who knows how to make arm chips to try making riscv chips?
svnt 10/23/2024|||
The question is more of will your customers agree to go along with this major architectural shift that sets you back on price-performance-power curves by at least five years and moves you out of the mainstream of board support packages, drivers, and everything else software-wise for phones.

Also we should not pretend that ARM is just going to sit there waiting for RISC-V to catch up.

bhouston 10/23/2024|||
> The question is more of will your customers agree to go along with this major architectural shift that sets you back on price-performance-power curves by at least five years and moves you out of the mainstream of board support packages, drivers, and everything else software-wise for phones.

Embedded is moving to RISC-V where they have low performance needs.

One example is the Espressif line of CPUs - which have shipped over 1B units. They have moved most of their offerings to RISC-V over the last few years and they are very well supported by dev tools: https://www.espressif.com/en/products/socs

tightbookkeeper 10/23/2024|||
Yes. This caveat is most clear. I am wondering about the question of performance raised in this thread,
1123581321 10/23/2024||||
It’s not a big hurdle so long as they hire Jim Keller to rapidly improve yet another architecture. (Only half-joking.)
topspin 10/23/2024||
Jim Keller is actually working on this right now at Tenstorrent.
snvzz 10/23/2024||
Ascalon (claiming Zen5-tier performance) is done, and has been available for licensing for about a year now.

It was then made public that LG bought a license right away.

saagarjha 10/23/2024||
It’s easy to claim a lot of things.
snvzz 10/23/2024||
>It’s easy to claim a lot of things.

It certainly is easy to casually spread fear and doubt.

But it is really far-fetched to think that the people at Tenstorrent, who have successfully delivered very high performance microarchitectures in other companies before, are lying about Ascalon, and that LG is helping them do that.

It would be even more far-fetched to claim that the vendors of Ventana Veyron V2, SiFive P870, and the Akeana 5000-series, all of them high-performance IP available for licensing, are lying about performance.

saagarjha 10/23/2024||
[flagged]
snvzz 10/23/2024||
[flagged]
saagarjha 10/23/2024||
[flagged]
snvzz 10/23/2024||
[flagged]
saagarjha 10/23/2024||
[flagged]
bhouston 10/23/2024||||
I suspect not? I think the principles and methods of optimization are the same.

But I say this as a software guy who doesn't actually know CPU design.

dagmx 10/23/2024||||
Making the chip isn’t an issue for them. It’s the software compatibility post facto.
bluGill 10/23/2024|||
Well, you need several years to catch up - and those doing ARM are not standing still. It's the same problem big software rewrites have: some are successful, but it takes a large investment while everyone is still using the old stuff, which is better for now.
dagmx 10/23/2024|||
Maybe fine for Android but this will set their windows plans back another decade if it happens

It has taken them that long to make arm be a thing on windows and that’s building on people porting stuff to arm for Mac to finally get momentum.

RISC-V with windows will be an eternity to be feasible.

numpad0 10/23/2024|||
Just something I, as a random person, have been thinking: how likely is it that the next version of Windows is _not_ going to be something Linux-based with WINE+Bochs preinstalled?

Windows branding is now forever tied to x86/x64 Win32 legacy compatibility, while WSL has captured back a lot of webdevs from Mac. Google continues to push Chrome, but Electron continues to grow side by side. Lots of stuff is happening with AI on Linux too, with both Windows and Mac remaining consumer deployment targets. Phone CPUs are fast enough to run some games on WINE+Bochs.

At this point, would it not make sense for MS to make its own ChromeOS and bolt on an "LSW"?

dagmx 10/23/2024||
I think it’s almost certain that Microsoft will not be changing their kernel to Linux.

I think you're overestimating what percentage of users use WSL. They're an insignificant fraction of the user base.

And with games, I think you're also overestimating how good translation layers like Proton are, and how rapidly Microsoft advances DX as well.

snvzz 10/23/2024|||
>RISC-V with windows will be an eternity to be feasible.

Will it now?

Microsoft was already deeply involved in 2021, as per the RISC-V Foundation's technical talks at that year's summit. Ztso was pushed by them.

dagmx 10/23/2024||
Whether Microsoft has windows running on an architecture is a very different level from whether it’s feasible to use it as a daily driver on windows. The ecosystem is what matters for most people.

Windows for arm hails back to 2011. They’re only just now getting native arm ports for several major packages. That’s ~13 years for a well established architecture that’s used much more universally than RISC-V. They don’t even have arm ports for lots of software that has arm ports on macOS.

RISC-V will take an aeon longer to get a respectable amount of the windows ecosystem ported over.

snvzz 10/23/2024|||
>The ecosystem is what matters for most people.

Absolutely agree.

The key development Microsoft has demonstrated recently is the ability to run x86 Windows software on non-x86 Windows systems.

Now that this is in place (and it will only get better), there is no longer a chicken-and-egg situation.

Instead, what we have is a clearly defined path to migrate away from x86.

Atotalnoob 10/23/2024|||
Arm on windows may date to 2011, but it was mostly a side project with 1-2 maintainers. With sufficient investment, it shouldn’t take 13 years to build up RISC-V support.
snvzz 10/23/2024|||
This.

It is evident to anybody paying attention that Microsoft has RISC-V support well underway.

But even if they had to start from scratch, it would be much easier, thanks to ARM having paved the way.

MBCook 10/23/2024|||
Like everything else, it doesn't matter much. Windows ran on Itanium, Alpha, and, as pointed out, ARM for over a decade.

Without the ISVs, it’s a flop for consumers.

MS has had an abysmal time getting them to join in on ARM, only starting to have a little success now. Saying “Ha ha, just kidding, it’s RISC-V now” would be a disaster. That’s the kind of rug pull that helped kill Windows Mobile.

Emulators aren't good enough. They're a stop gap. Unless the new chip is so much better that it's faster under emulation than the old one was natively, no one will accept it for long. Apple's been there, but that's not where MS sits today.

And if your emulator is too good, what stops ISVs from saying “you did it for us, we don’t have to care”? So once again they don’t have to do it at all and you have no native software.

MS can’t drop their ARM push unless they want to drop all non-x86 initiatives for a long time.

snvzz 10/23/2024||
>And if your emulator is too good, what stops ISVs from saying “you did it for us, we don’t have to care”? So once again they don’t have to do it at all and you have no native software.

x86 emulation enables adoption.

Adoption means having a user base.

Having a user base means developers will consider making the platform a target.

>Saying “Ha ha, just kidding, it’s RISC-V now” would be a disaster.

Would it now? If anything, offering RISC-V support as well would further reinforce the idea that Windows is ISA-independent, and not tied to x86 anymore.

numpad0 10/23/2024|||
Switching CPU architecture is not about changing a compilation option; it's about eliminating decades-old assembly code, binaries, and third-party components and re-engineering everything to be self-hosted on-prem at the company. Commercial software companies are reckless, stupidly lazy, and unbelievably inept, so lots of them won't be able to do this, especially for the second time.

In case this translation was needed at all: the point is not a "-riscv" compilation option.

MBCook 10/23/2024|||
> If anything, offering RISC-V support as well would further reinforce the idea that Windows is ISA-independent, and not tied to x86 anymore.

Anymore? It’s been independent since the 90s. It’s only ISVs that have been an issue.

And a rug pull is a fantastic way to scare all the ISVs far far away.

6SixTy 10/24/2024|||
You sure? Microsoft dropped Alpha, MIPS, and PowerPC by the time Windows 2000 rolled around. Beyond that point, only the Xbox 360 and Itanium versions had anything different to the usual X86/64 offering.
MBCook 10/25/2024||
While there was only one popular choice, they’ve always kept it flexible. That was a core design decision.

But, as an example, Windows Phone 8 and later were based on the NT kernel. You already mentioned the 360.

snvzz 10/23/2024|||
Whose rug would even be pulled?
dagmx 10/23/2024|||
How do you imagine ARM having helped?
monocasa 10/23/2024||
The x86 emulator for one.
dagmx 10/23/2024||
Fair, though I don’t think translation is a good long term strategy. You need native apps otherwise you’re always dealing with a ~20-30% disadvantage.

The competition isn’t sitting still either and QC already hit this with Intel stealing their thunder with Lunar Lake. They’re efficient enough that the difference in efficiency is far overshadowed by their compatibility story.

Ecosystem support will always go to the incumbent and this would place RISC-V third behind x86 and ARM. macOS did this right by saying there’s only one true way forward. It forces adoption.

snvzz 10/23/2024||
>You need native apps

For native apps, you need users. For users, you need emulation.

It cannot be overstated how important successful x86 emulation is for the migration to anything else to be feasible.

dagmx 10/23/2024||
I think you just ignored the rest of my comment though which specifically addresses why I don’t think just relying on translation is an effective strategy. Users aren’t going to switch to a platform that has lower compatibility when the incumbent has almost as good efficiency and performance.
snvzz 10/23/2024||
>when the incumbent has almost as good efficiency and performance.

The incumbent is just the two companies - Intel and AMD - that can make x86 hardware.

The alternative is the rest of the industry.

Thus having a migration path should be plenty on its own.

Intel and AMD can both join by making RISC-V or ARM hardware themselves. My take is that they will too, eventually, come around. Or they'll just disappear from relevance.

dagmx 10/23/2024||
The incumbent is not just x86 but now ARM as well.

You have to think in network effects. You mention "the rest of the industry" yet ignore that it's mostly ARM, which would make ARM the incumbent.

x86 is the king for Windows. But ARM has massive inroads with mobile, and now desktop with macOS, and servers with Amazon/Nvidia etc.

There's a lot better incentive for software developers to support ARM than RISC-V. It isn't one or the other, but it is a question of resources.

Intel and AMD seem fine turning x86 around when threatened, as can be seen with Lunar Lake and Strix Point. Both have been good enough to steal QC's thunder. You don't think ARM manufacturers will do the same to RISC-V?

TBH most of your arguments for RISC-V adoption seem to start from the position that it’s inevitable AND that competing platforms won’t also improve.

dagmx 10/23/2024|||
So which of Microsoft’s false starts would you take as them taking ARM seriously?

Why do you think they’d take RISC-V any more seriously than their previous attempts at ARM?

There are two fallacies to overcome here.

dietr1ch 10/23/2024||
I think it's already a great thing for RISC-V, imagine things somehow go well for Qualcomm, do you really think they wouldn't prepare a plan B given ARM tried to get them out of the market?
nahnahno 10/23/2024||
I don’t think they have a plan B. Architectures take half a decade of work. Porting from risc-v to arm is not a matter of a backup plan, it’s that of a very costly pivot.
brucehoult 10/23/2024|||
Qualcomm have a Plan B.

This time last year they were all over the RISC-V mailing lists, trying to convince everyone to drop the "C" extension from RVA23 because (basically confirmed by their employees) it was not easy to retrofit mildly variable length RISC-V instructions (2 bytes and 4 bytes) to the Aarch64 core they acquired from Nuvia.

At the same time, Qualcomm proposed a new RISC-V extension that was pretty much ARMv8-lite.

The proposed extension was actually not bad, and could very reasonably be adopted.

Dropping "C" overnight and thus making all existing Linux software incompatible is completely out of the question. RISC-V will eventually need a deprecation policy and procedure -- and the "C" extension could potentially be replaced by something else -- but you wouldn't find anyone who thinks the deprecated-but-supported period should be less than 10 years.

So they'd have to support both "C" and its replacement anyway.

Qualcomm tried to make a case that decoding two instruction widths is too hard to do in a very wide (e.g. 8) instruction decoder. Everyone else working on designs in that space ... SiFive, Rivos, Ventana, Tenstorrent ... said "nah, it didn't cause us any problems". Qualcomm jumped on a "we're listening, tell us more" from Rivos as being support for dropping "C" .. and were very firmly corrected on that.

eqvinox 10/23/2024|||
> Dropping "C" overnight and thus making all existing Linux software incompatible is completely out of the question.

For general purpose Linux, I agree. But if someone makes Android devices and maintains that for RISC-V… that's basically a closed, malleable ecosystem where you can just say "f it, set this compiler option everywhere".

But also, yes, another commenter pointed out that the "C" extension brings some power savings, which you'd presumably want on your Android device…

brucehoult 10/25/2024|||
Qualcomm can do whatever they want with CPUs for Android. I don't care. They only have to convince Google.

But what they wanted to do was strip the "C" extension out of the RVA23 profile, which is (will be) used for Linux too, as a compatible successor to RVA22 and RVA20, both of which include the "C" extension.

If Qualcomm wants to sponsor a different, new, profile series ... RVQ23, say ... for Android then I don't have a problem with that. Or they can just go ahead and do it themselves, without RISC-V International involvement.

wmf 10/23/2024||||
>Dropping "C" overnight and thus making all existing Linux software incompatible is completely out of the question.

Android was never really Linux though.

refulgentis 10/23/2024|||
This is officially too much quibbling. Even if we settled philosophical questions like "Is Android Linux?" and then "If not, would dropping C make RISC-V nonviable?", there isn't actually an Android version that'll run on RISC-V anywhere near on the horizon. Support for it _reversed_: it got pulled 5 months ago.
snvzz 10/23/2024||
>Support _reversed_ for it, got pulled 5 months ago

Cursory research will yield that this was a technicality with no weight in Google's strong commitment to RISC-V Android support.

refulgentis 10/23/2024||
If you trust PR (I don't, and I worked on Android for 7 years until a year ago) - this is a nitpick 5 levels down -- regardless of how you weigh it, there is no Android RISC-V
snvzz 10/23/2024||
[flagged]
refulgentis 10/23/2024||
There is no Android RISC-V. There isn't an Android available to run on RISC-V chips. There is no code to run on RISC-V in the Android source tree, it was all recently actively removed.[1]

Despite your personal feelings about their motivation, these sites were factually correct in relaying what happened to the code, and they went out of their way to say exactly what Google said, respecting Google's claim that they remain committed, with zero qualms.

I find it extremely discomfiting that you are so focused on how the news makes you feel that you're casting aspersions on the people you heard the news from, and ignoring what I'm saying on a completely different matter, because you're keyword matching

I'm even more discomfited that you're being this obstinate about the completely off-topic need for us all to respect Google's strong off-topic statement of support[2] over the fact they removed all the code for it

[1] "Since these patches remove RISC-V kernel support, RISC-V kernel build support, and RISC-V emulator support, any companies looking to compile a RISC-V build of Android right now would need to create and maintain their own fork of Linux with the requisite ACK and RISC-V patches."

[2] "Android will continue to support RISC-V. Due to the rapid rate of iteration, we are not ready to provide a single supported image for all vendors. This particular series of patches removes RISC-V support from the Android Generic Kernel Image (GKI)."

shash 10/23/2024|||
Actually, https://github.com/google/android-riscv64?tab=readme-ov-file

And the mailing list is pretty active too: https://lists.riscv.org/g/sig-android/topics?sidebar=true

refulgentis 10/23/2024||
I don't know why people keep replying as if I'm saying Android isn't going to do RISC-V.

I especially don't understand offering code that predates the removal from tree and hasn't been touched since. Or, a mailing list, where we click on the second link and see a Google employee saying on October 10th "there isn't an Android riscv64 ABI yet either, so it would be hard to have [verify Android runs properly on RISC-V] before an ABI :-)"

That's straight from the horse's mouth. There's no ABI for RISC-V. Unless you've discovered something truly novel that you left out, you're not compiling C that'll run on RISC-V if it makes any system calls.

I assume there's some psychology thing going on where my 110% correct claim that it doesn't run on RISC-V today is transmuted into "lol risc-v doesn't matter and Android has 0 plans"

I thoroughly believe Android will fully support RISC-V sooner rather than later.

snvzz 10/23/2024|||
[flagged]
dzaima 10/23/2024|||
Here are the actual commits that all the fuss was about: https://android-review.googlesource.com/c/kernel/build/+/306... and those at https://android-review.googlesource.com/q/topic:%22ack_riscv...

It's certainly more than just disabling a build type - it's actually removing a decent bit of configuration options and even TODO comments. Then again, it's not actually removing anything particularly significant, and even has a comment of "BTW, this has nothing to do with kernel build, but only related to CC rules. Do we still want to delete this?". Presumably easy to revert later, and might even just be a revert itself.

refulgentis 10/23/2024||||
[flagged]
saagarjha 10/23/2024|||
[flagged]
weebull 10/23/2024|||
Dropping the compressed instructions is also a performance / power issue. That matters to mobile.
eggsome 10/23/2024|||
What do you mean by Plan B? From what you've just said it sounds like their proposal was rejected, so there is no Plan B now?
brucehoult 10/23/2024||
They can roll their sleeves up and do the small amount of work that they tried to persuade everyone else was not necessary. And I'm sure they will have done so.

It's not that hard to design a wide decoder that can decode mixed 2-byte and 4-byte instructions from a buffer of 32 or 64 bytes in a clock cycle. I've come up with the basic schema for it and written about it here and on Reddit a number of times. Yeah, it's a little harder than for pure fixed-width Arm64, but it is massively massively easier than for amd64.

Not that anyone is going that wide at the moment. SiFive's P870 fetched 36 bytes/cycle from L1 icache, but decodes a maximum of 6 instructions from it. Ventana's Veyron v2 decodes 16 bytes per clock cycle into 4-8 instructions (average about 6 on random code).
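The schema isn't spelled out in the post, but the core of any such decoder is resolving, for every 2-byte parcel in the fetch buffer, whether it starts an instruction. A serial Python reference model of that length chain (the hardware flattens it into a parallel prefix computation; the names here are mine, not from any real design):

```python
def parcel_is_32bit(parcel: int) -> bool:
    # RISC-V rule: low two bits == 0b11 marks a 32-bit (or longer) instruction.
    return (parcel & 0b11) == 0b11

def instruction_starts(parcels: list[int]) -> list[int]:
    # Each 16-bit parcel either starts a 2-byte instruction, starts a
    # 4-byte one (consuming the next parcel), or is the tail of the
    # previous 4-byte instruction. Hardware evaluates this chain for all
    # parcels at once; serially it is just a walk:
    starts = []
    i = 0
    while i < len(parcels):
        starts.append(i)
        i += 2 if parcel_is_32bit(parcels[i]) else 1
    return starts

# c.nop (0x0001) followed by addi x0,x0,0 (0x00000013 as little-endian parcels):
print(instruction_starts([0x0001, 0x0013, 0x0000]))  # -> [0, 1]
```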

cesarb 10/23/2024||
> Yeah, it's a little harder than for pure fixed-width Arm64, but it is massively massively easier than for amd64.

For those who haven't read the details of the RISC-V ISA: the first two bits of every instruction tell the decoder whether it's a 16-bit or a 32-bit instruction. It's always in that same fixed place, there's no need to look at any other bit in the instruction. Decoding the length of a x86-64 instruction is much more complicated.

Narishma 10/23/2024||
Why do they use two bits for it? Do they plan to support other instruction lengths in the future?
brucehoult 10/23/2024|||
So that there are 48k combinations available for 2-byte instructions and 1 billion for 4-byte (or longer) instructions. Using just 1 bit to choose would mean 32k 2-byte instructions and 2 billion 4-byte instructions.

Note that ARMv7 uses a similar scheme with two instruction lengths, but uses the first 4 bits of each 2-byte parcel to determine the instruction length. It's quite complex, but the end result is that 7/8 of the 2-byte encodings (56k instructions) are usable, and 1/8 of the 4-byte encodings (512 million instructions).

The IBM System/360 from 1964 through today's z/Systems also uses a 2-bit scheme: 00 means a 2-byte instruction (16k encodings available), 01 or 10 means 4 bytes (2 billion available), and 11 means 6 bytes (64 tera available).
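
A quick back-of-the-envelope check of those opcode-space counts (here "k" is 1024, and "billion"/"tera" are the nearby powers of two):

```python
# RISC-V: 3 of the 4 two-bit prefixes mark 2-byte instructions, 1 marks 4-byte.
assert 3 * 2**14 == 48 * 1024        # 48k two-byte encodings
assert 1 * 2**30 == 1024**3          # ~1 billion four-byte encodings

# A 1-bit split would instead give:
assert 2**15 == 32 * 1024            # 32k two-byte encodings
assert 2**31 == 2 * 1024**3          # 2 billion four-byte encodings

# ARMv7 Thumb-2-style split of the parcel space: 7/8 vs 1/8.
assert 7 * 2**16 // 8 == 56 * 1024   # 56k two-byte encodings
assert 2**32 // 8 == 512 * 1024**2   # 512 million four-byte encodings

# IBM S/360 scheme: 00 -> 2 bytes, 01/10 -> 4 bytes, 11 -> 6 bytes.
assert 2**14 == 16 * 1024            # 16k two-byte encodings
assert 2 * 2**30 == 2 * 1024**3      # 2 billion four-byte encodings
assert 2**46 == 64 * 1024**4         # 64 tera six-byte encodings
```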

cesarb 10/23/2024|||
> Why do they use two bits for it?

To increase the number of 16-bit instructions. Of the four possible combinations of these two bits, one indicates a 32-bit or longer instruction, while the other three are used for 16-bit instructions.

> Do they plan to support other instruction lengths in the future?

They do. Of the eight possible combinations for the next three bits after these two, one of them indicates that the instruction is longer than 32 bits. But processors which do not know any instruction longer than 32 bits do not need to care about that; these longer instructions can be naturally treated as if they were an unknown 32-bit instruction.

mkl 10/23/2024||||
Qualcomm has been working on RISC-V for a while, at outwardly-small scale. It's probably intended as a long-term alternative rather than a ready-to-go plan B. From a year ago: "The most exciting part for us at Qualcomm Technologies is the ability to start with an open instruction set. We have the internal capabilities to create our own cores — we have a best-in-class custom central processing unit (CPU) team, and with RISC-V, we can develop, customize and scale easily." -- https://www.qualcomm.com/news/onq/2023/09/what-is-risc-v-and..., more: https://duckduckgo.com/?q=qualcomm+risc-v&t=fpas&ia=web
hajile 10/23/2024||
Qualcomm pitched a Znew extension for RISC-V that basically removes compressed (16-bit) instructions and adds more ARM64-like stuff. It felt very much like trying to make an easier plan B for if/when they need/want to transition from ARM to RISC-V.

https://lists.riscv.org/g/tech-profiles/attachment/332/0/cod...

betaby 10/23/2024||||
I suppose Google ( + Samsung ) can bear that cost in the context of Android Arm -> Android RISC-V.
snvzz 10/23/2024||||
Qualcomm's been involved with RISC-V for several years now.

If anything, ARM is the plan B that they'll likely end up abandoning.

dagmx 10/23/2024||
It’s a bit much to say their primary product that they’ve done for decades is a plan B. By definition it cannot be a plan B if it’s executed first and is successful.

I think a lot of RISC-V advocates are perhaps a little too over eager in their perception of the landscape.

snvzz 10/23/2024||
Typo. Meant to write "ARM is the plan A that they'll likely end up abandoning".
gjsman-1000 10/23/2024||||
No kidding; and while RISC-V is a massive improvement, I hate to be the wet blanket, but RISC-V will not change signed boot, bootloader restrictions, messy or closed source drivers, carrier requirements, DRM implementation, or other painful day to day paper cuts. Open source architecture != Open source software and certainly != Open source hardware; no matter what the YouTubers think.
Atotalnoob 10/23/2024|||
Until Arm or RISC-V standardizes the bootloader, it's always going to be a big deal for each Arm/RISC-V device added anywhere…
snvzz 10/23/2024||
Relevant RISC-V specs were released years ago and implementations follow them.

I know of no boards that have application processors and yet don't implement SBI. Furthermore, everybody seems to be using the OpenSBI implementation.

ARM and RISC-V are not the same.

snvzz 10/23/2024|||
It would be naive to think that Qualcomm is only starting its RISC-V effort today and from scratch.
blurbleblurble 10/25/2024||
Meanwhile RISC-V slowly but surely picks up momentum.
orev 10/25/2024|
It’s getting extremely tiresome to see RISC-V comments in every thread about this. It’s unnecessary and irrelevant.
blurbleblurble 10/25/2024||
It's arguably the most glaring element of background context I can imagine; there's a reason people are mentioning it. Even though it's not ready to compete right now, in the medium-to-long term it's looking like a flat-out alternative to ARM. ARM wants its slice of the money now because this decade could easily end up being peak ARM. Sell high.
ddingus 10/23/2024||
Damn!

So what happens to the Raspberry Pi?

Edit: OK, following the discussion now. Nothing in the short term, potentially longer term.

lights0123 10/23/2024||
The Raspberry Pi uses Broadcom, not Qualcomm chips. It also uses cores designed by Arm, which are not affected by today's news.
ddingus 10/24/2024||
Yeah this is a total botch for me.

That is what I get for posting tired.

Crosseye_Jack 10/23/2024||
Broadcom (the makers of the chips used on the Pi) did have a bid to acquire Qualcomm back in 2017 [0], but the bid was withdrawn after Trump blocked the deal.

So nothing will happen to the Pi (Arm also has a minority stake in Raspberry Pi).

[0] https://investors.broadcom.com/news-releases/news-release-de...

Joel_Mckay 10/23/2024||
Next week, Qualcomm will likely announce a 64-core RISC-V RVA23 chip.

ARM really shouldn't pursue an aggressive posture with lines outside iOS or Win11 ecosystems. The leverage won't hold a position already fractured off a legacy market. =3

SG- 10/23/2024||
you think they can just flip a RISC-V switch and keep all the performance instantly? I can't really understand the logic from some people here.
mrweasel 10/23/2024|||
People also seem to forget that everything needs to be ported. If you're an Android manufacturer, you're not going to stop shipping phones while waiting for the Android RISC-V port to catch up to ARM, or for RISC-V to get the speed and features of current ARM CPUs. You're going to buy ARM processors from another vendor.

A Windows RISC-V port is going to take even longer; I doubt that Microsoft has one working at anything beyond the research stage, if that.

Getting the RISC-V ecosystem up to par with ARM is going to take years.

If you want to spin this in RISC-V's favor, then yes, forcing a company like Qualcomm to switch would speed things up, but it might also give them a bit of a stranglehold on the platform, in the sense that they'd be the dominant vendor, with all of their own customisations.

Joel_Mckay 10/23/2024||
In theory, the Raspberry Pi Foundation could easily move 3 million 1.8GHz RVA23 boards in one quarter... with 64 cores + DSP ops it wouldn't necessarily need a GPU initially. =3
mrweasel 10/23/2024||
The Raspberry Pi community would probably jump on a RISC-V board, but that doesn't help Qualcomm or its customers.
Joel_Mckay 10/23/2024||
Manufacturers adapt quickly to most architectural changes.

If you are running in a POSIX environment, then porting a build is measured in days given a working bootstrap compiler. RISC-V already has the full GCC toolchain and OS support available for deployment.

We also purchase several vendors' ARM products for deployment. Note, there was a time in history when purchasing even a few million in chips would open custom silicon options.

Given how glitched/proprietary/nondeterministic ARM ops are outside the core compatibility set, it is amazing it was as popular as the current market demonstrates.

Engineers solve problems, and ARM corporation has made themselves a problem. =3

Joel_Mckay 10/23/2024|||
"keep all the performance instantly"

Depends what you mean by performance (most vendors' ARM-accelerated features are never used, for compatibility reasons), as upping the core count with a simpler architecture is a fair trade on wafer space.

i.e. if ARM is using anticompetitive tactics to extort more revenue, that budget is 100% fungible with extra resources. Note, silicon is usually much cheaper than IP licenses.

One can politely ask people to explain things without being rude. Have a wonderful day =3

kmeisthax 10/23/2024|||
Funnily enough Qualcomm tried to persuade RISC-V to let them drop compressed instructions. Presumably because they're trying to crowbar a RISC-V decoder onto the Nuvia design and compressed instructions are breaking it somehow.
Joel_Mckay 10/23/2024||
They should buy the Intel-alumni-founded RISC-V startup, and pour resources into an RVA23-based chip with dual on-chip SDR ASIC sections (they have the IP).

i.e. create a single open-chip solution for mid-tier mobile communication platforms.

They won't do this due to their cellular chip line interests. However, even if it just ran a bare bones Linux OS... it would open up entire markets. =3

delfinom 10/23/2024||
With how poorly Intel is doing, not sure "Intel alumni" is a plus. Lmao.
Joel_Mckay 10/23/2024||
Indeed, the Intel installed base has enough market inertia to last a business cycle or two with AMD.

Even with the recent silicon defects, people will tolerate the garbage as they want the NVIDIA+Intel performance.

Architecturally speaking, there were better options available... just never the equivalent price over performance of consumer grade hardware. =)

chx 10/23/2024||
1. I am fairly sure games and other performance-sensitive apps are using the Android NDK, which is not available for RISC-V.

2. I am fairly sure a competitive RISC-V CPU is not days or weeks but years away.

svnt 10/23/2024|||
> 2. I am fairly sure a competitive RISC-V CPU is not days or weeks but years away.

And chasing a moving target fueled by the largest technology companies on the planet.

Joel_Mckay 10/23/2024||
Tying products to Google's ecosystem is usually financially risky. Not a good long-term strategy for startups. =3
Joel_Mckay 10/23/2024|||
I think it is more of a "chicken and egg" ordering problem.

1. The RISC-V design standard fragmentation issue has been addressed.

2. A reasonable mobile-class SoC will be available for integration after any large production run of the chips.

If ARM forces #2 out of silliness, then it also accelerates #1 in the market.

In general, there is plenty of use-cases even if a chip is not cutting edge. =3

initramfs 10/25/2024|
ARM is owned by an investment firm, SoftBank; it operates kind of like Goldman Sachs. ARM is becoming a chipmaker, just like Intel. But Intel's CHIPS Act grant is more similar to a pre-emptive 2008-era bailout of the banks (TARP) for being "too big to fail" (because it makes defense chips). https://irrationalanalysis.substack.com/p/arms-chernobyl-mom...
alephnerd 10/25/2024|
ARM is not becoming a foundry. Almost no one wants to become a foundry because the margins are too low.

This is why they are subsidized for tens of billions of dollars by countries all over the world.

initramfs 10/25/2024||
I understand ARM is not becoming a foundry (at least anytime soon, which I will explain in a second). First, I said they are becoming a chipmaker like Intel because they have two of the three things needed to make chips: an architecture, and physical core IP (POP) https://www.arm.com/products/silicon-ip-physical/pop-ip. They are making chips at all three foundries, but obviously don't own physical fabs. SoftBank has 46 trillion in assets, with over 57.8 billion in operating income. What's stopping SoftBank from making an offer to buy a majority stake in a Japanese foundry such as Rapidus (2nm) and other EUV equipment makers, such as Lasertec? https://semiwiki.com/forum/index.php?threads/shared-pain-sha... Ultimately, one starts to question whether Qualcomm's interest in producing more consumer laptop chips is really competitive with the offerings from AMD and Intel, and whether this is really the best use of foundry space when that capacity could one day be needed against an amphibious assault on Formosa.

My point is that the commercial lawsuits between Qualcomm and ARM are just one part of a larger geopolitical issue: x86 lost the mobile market 20 years ago, and the only reason Intel is surviving is national security; it could have been bankrupt had it not been propped up by a pre-emptive Defense Production Act. Consumers benefit because now they have the choice between more ARM software and x86 products, but I think that is just a short-term benefit. Eventually the architecture cuts off support for old software, such as 32-bit x86, and now with X86S they are only supporting 64-bit. So in the long term, it's better to have options. WINE was developed because of a fear of repeating the Irish Potato Famine (a monoculture: https://gitlab.winehq.org/wine/wine/-/wikis/Importance-of-Wi...), in economic terms. In other words, just because Intel might not want to sell 32-bit chips anymore doesn't mean others might not want/need to use some application that only exists on one platform (with the engineers all retired and the code lost or unported).

There's an AI bubble: https://www.marketwatch.com/story/the-ai-bubble-is-looking-w... When a number of companies get investments and start to produce chips that add little extra value (slightly faster, lower-power chips across 10 different architectures that all run Windows 11), there is less justification to continue investing in companies that do not produce interesting new hardware, because the end result is that they are being shaped by Windows 11 rather than by a unique feature. Automotive efficiency for in-car apps, sure, but laptops running Oryon that are hard to boot Linux on aren't any more interesting than an x86S processor that can only boot Linux 6.8, etc.