Posted by necubi 4 days ago
Which would not really affect the ecosystem of phones using Qualcomm arm chips, it would just change the margins / market cap of Qualcomm.
Yes, long-term Qualcomm might invest in their own RISC-V implementations, but I don't see a viable business case for Qualcomm to just stop ARM development for the foreseeable future.
Qualcomm is almost certainly ARM's biggest customer. If ARM loses, Qualcomm doesn't have to pay out. If ARM wins, Qualcomm moves to RISC-V and ARM loses even harder in the long-term.
The most likely outcome is that Qualcomm agrees to pay slightly more than they are currently paying, but nowhere near what ARM is demanding. In the meantime, Qualcomm continues having a team work on a RISC-V frontend for Oryon.
ARM Ltd wants to position itself as the ISA. It is highly proprietary of course, but the impression they want to give is that it is "open" and freely available, no lock-in, etc.
This really brings the reality back into focus that ARM controls it with an iron fist, and they're not above playing political games and siding against you if you annoy their favored customers. Really horrible optics for them.
IMO we need to question the premises of the current IP ecosystem. Obviously, the principles of open source are quite the opposite of how ARM licenses IP. (AFAIK, ARM also licenses ready-to-go cores, which is very different from what Q is getting.)
It's easy to see how RISC-V avoids the conflict of interest between owning the ISA and licensing specific implementations.
We’d just get a bunch of proprietary cores which might not even be compatible with each other due to extensions. Companies like Qualcomm would have zero incentives to share their designs with anyone.
ARM is not perfect but it at least guarantees some minimal equal playing field.
> Afaik, ARM also licenses ready-to-go cores
Which is the core of Qualcomm’s business. All their phone chips are based on Cortex. Of course ARM has a lot of incentives to keep it that way, hence this whole thing.
No different than ARM. Apple has matrix extensions that others don't, for example.
The ecosystem (e.g., Linux and OSS) pressure will strongly encourage compatible subsets, however. There is some concern about RISC-V hitting the fragmentation hell that ARM used to suffer from, but at least in the Linux-capable CPU space (i.e., not highly embedded or tiny), a plethora of incompatible cores will probably not happen.
> Companies like Qualcomm would have zero incentives to share their designs with anyone.
ARM cores are also proprietary. All ARM cores are, in fact; you can't get an architectural license from ARM to create an open-source core. With RISC-V you can at least make open cores, and there are some out there.
But opening the ISA is attacking a different level of the stack than the logic and silicon.
Does "fragmentation hell" refer to the age-old problem of massive incompatibility in the Linux codebase, or the more "modern" problem people refer to, which is device trees and old kernel support for peripheral drivers? Because you can rest assured that this majestic process will not be changed at all for RISC-V devices. You will still be fed plenty of junkware that requires unsupported kernels with blobs. The ISA isn't going to change the strategy of the hardware manufacturers.
Everyone has more or less the same access to relatively competitive Cortex and Neoverse cores. As ARM’s finances show that’s not a very good business model so it’s unlikely anyone would do that with RISC-V.
You can make open-source cores, but nobody investing the massive amounts of money and resources required to design high-end CPUs will make them open source. The situation with ARM is of course not ideal, but at least the playing field is somewhat more even.
And yet, that's what Linux did in 1991: they shared the code, lowering the cost of acquiring an operating system. I wouldn't say there is zero incentive, but the incentive is certainly lower without a profitable complementary hardware implementation. With a royalty-free license granting the manufacturer or fab designer "mask rights", a chip can be sold for less than a proprietary-ISA competitor's, with a small margin captured in the difference.
Even if Qualcomm makes their own RISC-V chips that are somehow incompatible with everyone else's, they can't advertise that it's RISC-V due to the branding guidelines. They should know them because they are on the board as a founding top tier member.
Unless it’s a superset of RISC-V. They can still have proprietary extensions
It really doesn't.
I agree an actual open ISA is far preferable, ARM is not much different than x86.
No, it didn’t. It ruled that the specific copying and use of that Google did with Java in Android was fair use, but did not rule anything as blanket as “you can copy an API as long as you re-implement it”.
1. Breyer's majority statement presupposes APIs are copyrightable, without declaring it or offering any kind of test on what's acceptable.
There is no clear preexisting national case law on API copyrightability, and it is unclear how other, more general case law would apply to APIs categorically (or even if it would apply a categorical copyrightable-or-not rule), so, no, it's not “ok”, it's indeterminate.
Edit: none of the large companies (except Oracle) are foolish enough to pursue a rule that declares APIs as falling under copyright, because they all copy them. In Google v. Oracle, Microsoft filed briefs supporting both sides after seemingly changing their mind: in the lower courts, they submitted an amicus brief supporting Oracle, then when it got to SCOTUS, they filed one supporting Google, stating how disastrous such a rule would be to the entire industry.
Qualcomm has 50,000 employees, $51 billion assets and $35 billion revenue https://en.wikipedia.org/wiki/Qualcomm
ARM Holdings has 7000 employees, $8 billion assets and $3 billion revenue https://en.wikipedia.org/wiki/Arm_Holdings
I think "slightly bigger" is an understatement.
Or put another way -- as they said in gawker[1] -- if you're in a lawsuit with a billionaire you better have a billionaire on your side or you're losing.
In this case -- it's unlikely that Qualcomm will have quite enough juice to just smoosh ARM, in the way they could smoosh a company 1/100th the size of ARM (not just 1/10th), regardless of the merits of the case.
I'm not really sure what you're responding to. It's got nothing to do with size whether or not something is fair, it's what is in the contract. None of us know exactly what's there so if it becomes disputed then a court is going to have to decide what is fair.
But that was entirely not the point of my comment though. I'm talking about how corporations looking to make chips or get into the ecosystem view ARM and its behavior with its architecture and licensing. ARM Ltd might well be in the right here by the letter of their contracts, but canceling a customer's license (ostensibly if not actually siding with another customer who is in competition with the first) is just not a good look for the positioning they are going for.
I'm not saying that's what happened or that ARM did not try to negotiate and QC was being unreasonable and the whole thing has nothing at all to do with Apple, or that ARM had any better options available to them. Could be they were backed into a corner and couldn't do anything else. I don't know. That doesn't mean it's not bad optics for them though.
Traditionally they've been known as a tech company that employs more lawyers than engineers, if that tells you anything.
I'd probably go up against IBM or Oracle before I tugged on Qualcomm's cape. Good luck to ARM, they'll need it.
I think long term is doing a lot of heavy lifting here. How long until:
1. Qualcomm develops a chip that is competitive in performance with ARM
2. The entire software world is ready to recompile everything for RISC-V
Unless you are Apple I see such a transition taking a decade easily.
Virtually all high performance processors these days operate on their own internal “instructions”. The instruction decoder at the very front of the pipeline that actually sees ARM or RISC-V or whatever is a relatively small piece of logic.
If Qualcomm were motivated, I believe they could swap ISAs relatively easily on their flagship processors, and the rest of the core would be the same level of performance that everyone is used to from Qualcomm.
This isn’t the old days when the processor core was deeply tied to the ISA. Certainly, there are things you can optimize for the ISA to eke out a little better performance, but I don’t think this is some major obstacle like you indicate it is.
> 2. The entire software world is ready to recompile everything for RISC-V
#2 is the only sticking point. That is ARM’s only moat as far as Qualcomm is concerned.
Many Android apps don’t depend directly on “native” code, and those could potentially work on day 1. With an ARM emulation layer, those with a native dependency could likely start working too, although a native RISC-V port would improve performance.
If Qualcomm stopped making ARM processors, what alternatives are you proposing? Everyone is switching to Samsung or MediaTek processors?
If Qualcomm were switching to RISC-V, that would be a sea change that would actually move the needle. Samsung and MediaTek would probably be eager to sign on! I doubt they love paying ARM licensing fees either.
But, all of this is a very big “if”. I think ARM is bluffing here. They need Qualcomm.
Why not? MediaTek is very competitive these days.
It would certainly perform better than a RISC-V decoder slapped onto a core designed for ARM having to run emulation for games (which is pretty much the main reason why you need a lot of performance on your phones).
Adopting RISC-V is also a risk for phone producers like Samsung. How much of their internal tooling (e.g. diagnostics, build pipelines, testing infrastructure) is built for ARM? How much will performance suffer, and how much will customers care? Why take that risk (in the short/medium term) instead of just using their own CPUs (they did in some generations) or using MediaTek (many producers have experience with them already)?
Phone producers will be happy to jump to RISC-V over the long term given the right incentives, but I seriously doubt they will be eager to transition quickly. All risks, no benefits.
You're talking essentially about microcode; this has been the case for decades, and isn't some new development. However, as others have pointed out, it's not _as_ simple as just swapping out the decoder (especially if you've mixed up a lot of decode logic with the rest of the pipeline). That said, it's happened before and isn't _impossible_.
On a higher level, if you listen to Keller, he'll say that the ISA is not as interesting - it's just an interface. The more interesting things are the architecture, micro-architecture and as you say, the microcode.
It's possible to build a core with comparable performance - it'll vary a bit here and there, but it's not that much more difficult than building an ARM core for that matter. But it takes _years_ of development to build an out-of-order core (even an in-order takes a few years).
Currently, I'd say that in-order RISC-V cores have reached parity. Out of order is a work in progress at several companies and labs. But the chicken-and-egg issue here is that in-order RISC-V cores have ready-made markets (embedded, etc) and out of order ones (mostly used only in datacenters, desktop and mobile) are kind of locked in for the time being.
> Many Android apps don’t depend directly on “native” code, and those could potentially work on day 1.
That's actually true, but porting Android is a nightmare (not because it's hard, but because the documentation on it sucks). Work has started, so let's see.
> With an ARM emulation layer, those with a native dependency could likely start working too, although a native RISC-V port would improve performance.
I wonder what the percentage here is... Again, I don't think recompiling for a new target is necessarily the worst problem here.
> You're talking essentially about microcode; this has been the case for decades, and isn't some new development.
Microcode is much less used nowadays than in the past. For instance, several common desktop processors have only a single instruction decoder capable of running microcode, with the rest of the instruction decoders capable only of decoding simpler non-microcode instructions. Most instructions on typical programs are decoded directly, without going through the microcode.
> However, as others have pointed out, it's not _as_ simple as just swapping out the decoder
Many details of an ISA extend beyond the instruction decoder. For instance, the RISC-V ISA mandates specific behavior for its integer division instruction, which has to return a specific value on division by zero, unlike most other ISAs which trap on division by zero; and the NaN-boxing scheme it uses for single-precision floating point in double-precision registers can be found AFAIK nowhere else. The x86 ISA is infamous for having a stronger memory ordering than other common ISAs. Many ISAs have a flags register, which can be set by most arithmetic (and some non-arithmetic) instructions. And that's all for the least-privileged mode; the supervisor or hypervisor modes expose many more details which differ greatly depending on the ISA.
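To make the division example concrete, here is a toy Python model (not production code) of the RV64 DIV/REM edge cases as the RISC-V unprivileged spec defines them; any core built for a trapping ISA would need this behavior wired into its divider, not just a new decoder:

```python
# Toy model of RV64 signed DIV/REM edge-case semantics per the RISC-V
# unprivileged spec: integer division never traps.
XLEN = 64
INT_MIN = -(1 << (XLEN - 1))  # most negative 64-bit value

def rv_div(rs1: int, rs2: int) -> int:
    if rs2 == 0:
        return -1                 # divide by zero: all bits set, no trap (x86 raises #DE here)
    if rs1 == INT_MIN and rs2 == -1:
        return INT_MIN            # signed overflow: quotient wraps to INT_MIN
    q = abs(rs1) // abs(rs2)      # truncate toward zero, unlike Python's floor division
    return q if (rs1 < 0) == (rs2 < 0) else -q

def rv_rem(rs1: int, rs2: int) -> int:
    if rs2 == 0:
        return rs1                # remainder by zero: the dividend, no trap
    if rs1 == INT_MIN and rs2 == -1:
        return 0
    return rs1 - rv_div(rs1, rs2) * rs2

print(rv_div(7, 0))   # -1
print(rv_rem(7, 0))   # 7
print(rv_div(-7, 2))  # -3 (truncated toward zero)
```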
All quite true, and to that, add things like cache hints and other hairy bits in an actual processor.
But Qualcomm have already been working on RISC-V for ages so I wouldn't be too surprised if they already have high performance designs in progress.
That makes a lot of sense. RISC-V is really not at all close to being at parity with ARM. ARM has existed for a long time, and we are only now seeing it enter into the server space, and into the Microsoft ecosystem. These things take a lot of time.
> I still think the scope of work would be relatively small
I'm not so sure about this. Remember that an ISA is not just a set of instructions: it defines how virtual memory works, what the memory model is like, how security works, etc. Changes in those things percolate through the entire design.
Also, I'm going to go out on a limb and claim that verification of a very high-powered RISC-V core that is going to be manufactured in high-volume is probably much more expensive and time-consuming than the case for an ARM design.
edit: I also forgot about the case with Qualcomm's failed attempt to get code size extensions. Using RVC to approach parity on code density is expensive, and you're going to make the front-end of the machine more complicated. Going out on another limb: this is probably not unrelated to the reason why THUMB is missing from AArch64.
Why do you say this?
- People who have been working with spec and technology for decades
- People who have implemented ARM machines in fancy modern CMOS processes
- Stable and well-defined specifications
- Well-understood models, tools, strategies, wisdom
I'm not sure how much of this exists for you in the RISC-V space: you're probably spending time and money building these things for yourself.
And there are already some companies specializing in supplying this market. They consistently present at RISC-V Summit.
Raspberry Pi RP2350 already ships with ARM and RISC-V cores. https://www.raspberrypi.com/products/rp2350/
It seems that the RISC-V cores don't take much space on the chip: https://news.ycombinator.com/item?id=41192341
Of course, microcontrollers are different from mobile CPUs, but it's doable.
What is being discussed is taking an ARM design and modifying it to run RISC-V, which is not the same thing as what Raspberry Pi has done and is not as simple as people are implying here.
Apple at least has full control over the hardware stack (Qualcomm does not, as they only sell chips to others).
Most OEMs don’t have much hardware secret sauce besides maybe cameras these days. The biggest OEMs probably have more hardware secret sauce, but they also should have correspondingly more software engineers who know how to write hardware drivers.
If Qualcomm moved their processors to RISC-V, then Qualcomm would certainly provide RISC-V drivers for their GPUs, their cellular modems, their image signal processors, etc. There would only be a little work required from Qualcomm’s clients (the phone OEMs) like making sure their fingerprint sensor has a RISC-V driver. And again, if Qualcomm were moving… it would be a sea change. Those fingerprint sensor manufacturers would absolutely ensure that they have a RISC-V driver available to the OEMs.
But, all of this is very hypothetical.
It's weird af that Geerling ignores Nvidia. They have a line of ARM-based SBCs with GPUs from Maxwell to Ampere. They have full software support for OpenGL, CUDA, etc. For the price of an RPi 5 + discrete GPU, you can get a Jetson Orin Nano (8 GB RAM, 6 A78 ARM cores, 1024 Ampere cores), all in a much better form factor than a Pi + PCIe hat and graphics card.
I get the fun of doing projects, but if what you're interested in is a working ARM based system with some level of GPU, it can be had right now without being "in the shop" twice a week with a science fair project.
“With the PCI Express slot ready to go, you need to choose a card to go into it. After a few years of testing various cards, our little group has settled on Polaris generation AMD graphics cards.
Why? Because they're new enough to use the open source amdgpu driver in the Linux kernel, and old enough the drivers and card details are pretty well known.
We had some success with older cards using the radeon driver, but that driver is older and the hardware is a bit outdated for any practical use with a Pi.
Nvidia hardware is right out, since outside of community nouveau drivers, Nvidia provides little in the way of open source code for the parts of their drivers we need to fix any quirks with the card on the Pi's PCI Express bus.”
Reference = https://www.jeffgeerling.com/blog/2024/use-external-gpu-on-r...
I’m not in a position to evaluate his statement vs yours, but he’s clearly thought about it.
https://opensource.googleblog.com/2023/10/android-and-risc-v...
Maybe that was the case back then, too, and helped with software availability?
No, Android apps ship the original bytecode which then gets compiled (if at all) on the local device. Though that doesn't change the result re compatibility.
However – a surprising number of apps do ship native code, too. Of course especially games, but also any other media-related app (video players, music players, photo editors, even my e-book reading app) and miscellaneous other apps, too. There, only the original app developer can recompile the native code to a new CPU architecture.
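There's a cheap way to see which camp a given app falls into: an APK is just a zip, and native code lives under `lib/<abi>/`. A minimal sketch (the helper name and the sample entry list are made up for illustration):

```python
import zipfile  # only needed if you scan a real APK

# Hypothetical helper: given the entry names of an APK, report which CPU
# ABIs it ships native code for. An app with no lib/<abi>/*.so entries is
# pure bytecode and is ABI-independent.
def native_abis(entry_names):
    abis = set()
    for name in entry_names:
        parts = name.split("/")
        if len(parts) >= 3 and parts[0] == "lib" and name.endswith(".so"):
            abis.add(parts[1])
    return sorted(abis)

# For a real file: native_abis(zipfile.ZipFile("app.apk").namelist())
entries = [
    "classes.dex",
    "lib/arm64-v8a/libfoo.so",
    "lib/armeabi-v7a/libfoo.so",
    "res/layout/main.xml",
]
print(native_abis(entries))  # ['arm64-v8a', 'armeabi-v7a']
```

An app whose list comes back empty would likely run on RISC-V day 1; one with only ARM ABIs needs the developer to recompile (or an emulation layer).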
Google Play Cloud Profiles is what I was thinking of, but I see it only starts “working” a few days after the app starts being distributed. And maybe this is merely a default PGO profile, and not a form of AOT in the cloud. The document isn’t clear to me.
https://developer.android.com/topic/performance/baselineprof...
If that's true, then what is arm licensing to Qualcomm? Just the instruction set or are they licensing full chips?
Sorry for the dumb question / thanks in advance.
In the past, Qualcomm designed their own CPU cores (called Kryo) for smartphone processors, and just made sure they were fully compliant with ARM’s instruction set, which requires an Architecture License, as opposed to the simpler Technology License for a predesigned off the shelf core. Over time, Kryo became “semi-custom”, where they borrowed from the off the shelf designs, and made their own changes, instead of being fully custom.
These days, their smartphone processors have been entirely based on off the shelf designs from ARM, but their new Snapdragon X Elite processors for laptops include fully custom Oryon ARM cores, which is the flagship IP that I was originally referencing. In the past day or two, they announced the Snapdragon 8 Elite, which will bring Oryon to smartphones.
2 - thank you again for sharing your eink hacking project!
There are multiple approaches here. There's this tendency for each designer to think their own way is the best.
That said, licensing an instruction set seems strange. With very different internal implementations, you'd expect instructions and instruction patterns in a licensed instruction set to have pretty different performance characteristics on different chips leading to a very difficult environment to program in.
If you look at the incumbent ISAs, you'll find that most of the time ISA and microarchitecture were intentionally decoupled decades ago.
This is only true if the application is written purely in Java/Kotlin with no native code. Unfortunately, many apps do use native code. Microsoft identified that more than 70% of the top 100 apps on Google Play used native code at a CppCon talk.
>I think ARM is bluffing here. They need Qualcomm.
Qualcomm's survival is dependent on ARM. Qualcomm's entire revenue stream evaporates without ARM IP. They may still be able to license their modem IP to OEMs, but not if their modem also used ARM IP. It's only a matter of time before Qualcomm capitulates and signs a proper licensing agreement with ARM. The fact that Qualcomm's lawyers didn't do their due diligence to ensure that Nuvia's ARM Architecture licenses were transferable is negligent on their part.
Aside from the philosophy, lots of practical work has been done and is ongoing. On the systems level, there has already been massive ongoing work. Alibaba for example ported the entirety of Android to RISC-V then handed it off to Google. Lots of other big companies have tons of coders working on porting all kinds of libraries to RISC-V and progress has been quite rapid.
And of course, it is worth pointing out that an overwhelming majority of day-to-day software is written in managed languages on runtimes that have already been ported to RISC-V.
It redirects calls to x86 libraries to native RISC-V versions of the library.
Although from Google's point of view the NDK's only purpose is enabling native methods, reuse of C and C++ libraries, games, and real-time audio, from the point of view of others, it is how they sneak Cordova, React Native, Flutter, Xamarin, ... into Android.
68k -> PPC -> x86 -> ARM, with the 64 bit transition you mixed in there for good measure (twice!).
Has any other consumer company pulled off a full architecture switch? Companies pulled off leaving Alpha and SPARC, but those were servers, which have a different software landscape.
The ARM transition wasn’t strictly necessary like the last ones. It had huge benefits for them, so it makes sense, but they also knew what they were doing by then.
In your examples (which are great) Intel wasn’t going to die. They had backups, and many of those seem guided more by business goals than a do-or-die situation.
I wonder if that’s part of why they failed.
But the most important part in making the transitions work is probably that, in all of these cases, the typical final user didn't even notice. Yes, a lot of Hackernews-like people noticed, as they had to recompile some of their programs. But most people :tm: didn't. They either used App Store apps, which were fixed ~immediately, or Rosetta made everything runnable, even if performance suffered.
But that's pretty much the requirement you have: you need to be able to transition ~all users to the new platform with ~no user work, even without most vendors doing anything. Intel could never provide that, or even aim for it. So they basically have to either a) rip their market in pieces or b) support the "deprecated" ISA forever.
I think a very important part was that even with the Rosetta overhead, most x86 programs were faster on the m1 than on the machines which it would have been replacing. It wasn’t just that you could continue using your existing software with a perf hit; your new laptop actually felt like a meaningful upgrade even before any of your third party software got updated.
But they weren’t going to be left in the performance dust like the last times. Their chip supplier wasn’t going to stop selling chips to them.
They would have likely had to give up on how thin their laptops were, but they could have continued on just fine.
I do think the ARM transition wasn’t strictly good, it let them stay thin and quiet and cooler. They got economies of scale with their phone chips.
But it wasn’t necessary to the degree the previous ones were.
That’s a total typo I didn’t catch in time. I’m not sure what I tried to type, but I thought the transition was good. They didn’t have to but I’m glad they did.
Considering the commercial failure of these efforts, I might disagree
Done. Qualcomm is currently gunning for Intel.
2. The entire software world is ready to recompile everything for RISC-V
Android phones use a virtual machine which is largely ported already. Linux software is largely already ported.
But ARM and RISC-V are relatively similar and it's easy to add custom instructions to RISC-V to make them even more similar if you want so you could definitely do something like Rosetta.
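To illustrate why the "relatively similar" claim is plausible, here's a toy sketch of a Rosetta-style static translation for simple data-processing instructions, which map almost 1:1 between the two ISAs. Everything here (the register map, the handful of supported ops) is illustrative only; a real translator must also handle condition flags, memory ordering, SIMD, and much more:

```python
# Toy ARM64 -> RISC-V instruction translation for register-register ops.
# The register mapping below is an arbitrary demo choice, not a real ABI.
REG_MAP = {f"x{i}": f"x{i + 5}" for i in range(8)}
OP_MAP = {"add": "add", "sub": "sub", "and": "and", "orr": "or", "eor": "xor"}

def translate(insn: str) -> str:
    """Translate one ARM64-style register-register instruction."""
    op, operands = insn.split(None, 1)
    regs = [r.strip() for r in operands.split(",")]
    return OP_MAP[op] + " " + ", ".join(REG_MAP[r] for r in regs)

print(translate("add x0, x1, x2"))  # add x5, x6, x7
print(translate("eor x3, x3, x4"))  # xor x8, x8, x9
```

The hard parts of real binary translation are exactly what this toy omits, which is why "custom instructions to make them even more similar" (e.g. for flags) is attractive.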
It's an investment with a cost and a payoff like any other investment.
Most of the Android ecosystem already runs on a VM, Dalvik or whatever it's called now. I'm sure Android RISC-V already runs somewhere and I don't see why it would run any worse than on ARM as long as CPUs have equal horsepower.
That’s what Oryon is, in theory.
This would suggest that RISC-V is starting from scratch.
Yet in reality it is well underway; RISC-V is rapidly growing the strongest ecosystem.
Qualcomm is more or less a research company, the main cost of their business is paying engineers to build their modems/SoCs/processors/whatever.
They have been working with ARM for the last, I don't know, 20 years? Even if they manage to switch to RISC-V, if each employee takes a performance hit of like 15% for 2-3 years, this ends up costing billions of dollars, because you have to hire more people or lose speed.
If corporate would force me to work with idk Golang instead of TypeScript I could certainly manage to do so, but I would be slower for a while, and if you extrapolate that on an entire company this is big $$.
Yes and 9 women can make a baby in 1 month :)
Which is a 9x output.
Production and development requires multiple parties. This mythical man month stuff is often poorly applied. Many parts of research and development need to be done in parallel.
I think the most evil thing to do would be to switch places: TS for backend, Go for frontend. It can certainly work though!
But I like to imagine the Web frontend made in Go, compiled to WASM. Would be a fun project, for sure.
Not even close. Android OEMs can easily switch to the MediaTek 9400, which delivers the same performance as Qualcomm's high-end mobile chip at a significantly reduced price, or even the Samsung Exynos. Qualcomm, on the other hand, has everything to lose, as most of their profits rely on sales of high-end Snapdragon chips to Android OEMs.
Qualcomm thought they were smart by trying to use the Nuvia ARM design license, which was not transferable, as part of their acquisition instead of doing the proper thing and negotiating a design license with ARM. Qualcomm is at the mercy of ARM as ARM has very many revenue streams and Qualcomm does not. It's only a matter of time before Qualcomm capitulates and does the right thing.
I'm sure there are folks like SiFive that have much of this, but how is it competitively I don't know, and how the next snapdragon would compete if even one of those areas is lacking... Interesting times.
Around 30-40% of Android apps published on the Play Store include native binaries. Such apps need to be recompiled for RISC-V, otherwise they won't run. Neither Qualcomm nor Google can do that, because they don't have the source code for these apps.
It’s technically possible to emulate ARMv8 on top of RISC-V, however doing so while keeping the performance overhead reasonable is going to be insanely expensive in R&D costs.
Another obstacle: even if Qualcomm develops an awesome emulator / JIT compiler / translation layer, I'm not sure the company is in a position to ship that thing to market. Unlike Apple, Qualcomm doesn't own the OS. Such an emulator would require extensive support in the Android OS, and I'm not sure Google will be happy supporting a huge piece of complicated third-party software as part of their OS.
P.S. And also there’re phone vendors who actually buy chips from Qualcomm. They don’t want end users to complain that their favorite “The Legendary Cabbage: Ultimate Loot Garden Saga” is lagging on their phone, while working great on a similar ARM-based Samsung.
Yeah, for the upcoming/already happening 64-bit-only transition (now that Qualcomm is dropping 32-bit support from their latest CPU generations), Google has decided to go for a hard cut-off, i.e. old apps that are still 32-bit-only simply won't run anymore.
Though from what I've heard, some third party OEMs (I think Xiaomi at least?) still have elected to ship a translation layer for their phones at the moment.
The idea was obviously an attempt at making it as easy as possible to replace ARM with RISC-V without having to rework much of the core.
https://lists.riscv.org/g/tech-profiles/attachment/332/0/cod...
But, by now, it is expected that Qualcomm's RISC-V designs have been re-aligned to match the reality that Qualcomm does not control the standard.
On the in-order side, I can see on-par performance with the ARM A5x series quite easily.
SiFive claims a SPECint2006 score of > 12/GHz, meaning that it'll get a performance of about 24 at 2 GHz or ~31 at 2.6 GHz, making it on par with an A76 in terms of raw performance.
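The parent's numbers follow from a simple linear-with-clock assumption (a simplification that ignores memory-bound effects, which don't scale with frequency):

```python
# SPECint2006 estimate: a per-GHz score scaled linearly by clock frequency.
def specint_estimate(score_per_ghz: float, freq_ghz: float) -> float:
    return score_per_ghz * freq_ghz

print(specint_estimate(12, 2.0))            # 24.0
print(round(specint_estimate(12, 2.6), 1))  # 31.2, roughly A76-class
```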
question: isn't arm somewhat apple?
...Advanced RISC Machines Limited and structured as a joint venture between Acorn Computers, Apple, and VLSI Technology.
Not for decades. Apple sold its stake in ARM when Steve Jobs came back, they needed the money to keep the company going.
That is a HUGE cost!
You think Qualcomm is larger than Apple?
There are nearly 2B smartphones sold each year and only 200M laptops, so Apple's 20M laptop sales are basically a rounding error and not worth considering.
As 15-16% of the smartphone market, Apple is generally selling around 300m phones. I've read that Qualcomm is usually around 25-35% of the smartphone market which would be 500-700M phones.
But Qualcomm almost certainly includes ARM processors in their modems which bumps up those numbers dramatically. Qualcomm also sells ARM chips in the MCU/DSP markets IIRC.
Apple did not help them design the CPU/Architecture, that was a decade of design and manufacturing already, they VC'ed the independence of the CPU. The staffing and knowledge came from Acorn.
I believe they had a big hand in ARM64. Though best reference I can find right now is this very site: https://news.ycombinator.com/item?id=31368489
They had the Newton project, found ARM did a better job than the other options, but there were a few missing pieces. They funded the spun out project so they could throw ARM a few new requirements for the CPU design.
As a "cofounder" of ARM, they didn't contribute technical experience and the architecture did already exist.
Qualcomm pays them.
> Qualcomm moves to RISC-V
That’s like chopping your foot off to save on shoes…
It would take years for Qualcomm to develop a competitive RISC-V chip. Just look at how long it took them to design a competitive ARM core…
Of course they could use this threat (even if it's far-fetched) to negotiate a somewhat more favorable settlement.
what about Apple?
This is about Nuvia.
https://www.qualcomm.com/products/mobile/snapdragon/smartpho...
810 had a 64-bit core designed by ARM
https://www.qualcomm.com/products/mobile/snapdragon/smartpho...
820/821 had a 64-bit Kryo custom core designed by Qualcomm
https://www.qualcomm.com/products/mobile/snapdragon/smartpho...
After that it was all cores from ARM. The custom CPU team worked on their server chip before getting cancelled, and most of the team went to Microsoft.
865 (2019) has Cortex-A77 + Kryo 4xx Silver; 888 (2020) uses Cortex-X1 + Cortex-A78 + Cortex-A55 cores.
Snapdragon 865 has standard Arm cores. The same is true for the older Snapdragon 855, Snapdragon 845 and Snapdragon 835, which I am using or I have used in my phones.
The claim of Qualcomm that those cores have been "semi-custom", is mostly BS, because the changes made by Qualcomm to the cores licensed from Arm have been minimal.
https://www.qualcomm.com/products/mobile/snapdragon/smartpho...
I worked on it in 2014. The table does have 808 listed. That may have been a lower end version.
Qualcomm got caught being late. They were continuing development of custom 32-bit cores and Apple came out with a 64-bit ARM core in the iPhone. The Chief Marketing Officer of Qualcomm called it a gimmick but Apple was a huge customer of Qualcomm's modems. Qualcomm shoved him off to the side for a while.
https://www.cnet.com/tech/mobile/qualcomm-gambit-apple-64-bi...
Because Q's custom 64-bit CPU was not ready, the stop-gap plan was to license a 64-bit RTL design from ARM and use that in the 810. It also had overheating problems, but that's a different issue. There were a lot of internal politics going on at Q over the custom cores and server chips that ended up in layoffs.
Q is investing over $1 billion into RISC-V.
ARM is fucked long term. Sure, Qualcomm themselves are no angel. But the absurdities of this case are basically making ARM toxic to any serious long-term investment. Especially when ARM is in Apple's pocket and ARM isn't releasing any chip designs competitive with Apple's chips, while Apple gets free rein to do as they want. Basically a permanent handicap on ARM chip performance.
Qualcomm have been acting badly for years, including attempting to turn RISC-V into Arm64 but without the restrictions. You cannot trust people that behave like this, where everything they do is important and everything you do is worthless.
The funny thing is Qualcomm do have some wildly impressive tech which is kept secret despite being so ubiquitous, but they have had persistent execution failures at integrations which lead to them throwing their partners under the bus.
Qualcomm have the same sort of corporate difficulty you see at Boeing, only in a less high profile sector.
I found it telling that every single smartphone vendor refused to license Qualcomm's proprietary tech for smartphone to satellite messaging.
> In a statement given to CNBC, Qualcomm says smartphone makers “indicated a preference towards standards-based solutions” for satellite-to-phone connectivity
https://arstechnica.com/gadgets/2023/11/qualcomm-kills-its-c...
Usually that gun is the latest wireless standard like 4g or 5g.
It was their now mostly irrelevant CDMA patents that Qualcomm used as a weapon against device makers.
> Many carriers (such as AT&T, UScellular and Verizon) shut down 3G CDMA-based networks in 2022 and 2024, rendering handsets supporting only those protocols unusable for calls, even to 911.
https://en.m.wikipedia.org/wiki/Code-division_multiple_acces...
In my opinion, Qualcomm's abuse of their CDMA patents is the reason that zero device makers were willing to get on board with a new Qualcomm proprietary technology.
Apple's modem is said to be shipping this coming spring in the newest iPhone SE iteration.
Google's Pixel phone lineup has used Samsung's modems for generations now.
Are you saying Qualcomm doesn't have competition because Qualcomm makes the best modem, and others making worse products can't compete?
The Exynos chipset is cursed, Samsung only ships it in markets where performance is a lower priority than price, hence not shipping Exynos in the US outside the Google Pixel whitelabel relationship.
I thought it was primarily because of some patent/royalty dispute with Qualcomm?
And/or it not having support for CDMA which was not relevant outside of the US. Now that it’s not an issue I wouldn’t be surprised if Samsung would transition to Exynos eventually (they are already apparently selling some models).
Exynos 5G New Radio chipsets got really bad with the Pixel 6 series, where the phone randomly loses cell signal and WiFi at the same time in areas with strong signal, and the only way to get back online is to put the phone in airplane mode or reboot the phone, sometimes neither works though.
Means 'no'.
This flew past me, do you have a link?
> This time last year they were all over the RISC-V mailing lists, trying to convince everyone to drop the "C" extension from RVA23 because (basically confirmed by their employees) it was not easy to retrofit mildly variable length RISC-V instructions (2 bytes and 4 bytes) to the Aarch64 core they acquired from Nuvia. At the same time, Qualcomm proposed a new RISC-V extension that was pretty much ARMv8-lite.
This is enough of a philosophy change to break existing RISC-V software, and so is purely motivated by a desire to clone IP they supposedly licensed as honest brokers.
This means that if a RISC-V core reads a 16 byte block from instruction memory, it only has to look at 8 pairs of bits. This would require 8 NAND gates plus 8 more NANDs to ignore the top half of any 32 bit instructions. That is 4x(8+8)=64 transistors.
The corresponding circuit for x86 would be huge.
But note that this just separates the instructions. You still have to decode them. Most simple RISC-V implementations have a circuit that transforms each 16 bit instruction into the corresponding 32 bit one, which is all the rest of the processor has to deal with. Here are the sizes of some such circuits:
Hazard 3: 733 NANDs (used in the Raspberry Pi RP2350)
SERV: 532 NANDs (serial RISC-V)
Revive: 506 NANDs (part of FPGAboy)
You would need 8 such circuits to handle the maximum 16 bit instructions in a 16 byte block, and then you would need more circuits to decode the resulting 32 bit instructions. So the 16 NANDs to separate the variable length instructions is not a problem like it is for other ISAs.
The problem with 16 bit instructions for small RISC-V implementations is that now 32 bit instructions will not always be aligned with 32 bit words. Having to fetch an instruction from two separate words adds circuits that can be a large fraction of a small design.
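The length rule described above can be sketched in a few lines. Assuming only 16- and 32-bit encodings (as the RVA profiles mandate), a fetched block splits into instructions by checking just the two low bits of each parcel; the function name and example bytes here are my own illustration, not from any real decoder:

```python
def split_riscv_parcels(block: bytes):
    """Split a byte block at RISC-V instruction boundaries.

    With only 16/32-bit encodings, RISC-V marks the length in the two
    lowest bits of each instruction: 0b11 means a 32-bit instruction,
    anything else a 16-bit compressed one. Instructions are stored
    little-endian, so those bits sit in the first byte of a parcel.
    """
    insns = []
    i = 0
    while i + 1 < len(block):
        if block[i] & 0b11 == 0b11:   # low bits 11 -> 32-bit instruction
            if i + 4 > len(block):
                break                  # instruction spills into next block
            insns.append(block[i:i + 4])
            i += 4
        else:                          # 16-bit compressed instruction
            insns.append(block[i:i + 2])
            i += 2
    return insns

# c.nop (0x0001) followed by addi x0,x0,0 (0x00000013)
parcels = split_riscv_parcels(bytes.fromhex("0100" + "13000000"))
print([p.hex() for p in parcels])  # → ['0100', '13000000']
```

In hardware this check runs on all 8 parcel positions in parallel, which is exactly why only a handful of NAND gates are needed to separate instructions.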
So processing a fetched 16 bytes requires doing 16 partial decodes, each of them non-trivial: skip over unordered prefix bytes; parse the fixed prefix before the opcode (most of the time the opcode is the first byte, but 0x0F, 0xC4, 0xC5, and more for rarer instructions and AVX-512/APX require extra skipping); index a LUT by the opcode (with a different table for the 0x0F case, and possibly more; the length may also depend on the byte after the opcode, and for some opcodes it even depends on whether a specific prefix byte was given, which Intel just assumes doesn't happen and handles slowly if it does); and if the opcode needs ModR/M, parse that from the next byte (another variable-length sequence).
ARM Ltd is basically rent-seeking on keeping the ISA proprietary, same as Intel and AMD do. Sure, it's a highly specialized and very impressive task to make an ISA, but not to the value of extracting hundreds of millions of dollars every year. If ARM gets upset because someone wanted fixed-length instructions standardized in RISC-V, that's really the height of hypocrisy.
ARM never said that.
>but not to the value of extracting hundreds of millions of dollars ever year.
ARM makes money on selling ARM design, not from their ISA licensing.
Someone did.
> ARM makes money on selling ARM design, not from their ISA licensing.
That's not correct, they make money from ISA licensing. That's called their architectural license, and that is what is being canceled here.
Qualcomm had one type of ARM license, granting them one type of IP at one royalty rate.
A startup called "Nuvia" had a different type of ARM license, granting them more IP but at a higher royalty rate. Nuvia built their own cores based on the extra IP.
Then Qualcomm bought Nuvia - and they think they should keep the IP from the Nuvia license, but keep paying the lower royalty rate from the Qualcomm license.
ARM offer a dizzying array of licensing options. Tiny cores for cheap microcontrollers, high-end cores for flagship smartphones. Unmodifiable-but-fully-proven chip layouts, easily modifiable but expensive to work with verilog designs. Optional subsystems like GPUs where some chip vendors would rather bring their own. Sub-licensable soft cores for FPGAs. I've even heard of non-transferable licenses - such as discounts for startups, which only apply so long as they're a startup.
If Nuvia had a startup discount that wasn't transferable when they were acquired, and Qualcomm has a license with a different royalty rate but covering slightly different IP, I can see how a disagreement could arise.
[1] https://www.theregister.com/2022/08/31/arm_sues_qualcomm/
But it's totally common for corporations to make value-destroying acquisitions. Some research suggests 60%-90% of mergers actually reduce shareholder value. Look at the HP/Autonomy acquisition, for example - where the "due diligence" managed to overlook a $5 billion black hole in a $10 billion deal. And how often have we seen a big tech co acquire a startup only to shut it down?
Mergers only seem rational because once a mistake is set in stone, the CEO usually has to put a brave face on it and declare it a big success.
I could certainly believe during the acquisition process that the specifics of Nuvia's license were overlooked, or not fully understood by the person who read them.
Or maybe there is no such language in the contract and Arm is over-extending, but that sounds unlikely.
- Qualcomm has a "Technology license". Because ARM designs the entire chip under that license, ARM charges a premium royalty.
- Nuvia had an "Architectural licence" (the more basic licence). Nuvia then had to design the chip around that foundation architecture (i.e. Nuvia did more work). The architectural license has a lower royalty.
Qualcomm decided they were using Nuvia chips, and therefore should pay Nuvia's lower royalty rate.
ARM decided that Nuvia's chips were more or less ARM technology chips, or possibly that Nuvia's license couldn't be transferred, and therefore the higher royalty rate applied.
An ALA signed with ARM gives the right to design CPU cores that are conformant to the Arm Architecture specification. When the CPU cores that are designed thus are sold, a royalty must be paid to ARM.
The royalties negotiated by Nuvia were much higher than those negotiated by Qualcomm, presumably based on the fact that Qualcomm sells a huge number of CPU cores, while Nuvia was expected to sell few, if any.
When Qualcomm bought Nuvia, ARM requested that Qualcomm pay the royalties specified by the Nuvia ALA for any CPU cores derived in any way from work done at Nuvia. Qualcomm refused, claiming that they should pay the smaller royalties specified by the Qualcomm ALA.
Then ARM cancelled the Nuvia ALA, so they claim that any cores designed by Qualcomm that are derived from work done at Nuvia are unlicensed, meaning Qualcomm must stop any such design work, destroy all design data and obviously stop selling any products containing such CPU cores.
The trial date is in December and ARM has given advance notice that they will also cancel the Qualcomm ALA shortly after the trial. So this will have no effect for now; it is just a means to put more pressure on Qualcomm, so they might accept a settlement before the trial.
Qualcomm buying Nuvia should increase the revenue for ARM from the work done at Nuvia, because Qualcomm will sell far more CPU cores than Nuvia, so even with smaller royalties the revenue for ARM will be greater.
Therefore the reason ARM does not accept this deal is that, in parallel, their revenue from the ARM-designed cores licensed to Qualcomm would soon drop to zero. Qualcomm has announced that they will replace the ARM-designed cores in all their products, from smartphones and laptops to automotive CPUs.
However, some sources [1] say the "architectural license" is "higher license fee, fewer use constraints, greater commercial and technical interaction"
There are often two parts to the cost of these licenses - an upfront fee, and a per-chip royalty. So it could be both at the same time: Nuvia, who made few chips, might have negotiated a lower upfront fee and a higher per-chip royalty. Whereas Qualcomm, who make lots of chips, might have prioritised a lower per-chip royalty, even if the upfront fee was greater.
[1] https://www.anandtech.com/show/7112/the-arm-diaries-part-1-h...
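To make that upfront-fee versus per-chip-royalty trade-off concrete, here is a toy calculation. All figures are invented for illustration; none come from the actual ARM contracts:

```python
# Hypothetical license economics (all numbers invented): total cost is
# an upfront fee plus a per-chip royalty, so which deal is cheaper
# depends entirely on shipment volume.
def total_cost(upfront, royalty, units):
    return upfront + royalty * units

# Startup-style deal: low upfront, high per-chip royalty.
nuvia_style = lambda units: total_cost(1_000_000, 2.00, units)
# High-volume deal: high upfront, low per-chip royalty.
qcom_style = lambda units: total_cost(10_000_000, 0.30, units)

print(nuvia_style(100_000) < qcom_style(100_000))        # → True: low volume favors low upfront
print(nuvia_style(50_000_000) < qcom_style(50_000_000))  # → False: high volume favors low royalty
```

The crossover point is what makes the dispute plausible: terms that were rational for a startup shipping few chips become very expensive once a high-volume acquirer inherits them.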
Incorrect according to Qualcomm. They claim Snapdragon X Elite & other cores are from-scratch rebuilds, not using any of the design from Nuvia.
They did however use engineers who had designed Nuvia's cores, so there may be a notable resemblance in places. As the latest Tech Poutine put it: 'you can't delete your mind.'
Obviously it's hard to know for sure - it could even be an Anthony Levandowski type situation, where an ambitious employee passes off an unauthorised personal copy as their own work without Qualcomm realising.
But getting to the point where a supplier of critical infrastructure pulls a figurative knife on one of their biggest customers for no particularly obvious reason is just insane. ARM Ltd. absolutely loses here (Qualcomm does too, obviously), in pretty much any analysis. Their other licensees are watching carefully and thinking hard about future product directions.
If you're selling physical chips and a customer decides not to pay for their last shipment, you stop sending them chips. No need to get the courts involved; the customer can pay, negotiate, or do without.
But when you're selling IP and a customer decides not to pay? You can't stop them making chips using your IP, except by going through the courts. And when you do, people think you're "pulling a figurative knife on one of your biggest customers for no reason"
That might not be legally possible - or deemed to be anticompetitive. Cancelling an existing license if a firm has breached it would probably be less problematic.
By using its dominant position in smartphone chipsets, Qualcomm is working to establish a custom ARM architecture as the new standard for several industries, fragmenting the ARM ecosystem.
For decades, ARM has carefully avoided letting this happen, by allowing selected partners to "explore" evolutions of the IP in an industry, but with rules and methods to make sure they can't diverge too much from ARM's instruction set.
Qualcomm acquired Nuvia and is now executing the plan of using their restricted IP in an unrestricted fashion for several industries ("powering flagship smartphones, next-generation laptops, and digital cockpits, as well as Advanced Driver Assistance Systems, extended reality and infrastructure networking solutions").
ARM has designed architectures which achieve comparable performance to Nuvia's IP (Blackhawk, Cortex-X), but Qualcomm's assumption is that they don't need it and that they can apply Nuvia's IP on top of their existing architecture without the need of licensing any new ARM design.
It is not overlooked. The duty of any company to protect its IP and contracts during a dispute is largely, if not entirely, irrelevant on the internet, including but not limited to HN. People simply want whatever company they like to win and the one they hate to lose.
This has been shown repeatedly in Apple vs Qualcomm over modem and IP licensing, the monopoly trials, Apple vs US, etc.
And just want to say thank you. You are one of the very very few to the point I can count them with fingers, to actually dig into court case and not relying on whatever media decided to expose us to.
The IP of Nuvia was not supposed to be used in all the use-cases that Qualcomm intends to deploy it in (and moreover there is still the ongoing legal dispute that Qualcomm is actually not allowed to use it)
Afaik, Q hasn't diverged from the standard instruction set at all in the Oryon Snapdragons.
As soon as they violate the terms of their architectural license, which seemingly hasn't happened yet.
Also, the foundation of Qualcomm's "Oryon" is clearly Nuvia's "Phoenix" core, which is based on Arm’s v8.7-A ISA.
After Acquisition, Qualcomm formed a team to redesign Phoenix for use in consumer-products instead of servers, creating Oryon.
That's the issue they have. Qualcomm was/is confident it can resolve this IP issue of the technical QCT division via their licensing strong-arm QTL, forcing ARM into accepting Qualcomm's view.
However, they possibly overstepped a bit, as they also expect that they don't need to license newer CPU-designs from ARM because (like Apple) they built a custom design under their architecture license.
But in reality the core design of Oryon was in parts built under the license agreement of Nuvia, which has explicit limitations in transferability (only Nuvia as-is) and usage (only servers).
In court, Qualcomm doesn't even dispute that, they argue that this contract should not be enforced and hope that the court agrees.
Qualcomm is the supplier of high-performance ARM-based SoCs in the consumer segment, with the best-performing core design. ARM is not doing damage to Qualcomm here but to ARMv8/9's long-term survival.
I, for one, am greatly unhappy about it, because RISC-V is a disgusting design that happened to be in the right place at the right time, becoming yet another example of the industry pushing for an abysmally inferior choice due to circumstantial factors. I sincerely hope it fails in all possible ways (especially the RVV extension) and a completely redone, better design that is very close to ARMv8-A takes its place.
But, in the meantime, we have ARMv8/9-A, which is the best general-purpose ISA, even with the shortcomings of SVE/2 (where the AVX family, especially with the AVX-512VL extension, is just so much better).
As of now, ARM is largely in control of the evolution of ARM architecture, because even by those with an ALA (like Qualcomm), ARM's CPU-designs are the reference for the evolution in the respective industries. Straying too much from those designs turned out to not be economically feasible for most players since the move to 64bit, which is a beneficial development for ARM as they can drive a harmonized ecosystem in different industries.
Now, ARM gave Nuvia a very permissive license to cooperate on the creation of ARM-based architecture for a segment where ARM was very weak: server architecture, with the licensing contract explicitly limiting the resulting IP to use only for servers and only by Nuvia.
Regardless of the legal dispute, Qualcomm now plans to use this IP to create a design roadmap parallel to that of ARM, with a market position in consumer smartphone SoCs funding a potential hostile takeover of several other industries where ARM carefully works to establish and maintain a competitive landscape.
Qualcomm's plan is to achieve something similar to Apple, but with the plan to sell the resulting chipset.
So while ARM is building and maintaining an ecosystem of ARM as a vendor-agnostic architecture-option in several industries, Qualcomm is on a trajectory to build up a consolidated dominant position in all those industries (which may end up forcing ARM to actually follow Qualcomm in order to preserve the ecosystem, with Qualcomm having little vested interest to support an ecosystem outside of Qualcomm).
i'm not sure this is true. certainly "chip" IP has been a real legal quagmire since, forever.
but it was my understanding that you could neither patent nor copyright simply an "instruction set".
presumably what you get from ARM with an architecture license would be patent licenses and the trademark. if so, what patents might be relevant or would be a problem if you were to make an "ARMv8-ish compatible" ISA/Architecture with a boring name? i haven't seen much about ARM that's architecturally particularly unique or new, even if specific implementation details may be patent-able. you could always implement those differently to the same spec.
to further poke at the issue, if it's patents, then how does a RISC-V CPU or other ISA help you? simply because it's a different ISA, doesn't mean its implementation doesn't trample on some ARM patents either.
if it's something to do with the ISA itself, how does that affect emulators?
what's ARM's IP really consist of when you build your own non-ARM IP CPU from scratch? anyone have examples of show-stopper patents?
Way back in the day there were some MIPS patents that only covered a few instructions so people would build not-quite-MIPS clone CPUs without paying any royalties.
1. Little companies don't sue big companies. No need. Startups exist because they have something new and move faster.
2. Big companies don't sue little companies. Too little money, it would look anticompetitive to the gov't, and most startups fail, anyway.
3. Medium companies sue little companies when they start taking away prime customers.
ARM sued Picoturbo and they settled. Lexra chose to fight MIPS. Lexra and MIPS hurt each other. That gave ARM the opportunity to dominate the industry.
On an unrelated topic, readers looking for concise basic info on patenting that your attorney might not mention might enjoy: https://www.probell.com/patents
sheesh, patent https://patents.google.com/patent/US4814976A/en is a real "gem"
but its probably a good example: faulty patent (later invalidated) to do something obvious
MIPS sued a company that didn't even implement the odd instructions - it trapped them instead, allowing the possibility of emulation.
there's literally no case here
just to sue them into oblivion and squish them with superior cash resources. and then to get squished by ARM because they weren't paying attention.
it's like a dark fairy tale. i hate corporate lawyers.
For example the Snapdragon 8 Gen 1 uses 1 ARM Cortex-X2, 3 ARM Cortex-A710 and 4 ARM Cortex-A510, which are ARM designs. Their latest announced chip though, Snapdragon 8 Elite, uses 8 Oryon cores, which Qualcomm designed themselves (after acquiring Nuvia).
So is Qualcomm not still able to create chips like the former, and just prevented from creating chips like the latter? Or does "putting a chip together" (surely there is a bit more going into it) like the Snapdragon 8 Gen 1 still count as custom design?
The reason being that ARM gave Nuvia a license to design cores at a specific rate, then Qualcomm bought them to use those cores. ARM claims that the license to design cores does not have a transferable rate to it.
Obviously, Arm tries to prevent Qualcomm from using their own cores, because this time Arm would lose a major source of their revenue if Qualcomm stopped licensing cores.
When Arm gave architectural licenses to Qualcomm and Nuvia, they were not worried about competition, because Qualcomm could not design good cores, while Nuvia had no prospect of selling enough cores for this to matter.
The merging of Nuvia into Qualcomm has completely changed the possible effect of those architectural licenses, so Arm probably considers that granting them was a big mistake, and they now try to mend this by cancelling them, with the hope that they will convince the court that this is not illegal.
For any non-Arm employee or shareholder, it is preferable for Arm to lose, unless the reduction in revenue for Arm would be so great as to affect their ability to continue to design improved cores for other companies and for other applications, but that is unlikely.
Your very first line is one to begin with.
ARM also doesn’t seem to care if QC design their own cores. They just care that they renegotiate the royalty agreement. This is clear if you actually read their statements.
Therefore Arm cares a lot whether Qualcomm designs their own cores, because that would mean less revenue for Arm.
If Arm had not cared whether Qualcomm designs their own cores, they would have never sued Qualcomm.
The official reason why Arm has sued Qualcomm, is not for increasing the royalties, because that has no legal basis.
It is obvious that the lawsuit is just a blackmail instrument to force Qualcomm to pay higher royalties for the cores designed by them. The official object of the lawsuit, however, is to forbid Qualcomm to design their own cores: Arm claims that the Oryon cores used in the new Qualcomm chipsets for laptops, smartphones and automotive applications were designed in violation of the conditions of the architectural licenses granted by Arm to Qualcomm and Nuvia, so Arm requests that Qualcomm stop making any products with these Arm-compatible cores and destroy all their existing core designs.
Again, your comments are pure conjecture not based on anything factual. I might as well just start saying how QC wants to rip off ARM IP and it would be as factually relevant as your comments.
ARM is perfectly happy for Qualcomm to design their own cores, as long as they pay the agreed rate for each sector.
ARM is happy to compete on design, which is what the Cortex X5 is doing. And it has been shown to be just as competitive against Oryon.
The rest of your comments are like making up stories to back up whatever you think is the truth. And most of them have zero factual basis.
What do Android OEMs do? They can’t use Apple chips, or now Qualcomm chips. Switching to another architecture is a big deal.
Would this basically hand the Android market to Samsung and their Exynos chips? Or does another short term viable competitor exist?
https://chromium.googlesource.com/chromiumos/third_party/ker...
https://chromium.googlesource.com/chromiumos/third_party/ker...
2. Mediatek is available, mostly with ARM's latest IP. And extremely competitive. The only thing missing is Qualcomm's modem. It isn't that Mediatek's modems are bad; they are at least far better than whatever Intel modems Apple used or planned. The only problem is that Qualcomm's are so good that customers still prefer them for the relatively little price they are paying.
3. It is not like Android OEMs can't make their own SoCs. Especially considering the smartphone market can now be largely grouped as Apple, Samsung and Chinese vendors. Together they are 95%+ of market share.
The S23 line was an exception in using Snapdragon worldwide, but then the S24 line switched back to using Snapdragon in NA and Exynos everywhere else, except for the S24 Ultra which is still Snapdragon everywhere.
Yes it's a confusing mess, and it's arguably misleading when the predominantly NA-based tech reviewers and influencers get the usually superior QCOM variant, and promote it to a global audience who may get a completely different SOC when they buy the "same" device.
Still, the fact that Samsung can swap out the chip in their flagship product with virtually no change other than slightly different benchmark scores means that these chips are pretty much fungible. If either manufacturer runs into serious problems, the other one is ready to eat their market share for lunch.
I'm wondering because to me as a layman it sounds like it's 'only' a different language, so why is it not that easy to take already existing designs and modify them to 'speak' that language and that's it?
Or is an ISA more than just a different 'language'?
Or is hardware not really the biggest problem, but rather Software like compilers, kernels, etc.?
You might think that languages just have different words for the same things.
In reality the problems are where the same things don't exist. People don't view the world the same way and don't have equivalent words. In Turkish it's very important whether your aunt is on your mother's side or your father's side, so there are different words for each... but there are no words for "he" or "she", as they don't bother with gender in sentences.
So, for example, every conversation converted from Turkish to English loses an important bit of meaning about relationships, and the sex of a person has to be inferred from context, which is not easy to do automatically.
Similarly computer software....and there's a lot of it.
I mean, it's most likely overlaps and not actually the same set of people, but I find it ironic and funny.
So I personally think that groups of people have a common understanding and some commonly accepted attitude that make up their culture. The purpose of words is to reference those feelings. An outsider can understand to a degree because we are all human but they usually get the emphasis wrong and also tend to miss lots of implications.
Of course you as an entrant to a culture (e.g. a kid) are going to get educated over time about what it all means and you're going to be discouraged from expressing alternate cultural values because overall not enough people feel like that to have invented convenient ways of expressing it.
So language is going to affect you but as some idea becomes popular and needs expression people do invent new words. So you can affect it - if you can get enough people to pick up on your invention by adding a new idea to their mental model of life.
It tends to be more like going from C89 to Haskell. You're not just switching the keywords around, but also fundamental architectural concepts. There's still some parts you can recycle and some skills that transfer, but less than you'd like.
> Or is hardware not really the biggest problem, but rather Software like compilers, kernels, etc.?
That's the next problem. Kernels, device drivers, support hardware, a lot of low level stuff needs to be adapted, and even a company the size of Qualcomm doesn't necessarily do everything inhouse, there will be lots of external IPs involved and all those partners need to also be willing to migrate over to a different ISA.
Nevertheless, designing the ISA is the only easy part. Then you have to write high-quality compilers, assemblers, linkers, debuggers and various other tools, and also good documentation for the ISA and all the tools.
Developing such toolchains and convincing people to use them and educating them can take some years. Optimizing various libraries for which the performance is important for the new ISA can also take years.
These things are what the incumbent ISAs like Aarch64, RISC-V or POWER provide.
It has already entrenched itself in the industry.
There are still binaries that were compiled in the 80s running happily on an x86 system, because a chip conforming to the ISA guarantees that a machine instruction will run the same as it did in the 80s.
As for "only" a different language: absolutely, lots of software does this. As part of Apple's move from x86 to ARM, they implemented software called Rosetta, which translates x86 instructions into ARM (a form of emulation). The only problem is that there's a performance penalty for the emulation, which can make a program slower, choppier, etc.
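A toy dispatch loop shows where that penalty comes from: each guest instruction costs a decode and a branch on the host before any real work happens. The two-opcode "ISA" below is invented purely for illustration:

```python
# Minimal interpreter sketch: every guest instruction is decoded and
# dispatched at runtime, which is the core overhead of emulation
# (real translators like Rosetta amortize it by caching translated code).
def run(program, regs):
    pc = 0
    while pc < len(program):
        op, a, b = program[pc]   # decode: a cost paid per instruction
        if op == "mov":          # dispatch: another per-instruction cost
            regs[a] = b
        elif op == "add":
            regs[a] += regs[b]
        pc += 1
    return regs

result = run([("mov", "r0", 2), ("mov", "r1", 3), ("add", "r0", "r1")],
             {"r0": 0, "r1": 0})
print(result)  # → {'r0': 5, 'r1': 3}
```

Ahead-of-time binary translation (Rosetta's main mode) pays the decode/dispatch cost once per program instead of once per executed instruction, which is why it gets much closer to native speed than a plain interpreter.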