They have to be doing this deliberately, as it's hard to explain otherwise.
That would require pretending Ventana Veyron V2, Tenstorrent Ascalon/Alastor, SiFive P870, Akeana 5000-series and others do not exist or do not yet have any customers.
Pretending, because they actually exist, have customers, and are thus bound to show up in actual products anytime now.
I don’t think anyone said they don’t exist.
Like 30x slower than a top-of-the-line Apple M-series CPU. Maybe there is a high-performing RISC-V chip out there, but I haven't yet run into one.
RISC-V benchmarks: https://browser.geekbench.com/search?q=RISC-V. Compare to an Apple M4 benchmark: https://browser.geekbench.com/v6/cpu/8224953
That said, RISC-V is good for embedded applications where raw performance isn't a factor. I don't think other markets will be accessible to RISC-V chips until their performance massively improves.
For a generic logic workload like Fibo(24), the performance is essentially the same (quoted from the page above):
Average Runtime = 0.020015 (Pico 2, Arm)
Average Runtime = 0.019015 (Pico 2, RISC-V)
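For reference, naive recursive Fibonacci is just a branch- and call-heavy integer workload. The quoted page isn't reproduced here, so the following is only a minimal sketch in that spirit (the timing harness and RUNS count are assumptions, not the actual test code):

    #include <stdio.h>
    #include <time.h>

    /* Naive recursive Fibonacci: integer work with no SIMD or FPU
       involvement, so it mostly exercises the scalar pipeline, which is
       why the Arm and RISC-V cores on the RP2350 land so close. */
    static unsigned fib(unsigned n) {
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    int main(void) {
        enum { RUNS = 10 };
        double total = 0.0;
        for (int i = 0; i < RUNS; i++) {
            clock_t t0 = clock();
            volatile unsigned r = fib(24); /* volatile: keep the call from being optimized out */
            (void)r;
            total += (double)(clock() - t0) / CLOCKS_PER_SEC;
        }
        printf("Average Runtime = %f\n", total / RUNS);
        return 0;
    }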
Note that neither core on the RP2350 comes with advanced features like SIMD.

RISC-V hedged its bets by creating different design specs for smaller edge applications and larger multicore configurations. Right now ARM is going through a rift where the last vestiges of ARMv6/7 support are holding out for the few remaining 32-bit customers, while all the progress happens on the bloating ARMv9 spec that benefits nobody but Apple and Nvidia. For all of ARM's success in the embedded world, they seem to be at an impasse between low-power and high-power solutions. RISC-V could do both better at the same time.
RISC-V is simply an ISA, not a core. The ISA affects some of the core architecture, but the rest is implementor-specific. High-end cores will take time to reach market. Companies with big guns like Qualcomm could most likely pump one out if they wanted to, and will most likely do so in the future, since they are pumping over $1 billion into the effort.
And it's not been proven that RISC-V is a good match for the second group (yet).
Remember, it's sometimes very non-obvious which quirks of an ISA might be difficult until you actually try to implement it. One of the reasons ARM did a pretty much "clean sheet" rewrite in ARMv8 is that things like the condition codes turned out to be difficult to manage in wide superscalar designs with speculative execution, which is exactly the sort of thing required to meet "laptop-tier" performance requirements.
It may be they've avoided all those pitfalls, but we don't really know until it's been done.
Ventana Veyron looks interesting. Tenstorrent's upcoming 8-wide design should perform well.
Qualcomm pitched making a bunch of changes to RISC-V that would move it closer to ARM64 and make porting easier, so I think it's an understatement to say that they are considering the idea. If ISA doesn't matter, why pay tons of money for the ISA?
RISC-V was created because UC Berkeley's vector processor needed a scalar ISA, and the incumbents were not suitable. Then, it uncovered pre-existing interest in open ISAs, as companies started showing up with the desire for a frozen spec.
Legend is that MIPS quoted UC Berkeley some silly amount. We can thank them for RISC-V. Ironically, they ended up embracing RISC-V as well.
I think RISC-V chips in the wild do not do things like pipelining, out-of-order, register renaming, multiple int/float/logic units, speculation, branch predictors, smart caching.
I think all existing RISC-V chips in the wild right now are just simplistic in-order processors.
Back in 2016, BOOMv1 (Berkeley Out-of-Order Machine) had pipelining, register renaming, 3-wide dispatch, a branch predictor, caches, etc. A quick Google search indicates it was started in 2011 and had taped out 11 times by 2016 (with actual production apparently done on IBM 45nm).
They are on BOOMv3 now.
Exactly the same thing happened with ARM. It started in embedded, then phones, and finally laptops and servers. ARM was never slow, they just hadn't worked up to highly complex high performance designs yet.
I disagree. For example, the first PowerPC was pretty fast and went into flagship products immediately. Itanium also went directly for the high-end market (it failed for unrelated reasons). RISC-V would be much better off if some beastly chips, like the ones Rivos is working on, had been released early on.
With the ISA developed in the open, the base specs microcontrollers can target would naturally tend to be ratified first, and thus microcontrollers would show up first. RVA22+V were ratified in November 2021.
With the ISA developed inside working groups involving several parties, some slowness is unavoidable, as they all need to agree on how to move forward. Hence the years-long gap between ratification of the privileged and unprivileged specs (2019) and RVA22+V.
RVA23 has just been ratified. This spec is on par with x86-64 and ARMv9, feature-wise. Yet hardware using it will of course in turn take years to appear as well.
Most cores are pipelined; it is RISC after all.
There are quite a few superscalar cores, even a c906 is superscalar.
The c910/c920 is an OoO, renaming core, with speculation.
What they're lacking is area and power. A ROB with six entries is not going to compete with a ROB of six hundred entries.
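For anyone unfamiliar with the term, a reorder buffer is essentially a ring of in-flight instructions: allocated in program order, completed out of order, retired in order. A toy sketch, not any real core's design:

    #include <stdbool.h>
    #include <stddef.h>

    #define ROB_ENTRIES 6 /* tiny on purpose; big cores have hundreds */

    typedef struct {
        bool done[ROB_ENTRIES]; /* has this slot's result arrived? */
        size_t head, tail, count;
    } rob_t; /* zero-initialize before use */

    /* Dispatch: claim the next slot in program order, or stall if full.
       The entry count bounds how many instructions can be in flight,
       which is why 6 entries cannot hide the latency 600 can. */
    bool rob_dispatch(rob_t *r, size_t *slot) {
        if (r->count == ROB_ENTRIES) return false;
        *slot = r->tail;
        r->done[r->tail] = false;
        r->tail = (r->tail + 1) % ROB_ENTRIES;
        r->count++;
        return true;
    }

    /* Completion: results may arrive in any order. */
    void rob_complete(rob_t *r, size_t slot) { r->done[slot] = true; }

    /* Retire: only from the head, preserving program order. */
    bool rob_retire(rob_t *r) {
        if (r->count == 0 || !r->done[r->head]) return false;
        r->head = (r->head + 1) % ROB_ENTRIES;
        r->count--;
        return true;
    }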
Not really comparable.
Also we should not pretend that ARM is just going to sit there waiting for RISC-V to catch up.
Embedded is moving to RISC-V where they have low performance needs.
One example is the Espressif line of CPUs - which have shipped over 1B units. They have moved most of their offerings to RISC-V over the last few years and they are very well supported by dev tools: https://www.espressif.com/en/products/socs
It was then made public that LG bought a license right away.
It certainly is easy to casually spread fear and doubt.
But it is really far-fetched to think that the people at Tenstorrent, who have successfully delivered very high performance microarchitectures in other companies before, are lying about Ascalon, and that LG is helping them do that.
It would be even more far-fetched to claim that Ventana (Veyron V2), SiFive (P870), and Akeana (5000-series), all of them available as high-performance IP, are lying about performance.
But I say this as a software guy who doesn't actually know CPU design.
It has taken them that long to make Arm a thing on Windows, and that's building on the momentum of people porting their software to Arm for the Mac.
Windows on RISC-V will take an eternity to become feasible.
Windows branding is now forever tied to x86/x64 Win32 legacy compatibility, while WSL has captured back a lot of webdevs from the Mac. Google continues to push Chrome, but Electron continues to grow side by side. Lots of stuff is happening with AI on Linux too, with Windows and Mac both remaining consumer deployment targets. Phone CPUs are fast enough to run some games on WINE+Bochs.
At this point, would it not make sense for MS to make its own ChromeOS and bolt on an "LSW"?
I think you're overestimating what percentage of users use WSL. They're an insignificant fraction of the user base.
And with games, I think you're also overestimating how good translation layers like Proton are, and how rapidly Microsoft advances DirectX as well.
Will it now?
Microsoft was already deeply involved in 2021, as per the technical talks at that year's RISC-V Foundation summit. Ztso was pushed by them.
Windows on Arm goes back to 2011. They're only just now getting native Arm ports of several major packages. That's ~13 years for a well-established architecture that's used much more universally than RISC-V. They don't even have Arm ports of lots of software that has Arm ports on macOS.
RISC-V will take an aeon longer to get a respectable amount of the windows ecosystem ported over.
Absolutely agree.
The key development Microsoft has demonstrated recently is the ability to run x86 Windows software in non-x86 Windows systems.
Now that this is in place, and it will only get better, there is no longer a chicken-and-egg situation.
Instead, what we have is a clearly defined path to migrate away from x86.
It is evident to anybody paying attention that Microsoft has RISC-V support well underway.
But even if they had to start from scratch, it would be much easier, thanks to ARM having paved the way.
Without the ISVs, it’s a flop for consumers.
MS has had an abysmal time getting them to join in on ARM, only starting to have a little success now. Saying “Ha ha, just kidding, it’s RISC-V now” would be a disaster. That’s the kind of rug pull that helped kill Windows Mobile.
Emulators aren't good enough. They're a stopgap. Unless the new chip is so much better that it's faster under emulation than the old one was natively, no one will accept it for long. Apple's been there, but that's not where MS sits today.
And if your emulator is too good, what stops ISVs from saying “you did it for us, we don’t have to care”? So once again they don’t have to do it at all and you have no native software.
MS can’t drop their ARM push unless they want to drop all non-x86 initiatives for a long time.
x86 emulation enables adoption.
Adoption means having a user base.
Having a user base means developers will consider making the platform a target.
>Saying “Ha ha, just kidding, it’s RISC-V now” would be a disaster.
Would it now? If anything, offering RISC-V support as well would further reinforce the idea that Windows is ISA-independent, and not tied to x86 anymore.
If that translation is even needed at all. The point is that the hard part is not a "-riscv" compilation option.
Anymore? It’s been independent since the 90s. It’s only ISVs that have been an issue.
And a rug pull is a fantastic way to scare all the ISVs far far away.
But, as an example, Windows Phone 8 and later were based on the NT kernel. You already mentioned the 360.
The competition isn't sitting still either; QC already hit this when Intel stole their thunder with Lunar Lake. Those chips are efficient enough that the remaining efficiency gap is far overshadowed by their compatibility story.
Ecosystem support will always go to the incumbent and this would place RISC-V third behind x86 and ARM. macOS did this right by saying there’s only one true way forward. It forces adoption.
For native apps, you need users. For users, you need emulation.
It cannot be overstated how important successful x86 emulation is for the migration to anything else to be feasible.
The incumbent is the only two companies, Intel and AMD, that can make x86 hardware.
The alternative is the rest of the industry.
Thus having a migration path should be plenty on its own.
Intel and AMD can both join by making RISC-V or ARM hardware themselves. My take is that they will too, eventually, come around. Or they'll just disappear from relevance.
You have to think in network effects. You mention "the rest of the industry" yet ignore that it's mostly Arm, which would make Arm the incumbent.
x86 is king for Windows. But ARM has massive inroads in mobile, now desktop with macOS, and servers with Amazon/Nvidia etc.
There's a lot better incentive for software developers to support ARM than RISC-V. It isn't one or the other, but it is a question of resources.
Intel and AMD seem fine turning x86 around when threatened, as can be seen with Lunar Lake and Strix Point. Both have been good enough to steal QC's thunder. You don't think ARM manufacturers will do the same to RISC-V?
TBH most of your arguments for RISC-V adoption seem to start from the position that it’s inevitable AND that competing platforms won’t also improve.
Why do you think they’d take RISC-V any more seriously than their previous attempts at ARM?
There are two fallacies to overcome here.
This time last year they were all over the RISC-V mailing lists, trying to convince everyone to drop the "C" extension from RVA23 because (basically confirmed by their employees) it was not easy to retrofit mildly variable-length RISC-V instructions (2 bytes and 4 bytes) onto the AArch64 core they acquired from Nuvia.
At the same time, Qualcomm proposed a new RISC-V extension that was pretty much ARMv8-lite.
The proposed extension was actually not bad, and could very reasonably be adopted.
Dropping "C" overnight and thus making all existing Linux software incompatible is completely out of the question. RISC-V will eventually need a deprecation policy and procedure -- and the "C" extension could potentially be replaced by something else -- but you wouldn't find anyone who thinks the deprecated-but-supported period should be less than 10 years.
So they'd have to support both "C" and its replacement anyway.
Qualcomm tried to make a case that decoding two instruction widths is too hard to do in a very wide (e.g. 8) instruction decoder. Everyone else working on designs in that space ... SiFive, Rivos, Ventana, Tenstorrent ... said "nah, it didn't cause us any problems". Qualcomm jumped on a "we're listening, tell us more" from Rivos as being support for dropping "C" .. and were very firmly corrected on that.
For general purpose Linux, I agree. But if someone makes Android devices and maintains that for RISC-V… that's basically a closed, malleable ecosystem where you can just say "f it, set this compiler option everywhere".
But also, yes, another commenter pointed out C brings some power savings, which you'd presumably want on your Android device…
But what they wanted to do was strip the "C" extension out of the RVA23 profile, which is (will be) used for Linux too, as a compatible successor to RVA22 and RVA20, both of which include the "C" extension.
If Qualcomm wants to sponsor a different, new, profile series ... RVQ23, say ... for Android then I don't have a problem with that. Or they can just go ahead and do it themselves, without RISC-V International involvement.
Android was never really Linux though.
Cursory research will show that this was a technicality with no bearing on Google's strong commitment to RISC-V Android support.
Despite your personal feelings about their motivation, these sites were factually correct in relaying what happened to the code, and they went out of their way to quote exactly what Google said, respecting Google's claim that they remain committed, with 0 qualms.
I find it extremely discomfiting that you are so focused on how the news makes you feel that you're casting aspersions on the people you heard the news from, and ignoring what I'm saying on a completely different matter, because you're keyword matching.
I'm even more discomfited that you're being this obstinate about the need for us all to respect Google's completely off-topic statement of support[2] over the fact that they removed all the code for it.
[1] "Since these patches remove RISC-V kernel support, RISC-V kernel build support, and RISC-V emulator support, any companies looking to compile a RISC-V build of Android right now would need to create and maintain their own fork of Linux with the requisite ACK and RISC-V patches."
[2] "Android will continue to support RISC-V. Due to the rapid rate of iteration, we are not ready to provide a single supported image for all vendors. This particular series of patches removes RISC-V support from the Android Generic Kernel Image (GKI)."
And the mailing list is pretty active too: https://lists.riscv.org/g/sig-android/topics?sidebar=true
I especially don't understand offering code that predates the removal from tree and hasn't been touched since. Or, a mailing list, where we click on the second link and see a Google employee saying on October 10th "there isn't an Android riscv64 ABI yet either, so it would be hard to have [verify Android runs properly on RISC-V] before an ABI :-)"
That's straight from the horse's mouth. There's no ABI for RISC-V. Unless you've discovered something truly novel that you left out, you're not compiling C that'll run on RISC-V if it makes any system calls.
I assume there's some psychology thing going on where my 110% correct claim that it doesn't run on RISC-V today gets transmuted into "lol risc-v doesn't matter and Android has 0 plans".
I thoroughly believe Android will fully support RISC-V sooner rather than later.
It's certainly more than just disabling a build type; it's actually removing a decent bit of configuration options and even TODO comments. Then again, it's not actually removing anything particularly significant, and one patch even has a comment saying "BTW, this has nothing to do with kernel build, but only related to CC rules. Do we still want to delete this?". Presumably easy to revert later, and it might even just be a revert itself.
It's not that hard to design a wide decoder that can decode mixed 2-byte and 4-byte instructions from a buffer of 32 or 64 bytes in a clock cycle. I've come up with the basic schema for it and written about it here and on Reddit a number of times. Yeah, it's a little harder than for pure fixed-width Arm64, but it is massively massively easier than for amd64.
Not that anyone is going that wide at the moment. SiFive's P870 fetched 36 bytes/cycle from L1 icache, but decodes a maximum of 6 instructions from it. Ventana's Veyron v2 decodes 16 bytes per clock cycle into 4-8 instructions (average about 6 on random code).
For those who haven't read the details of the RISC-V ISA: the first two bits of every instruction tell the decoder whether it's a 16-bit or a 32-bit instruction. It's always in that same fixed place, there's no need to look at any other bit in the instruction. Decoding the length of a x86-64 instruction is much more complicated.
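As a rough illustration of how a wide decoder can exploit that rule, here is the general flavor in C rather than gates. This is a guess at the shape of such a scheme, not the specific schema mentioned above, and it assumes a 32-byte, instruction-aligned fetch window:

    #include <stdbool.h>
    #include <stdint.h>

    #define PARCELS 16 /* 32-byte window = 16 halfword parcels */

    /* Step 1: every parcel speculatively computes a length from its own
       low two bits (0b11 means 32-bit, anything else compressed).
       Step 2: mark which parcels actually start an instruction. The loop
       below is serial, but in hardware this dependency is a log-depth
       prefix circuit, which is why mixed 2/4-byte decode stays cheap. */
    void mark_starts(const uint16_t parcel[PARCELS], bool start[PARCELS]) {
        int len[PARCELS]; /* length in parcels: 1 or 2 */
        for (int i = 0; i < PARCELS; i++)
            len[i] = ((parcel[i] & 0x3) == 0x3) ? 2 : 1;

        start[0] = true; /* assume the window begins on an instruction */
        for (int i = 1; i < PARCELS; i++)
            start[i] = (start[i - 1] && len[i - 1] == 1) ||
                       (i >= 2 && start[i - 2] && len[i - 2] == 2);
    }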
Note that ARMv7 uses a similar scheme with two instruction lengths, but it uses the first 4 bits of each 2-byte parcel to determine the instruction length. It's quite complex, but the end result is that 7/8 of the 16-bit encoding space (56K encodings) is available for 2-byte instructions and 1/8 of the 32-bit space (512 million encodings) for 4-byte instructions.
The IBM 360 in 1964, through today's Z-System, also uses a 2-bit length scheme: 00 means a 2-byte instruction (16K encodings available), 01 or 10 mean 4 bytes (2 billion), and 11 means 6 bytes (64 tera).
To increase the number of 16-bit instructions. Of the four possible combinations of these two bits, one indicates a 32-bit or longer instruction, while the other three are used for 16-bit instructions.
> Do they plan to support other instruction lengths in the future?
They do. Of the eight possible combinations for the next three bits after these two, one of them indicates that the instruction is longer than 32 bits. But processors which do not know any instruction longer than 32 bits do not need to care about that; these longer instructions can be naturally treated as if they were an unknown 32-bit instruction.
https://lists.riscv.org/g/tech-profiles/attachment/332/0/cod...
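Sketched as code, the full length rule reads something like this; it's a plain reading of the base spec's encoding table, illustrative only, not production decoder logic:

    #include <stddef.h>
    #include <stdint.h>

    /* Length from the first 16-bit parcel alone:
       bits [1:0] != 11               -> 16-bit (compressed)
       bits [1:0] == 11, [4:2] != 111 -> 32-bit
       bits [4:2] == 111              -> longer than 32 bits.
       A core with nothing longer than 32 bits can treat the last case
       as an unknown 32-bit instruction, as noted above. */
    size_t rv_insn_len(uint16_t parcel) {
        if ((parcel & 0x03) != 0x03) return 2;
        if ((parcel & 0x1C) != 0x1C) return 4;
        return 0; /* reserved for the 48-bit+ encodings */
    }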
If anything, ARM is the plan B that they'll likely end up abandoning.
I think a lot of RISC-V advocates are perhaps a little overeager in their perception of the landscape.
I know of no boards with application processors that do not implement SBI. Furthermore, everybody seems to be using the OpenSBI implementation.
ARM and RISC-V are not the same.
So what happens to the Raspberry Pi?
Edit: OK, following the discussion now. Nothing in the short term, potentially longer term.
That is what I get for posting tired.
So nothing will happen to the Pi (Arm also has a minority stake in Raspberry Pi).
ARM really shouldn't pursue an aggressive posture with lines outside iOS or Win11 ecosystems. The leverage won't hold a position already fractured off a legacy market. =3
The Windows RISC-V port is going to take even longer; I doubt Microsoft has that working at anything beyond the research stage, if that.
Getting the RISC-V ecosystem up to par with ARM is going to take years.
If you want to spin this in RISC-V's favor, then yes, forcing a company like Qualcomm to switch would speed things up, but it might also give them a bit of a stranglehold on the platform, in the sense that they'd be the dominant player, with all of their own customisations.
If you are running in a POSIX environment, then porting a build is measured in days with a working bootstrap compiler. RISC-V already has the full GCC and OS available for deployment.
We also purchase several vendors' ARM products for deployment. Note, there was a time in history when purchasing even a few million in chips would open custom silicon options.
Given how glitched/proprietary/nondeterministic ARM ops are outside the core compatibility set, it is amazing it was as popular as the current market demonstrates.
Engineers solve problems, and ARM corporation has made themselves a problem. =3
Depends what you mean by performance (most vendors' ARM-accelerated features are never used, for compatibility reasons), as upping the core count with a simpler architecture is a fair trade on wafer space.
i.e. if ARM is using anticompetitive tactics to extort more revenue, that budget is 100% fungible with extra resources. Note, silicon is usually much cheaper than IP licenses.
One can politely ask people to explain things without being rude. Have a wonderful day =3
i.e. create a single open-chip solution for mid-tier mobile communication platforms.
They won't do this due to their cellular chip line interests. However, even if it just ran a bare bones Linux OS... it would open up entire markets. =3
Even with the recent silicon defects... People will tolerate the garbage as they want the NVIDIA+Intel performance.
Architecturally speaking, there were better options available... just never the equivalent price over performance of consumer grade hardware. =)
2. I am fairly sure a competitive RISC-V CPU is not days or weeks but years away.
And chasing a moving target fueled by the largest technology companies on the planet.
1. The RISC-V design standard fragmentation issue has been addressed.
2. A reasonable mobile class level SoC will be available for integration after any large production run of the chips.
If ARM forces #2 out of silliness, then it also accelerates #1 in the market.
In general, there is plenty of use-cases even if a chip is not cutting edge. =3
This is why they are subsidized for tens of billions of dollars by countries all over the world.
My point is that the commercial lawsuits between Qualcomm and ARM are just one part of a larger geopolitical issue. x86 lost the mobile market 20 years ago, and the only reason Intel is surviving is national security; they could have gone bankrupt had they not been propped up by a pre-emptive Defense Production Act. Consumers benefit because now they have the choice between more ARM software and x86 products, but I think that is just a short-term benefit. Eventually an architecture cuts off support for old software, as x86 did with 32-bit; with X86S, Intel is proposing to support only 64-bit. So in the long term, it's better to have options. WINE was developed out of fear of a monoculture, in economic terms a repeat of the Irish Potato Famine (https://gitlab.winehq.org/wine/wine/-/wikis/Importance-of-Wi...). In other words, just because Intel might not want to sell 32-bit chips anymore doesn't mean others might not want or need some application that only exists on one platform (and all the engineers retired, with lost or unported code).
There's an AI bubble: https://www.marketwatch.com/story/the-ai-bubble-is-looking-w... When a number of companies get investments and start to produce chips that add little extra value, slightly faster or lower-power chips in 10 different architectures that all just run Windows 11, then there is less justification to keep investing in companies that do not produce interesting new hardware. The end result is that they are being shaped by Windows 11 rather than by any unique feature. Automotive efficiency for in-car apps, sure, but laptops running Oryon that are hard to boot Linux on aren't any more interesting than an x86S processor that can only boot Linux 6.8, etc.