Posted by bhouston 10/23/2024

RISC-V is currently slow compared to modern CPUs (benhouston3d.com)
110 points | 120 comments | page 2
PreInternet01 10/23/2024|
Nah, 'RISC-V is slow' is not exactly an uncommon sentiment: see, e.g., https://news.ycombinator.com/item?id=41920766

Not surprising, since the few physical RISC-V implementations that are available under-perform a decade-old Raspi by a significant margin, and that platform is not a rocket-ship to begin with.

dagmx 10/23/2024||
You linked to the author's own comment fwiw
PreInternet01 10/23/2024||
Oops. Well, since recursive acronyms are widely-accepted in our industry as well, I'll just leave that one up.
tredre3 10/23/2024||
You are out of date if you think there are no currently available commercial RISC-V cores that can outperform the decade-old Raspberry Pi 1 B+. Frankly, a 30-year-old Pentium 2 could outperform the first Raspberry Pi. It's a very low bar, and RISC-V crossed it many years ago.
rwmj 10/23/2024||
This is missing the two main server vendors who are taping out at the moment, Rivos and Ventana. Rivos at least are targeting the highest end performance.
classichasclass 10/23/2024||
Yes, but "taping out" is still a ways from "actually exists and you can bench it."
rwmj 10/23/2024||
The claim in the article is that RISC-V is somehow inherently unable to compete with Arm because of all sorts of missing architectural features. Yet there are server vendors who have those architectural features already and will make chips available fairly soon. The claim in the article is just wrong.
ta988 10/23/2024||
are there benchmarks?
rwmj 10/23/2024||
If you sign an NDA with them, I guess. Eventually you'll be able to buy the chips & benchmark them yourself.
kergonath 10/23/2024||
So, eventually we’ll be able to say that these chips are actually available. But right now, they are not proof of anything.
amelius 10/23/2024||
> RISC-V implementations in the wild lack advanced features that modern CPUs rely on for speed, including sophisticated pipelining mechanisms, out-of-order execution capabilities, advanced branch prediction, and multi-tiered cache hierarchies. Most commercial RISC-V chips remain in-order processors, meaning they execute instructions sequentially rather than optimizing their order for performance. This architectural simplicity creates a fundamental performance ceiling that's difficult to overcome without significant architectural changes.

This is a bit surprising given that all these techniques have been in computer architecture textbooks since at least the 90s.

undersuit 10/23/2024||
You don't need these tricks; you can always increase clocks and speed up memory interfaces. It was a lot harder for Intel to widen the memory interface of the P5 or increase the clocks sufficiently, so they made it superscalar.

RISC-V gets to take advantage of being produced in 2024 and absorbing all the clock speed and transistor advantages we get for free today because of 6 decades of transistor production.

amelius 10/23/2024||
But how do you explain the 25x performance gap then?
undersuit 10/23/2024|||
RISC-V hasn't maxed out. A new part hits a 2.4 GHz clock speed at 7 nm (the dev board is actually 1.4 GHz), while my old Ryzen 7 5800X at 7 nm gets 4.5 GHz. https://www.theregister.com/2024/04/09/sifive_riscv_hifive/
amelius 10/23/2024||
But increasing the clock speed might give you a 5x speed improvement at most.
undersuit 10/24/2024||
That's not the only thing you can do.
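
As a rough illustration of the arithmetic in this exchange (a back-of-the-envelope sketch only, using the article's claimed ~25x gap and the 1.4 GHz vs 4.5 GHz clocks quoted above; none of this is a measurement):

```c
/* Split the claimed ~25x single-thread gap into a clock component and an
 * IPC component, using only numbers quoted in this thread. */
#include <stdio.h>

int main(void) {
    double total_gap = 25.0;            /* gap claimed in the article      */
    double clock_gap = 4.5 / 1.4;       /* Ryzen 5800X vs HiFive dev board */
    double ipc_gap   = total_gap / clock_gap;

    /* Clock alone explains ~3.2x; the remaining ~7.8x would have to come
     * from instructions per clock: wider issue, out-of-order execution,
     * branch prediction, bigger caches. */
    printf("clock: ~%.1fx, remaining (IPC): ~%.1fx\n", clock_gap, ipc_gap);
    return 0;
}
```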
phkahler 10/23/2024|||
It costs a lot of money to develop an advanced CPU and deploy it on an advanced node. You need a market for the chips to justify that, so it's a bit of a chicken-or-egg problem. The gap is slowly closing and I'm looking forward to a RISC-V Linux laptop in the next few years.
librasteve 10/23/2024||
I would guess that over 50% of the design investment in modern OOO CPUs is in branch prediction, cache tuning and so on … with multiple speed/cost points available to fine-tune and balance prefetch, decode, instruction unit, blah blah to max out benchmark scores (the bible according to Hennessy et al). I would be more interested to see how fast you can make the literally 1000s of “snitch” cores go with the right software.
jauntywundrkind 10/23/2024||
The timing of this post coming mere hours after industry giant Microchip introduces a monster of a chip is relishable:

> The integrated 240Gb/s [Ed: 30GB/s] TSN-enabled Ethernet switch is far from the chips' only feature: the PIC64HX has no fewer than eight 64-bit SiFive Intelligence X280 RISC-V cores,

https://www.hackster.io/news/microchip-unveils-the-high-perf...

Actual speed of those X280's is TBD, but this seems like a huge bump in what's on the risc-v market.

Most of the chips we've been looking at are based on the open-sourced C906, a not particularly fancy early core from 2021. It wasn't even the highest offering in that release! https://riscv.org/news/2021/10/alibaba-open-sources-four-ris...

Google talked about using the X280 two years ago, and what they were able to get from its vector unit: https://www.sifive.com/blog/sifive-intelligence-x280-as-ai-c...

camel-cdr 10/23/2024|
Keep in mind that the 8 cores run at 1 GHz and have simple scalar execution; from the product page:

- 5.75 CoreMarks/MHz

- 3.25 DMIPS/MHz

- 4.6 SpecINT2k6/GHz

This SoC is mostly interesting for its 512-bit-wide vector extension support (probably a single 512-bit vector issue).

If the price is right, this might be usable as a nice little low-power number cruncher.

The lockstep mode is also interesting for industrial uses.

While possible, it's certainly nothing you'd want to run a desktop on.
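
For rough scale, a back-of-the-envelope estimate from the per-MHz figures quoted above (my own arithmetic; the aggregate line assumes perfect eight-core scaling, which real workloads won't see):

```c
/* Rough throughput estimate from the quoted X280 figures:
 * 5.75 CoreMarks/MHz and 4.6 SpecINT2006/GHz per core, at 1 GHz. */
#include <stdio.h>

int main(void) {
    const double clock_mhz        = 1000.0; /* 1 GHz per the product page */
    const double coremark_per_mhz = 5.75;
    const double specint_per_ghz  = 4.6;
    const int    cores            = 8;

    double coremark_core = coremark_per_mhz * clock_mhz;           /* ~5750 */
    double specint_core  = specint_per_ghz * (clock_mhz / 1000.0); /* ~4.6  */

    printf("per core: ~%.0f CoreMarks, ~%.1f SpecINT2006\n",
           coremark_core, specint_core);
    printf("8 cores (ideal scaling): ~%.0f CoreMarks\n",
           coremark_core * cores);
    return 0;
}
```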

dagmx 10/23/2024||
I very much agree with you, and I think you'll hit the hornets' nest on this one unfortunately.

I’m all for more architectures but RISC-V has an absurd fandom behind it now. It feels like the fandom behind Linux on the home desktop or Vulkan (which makes sense given the open nature) in the way that they’re trying to manifest its success as reality by just saying that it’ll inevitably be used, while ignoring the hurdles in the way.

That’s not to say those aren’t successes when used, but I often feel the comments that put them on a pedestal don’t acknowledge the immense delta between today and their imagined future, and have very little interest when it’s pointed out.

For all three it comes down to Software Compatibility and experience of use. The proponents seem to have a “the underlying tech is built and people will flock to it when they open their eyes” attitude, but the first part of fixing the software compatibility gulf is acknowledging it exists and acting on it. FOSS isn’t alone here: Apple makes the same mistake with desktop gaming and Microsoft with almost any physical product they release that’s not running Windows.

I really do hope that more open platforms and technologies happen, but I feel like the people who unabashedly push them without acknowledging how it needs to happen are doing them a disservice.

giantrobot 10/23/2024||
> I’m all for more architectures but RISC-V has an absurd fandom behind it now.

I don't understand what the fandom thinks they're going to get out of RISC-V "winning". You're not going to be able to download a new CPU like you can a new Linux kernel. An open source CPU core is useless without a factory to manufacture it.

There's not going to be a GNU equivalent to semiconductor manufacturing. The baseline cost to build a factory is billions of dollars. You also can't just slap any design on any manufacturing node or chemistry. There's a lot of work to get a chip design working on a particular node.

A CPU is a very small part of a functional computing device. It's magical thinking to assume that just because a device was built on an open source CPU core, the overall system will somehow be more open.

Most of the stuff people bitch about being binary blobs will remain binary blobs even with a RISC-V CPU core. Anything with a radio will remain a black box for regulatory reasons, even if the baseband core is a RISC-V chip. GPUs will remain black boxes only accessible via their drivers' interface. Peripheral controllers covered by patent pools or branding licenses won't cease to be covered even if the controllers are RISC-V.

dagmx 10/23/2024|||
The win is ideological rather than pragmatic, imho. Which is fair, but imho one should be honest with themselves if that’s where they’re coming from.

If I can use that premise, then all the incongruity makes much more sense.

calf 10/24/2024||
Serious, non-rhetorical question: are you suggesting that Turing Award recipient Dave Patterson, who is currently on the RISC-V board, is somehow being dishonest about the philosophy and approach being taken here?
eternityforest 10/23/2024||||
Having only one architecture for almost all devices seems like it would be nice although probably not a super big advantage
Manabu-eo 10/23/2024||
Obligatory xkcd: https://xkcd.com/927/

I do recognize that as a royalty-free and well-supported architecture, very flexible with all the optional extensions, it does have a better shot than others at becoming the standard architecture. But the sheer amount of closed-source software written for architectures that keep track of arithmetic flags, which would need to be emulated, is daunting.
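
To make the flags point concrete, here is a minimal sketch (my own illustration, not taken from any particular emulator or binary translator) of what has to be synthesized in software after a single add when the host ISA, like RISC-V, has no flags register:

```c
/* Emulating x86-style arithmetic flags on a flag-less ISA: a translator
 * has to compute these with extra instructions after every flag-setting op. */
#include <stdbool.h>
#include <stdint.h>

typedef struct { bool cf, zf, sf, of; } flags_t;

uint32_t add_with_flags(uint32_t a, uint32_t b, flags_t *f) {
    uint32_t r = a + b;
    f->cf = r < a;                      /* unsigned carry out   */
    f->zf = (r == 0);                   /* result is zero       */
    f->sf = (int32_t)r < 0;             /* sign bit of result   */
    f->of = ((a ^ r) & (b ^ r)) >> 31;  /* signed overflow      */
    return r;
}
```

One 32-bit add becomes an add plus several comparisons and bit operations, which is the kind of overhead the comment above is pointing at.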

panick21_ 10/23/2024|||
RISC-V isn't just the RISC-V standard. It's also a larger movement. What RISC-V critically changes is that open core designs can now be openly shared. This is something that wasn't the case before.

Universities and businesses can now cooperate much better; people can work on a project in university and commercialize it far faster.

> You're not going to be able to download a new CPU like you can a new Linux kernel. An open source CPU core is useless without a factory to manufacture it.

A huge number of well-designed FPGA cores can actually be downloaded. Not so long ago, good open FPGA cores weren't that common. Now there is a wealth of options, and you can get very high quality stuff like OpenTitan and use it yourself.

And designing your own CPU and having it manufactured, more like a PCB, isn't as crazy as it was. It used to cost many millions; now you can get it done for much less. 30 years ago, hobbyists designing their own boards and getting them within a few days wasn't a thing; now it's common. Almost every hacker conference now has its own PCB per conference. This wasn't a thing not that long ago.

Google worked with SkyWater Technology to actually open source most of their process. You can use a fully open flow and order your own custom chips.

Contract manufacturing like that barely existed in the 80s, and now it's the standard.

> A CPU is a very small part of a functional computing device. It's magical thinking to assume that just because a device was built on an open source CPU core, the overall system will somehow be more open.

Nobody has this 'magical' thinking; that's a straw man. It's about observing a longer-term trend. Before RISC-V, having even an open core wasn't much of a thing. Now that there are quality cores, people can spend their time on other things. Open implementations of different IP blocks are increasingly being designed. University research on new stuff is being done by adding something to an open core.

Companies like Tenstorrent added a cool vector ISA to an open core. CHIPS Alliance is investing in tools like Verilator, and other needed things like TileLink interconnects.

Yes, it's not a guarantee that whoever uses RISC-V makes other things open, but at the very least it doesn't hurt. And the other side of the coin is that it makes it very EASY for people who want to make things open.

> GPUs will remain black boxes only accessible via their drivers' interface.

There are already early attempts at building open RISC-V GPUs that you can interact with however you want. Sure, you're not going to get that from Nvidia, but that doesn't mean it's not valuable.

> Peripheral controllers covered by patent pools or branding licenses won't cease to be covered even if the controllers are RISC-V.

Nobody is disagreeing with that, but it's still the case that a lot of things can actually be open. A position that says 'it's bad because it's not absolutely good' is dumb.

The point is that RISC-V existing and being successful is just one building block in the idea of information and technology being free and sharable, it moves cooperation and competition to a higher level.

It has lots of practical benefits that have already been demonstrated, and RISC-V as a movement has already had a huge impact on everything from tooling to peripherals. Just look at the RISC-V Foundation, CHIPS Alliance, CORE-V, the PULP project and so on and so on.

> I don't understand what the fandom thinks they're going to get

A better world mostly, and many practical benefits along the way.

giantrobot 10/23/2024||
>> I don't understand what the fandom thinks they're going to get

> A better world mostly, and many practical benefits along the way.

That's the magical thinking. There are logical leaps required to get from the current status quo to the proposed future state. A CPU architecture and core design are orthogonal to almost every other market force in the industry.

Let me make sure I'm explicit since people tend to get very tribal when anyone says anything about their "team". I have no problem at all with RISC-V as an architecture. I do not care if my next phone or laptop has an ARM chip or RISC-V chip. As long as my laptop does laptop stuff and my phone does phone stuff the CPU instructions executing do not matter to me. I'm also writing zero assembler for any non-trivial personal or professional project. So long as compilers and toolchains exist for my system the ISA is an academic discussion for me. I have no problem with RISC-V existing or "winning".

In terms of the world being "better" with RISC-V, that's just a weird statement. The architecture doesn't offer anything actually new to the industry. There's nothing fundamental about the architecture that makes anything better. The ISA has implementation gotchas that make for problematic or complicated compiler implementations. Its extensible nature also provides a huge surface area for minor implementation specific incompatibilities. Two similar RISC-V chips may not be drop-in replacements for one another. So it's not like the overall RISC-V design is objectively better than any other ISA.

The open source nature of RISC-V is an academic improvement over closed core designs. I'm a random guy and I have the same access to most of the same compilers as Google, Amazon, or anyone else. I can compile Linux on any commodity computer I own. I don't need a clean room or expensive equipment to do software development or even deployment. If I write something you want to use the marginal cost of acquiring it for you is effectively zero. Open source software is unreasonably effective because of that trivial marginal cost of reproduction.

Unlike the Linux kernel you can't compile a CPU core and reboot and get some performance gain. You might be able to build a 4004[0] in your garage, but you're not going to be building a CPU you can drop into your laptop or phone. At least not one that would be able to run at any reasonable speed.

Open source hardware is not bad. It just doesn't solve any of the very real problems of producing hardware. It doesn't obviate the challenges or costs of developing new process nodes or chemistries. It doesn't help the marginal costs of producing hardware. If you're just buying fully finished chips it's not like you're getting a discount because the manufacturer saved some money on the core design. They'll still charge whatever the market will bear and pocket the savings.

The idea that open hardware will make the world better does not seem like a supportable statement. You're not getting a discount on an Android phone because someone patched a bug in the Linux kernel in their spare time.

[0] https://spectrum.ieee.org/the-high-school-student-whos-build...

panick21_ 10/24/2024||
> The architecture doesn't offer anything actually new to the industry.

There is more to life than technical specifications. The change in the license, the development pattern, and the business model does actually matter.

I suggest you watch some talks by Krste Asanovic, who created the RISC-V project; he explains exactly this point.

> Open source software is unreasonably effective because of that trivial marginal cost of reproduction.

Yes, open hardware isn't as good as open software; we know. Sadly we don't have a universal 3D printer. But that doesn't mean it's worthless.

> Unlike the Linux kernel you can't compile a CPU core and reboot and get some performance gain.

Have you never used a modern FPGA?

> It just doesn't solve any of the very real problems of producing hardware.

That's true if you have a narrow point of view and only consider manufacturing, but if you broaden your point of view and look at the whole value chain, it absolutely does. And you know who agrees with me? Tons of companies who have invested in RISC-V and the ecosystem.

If you don't believe me, I suggest you watch this video from Google where they explain why they are doing what they are doing:

https://www.youtube.com/watch?v=EczW2IWdnOM

You can find videos like that from other companies, including hardware companies.

> It doesn't obviate the challenges or costs of developing new process nodes or chemistries.

I didn't know that producing hardware was the same as developing new nodes. In your mind, the only thing that matters is the cost of new node development? Nothing else in the whole world matters to producing computer hardware?

> It doesn't help the marginal costs of producing hardware.

It helps the fixed cost, and the smaller your run is, the more important that is. And it actually does improve marginal cost in many cases, if you don't pay a license fee anymore. Again, go and watch the talks by Krste; he explains a lot of other points that matter around this question and why RISC-V took off with so many companies, both producers and consumers.

As I pointed out, PCBs went through a similar progression (and are still going). And this has been incredibly helpful to the whole industry.

> If you're just buying fully finished chips it's not like you're getting a discount because the manufacturer saved some money on the core design. They'll still charge whatever the market will bear and pocket the savings.

If there is a high quality core in the class you are looking for, and each manufacturer has access to that same IP, then competition will drive the IP value of that core to zero. That's basic economics. And that's exactly what groups like CORE-V are trying to do.

This exact same thing happened with software as well. It used to be that you would pay for things like a compiler, because it was an important value add. But once there is an open source compiler, you can't charge money for it anymore.

Funny how plenty of companies who buy a lot of chips, like industrial manufacturing companies, have invested in the CORE-V project. Yet you claim it provides no value. Do these people just hate money? Or are they doing it out of the goodness of their own hearts? Or do they understand something you missed? Consider watching the presentation from Thales on that topic, for example.

> The idea that open hardware will make the world better does not seem like a supportable statement. You're not getting a discount on an Android phone because someone patched a bug in the Linux kernel in their spare time.

Again, you seem to lack a basic understanding of economics. I am not getting a discount for a bug fixed in Linux because Linux is already free, so the discount has already happened. You are literally missing the whole point of open source. The whole point is that bugs are getting fixed DESPITE ME NOT PAYING ANYTHING.

As long as all competitors have access to the same code, none of them can extract value from it, but they are still 'forced' to provide it.

Open cooperation has produced a system where the improvement costs are incredibly widespread and the benefits are spread even wider, to the point where almost nobody can actually demand money for it. And thanks to this economic reality, we all benefit from this process.

The literal exact same process works for hardware designs as well. Chips, before going into manufacturing, are literally just code and configuration. The value of that can be driven down. We are just not as far along, and of course manufacturing has costs.

calf 10/24/2024||
Have any of the presenters discussed the relevance of Moore's Law to RISC-V essentially being an open standard for commodity hardware (I think Dave Patterson has argued for this, akin to USB or internet protocol standards)? As in, in the last decade (prior to LLMs, etc.) people thought hardware would be a maturing market because Moore's Law was flatlining, hence it made sense then to have an open ISA instead of ARM IP-based economics. I'm just wondering aloud here.
panick21_ 10/24/2024||
As far as I know, it's primarily David Patterson and Jim Keller who have talked about this.
calf 10/26/2024||
Thanks, I don't know about Jim Keller. Did they discuss Moore's Law specifically, or other relevant factors, or instead did they focus on the standardization argument for ISAs? edit: Ah, I see he's the "Moore's law is not dead" person.
panick21_ 10/29/2024||
Bryan Cantrill has as well, but not in relation to RISC-V (not directly at least).
wink 10/23/2024|||
> It feels like the fandom behind Linux on the home desktop

I don't agree with this comparison (or I am misjudging your intention) - 90% of the people I know who run any sort of Linux desktop (usually developers at work) only don't switch their home desktops because of games. I know we're a tiny minority (and I am typing this from a Windows machine) - but it's nothing like 10x worse in objective terms (e.g. speed).

dagmx 10/23/2024||
But that’s precisely it. You don’t switch at home for games, others don’t for other compatibility reasons either. I’m not specifically talking about games but the whole user experience.

That’s not a knock against Linux. It’s great but it’s also disingenuous when people push it as the year of Linux on the home desktop.

talldayo 10/23/2024||
> I’m not specifically talking about games but the whole user experience.

If you are capable of using an iPad as a gaming device, I literally do not understand how you wouldn't be able to use a GNOME desktop to achieve literally the exact same outcome.

Am I wrong? Getting Steam to use Proton is literally one click in the Steam settings - using an app like Bottles just has you open exes like normal. This is no worse than the Crossover Wine support from the Mac days of yore, if not more streamlined and not fighting against system integrity protection. And your fucking settings app doesn't give you a notification pip for not logging in.

panick21_ 10/23/2024||
> just saying that it’ll inevitably be used, while ignoring the hurdles in the way.

That statement makes no sense. These things aren't in conflict; saying something is 'inevitable' doesn't mean you can't see the hurdles or that you are ignoring them. It's just born out of an understanding of what the hurdles are and how, in time, they can be overcome.

> and have very little interest when it’s pointed out.

Maybe they aren't interested because they know it's a long road, and anytime there is any success it's better to just be excited for a moment and not have somebody come in with the 'but actually'. This is a totally normal social dynamic.

This concept is best addressed in The Big Lebowski: "You're not wrong, you're just an a*hole".

> For all three it comes down to Software Compatibility

And as an industry we have some understanding of how standards with open protocols work and how the situation improves. RISC-V isn't new, and we have some idea of how that process is going and how it is organized. It's reasonable to make assumptions about how this will continue.

We also have a large and strong open software community that is going to focus on these standards. Even for a very large corporation, maintaining its own standard is a huge pain in the ass.

There is a pretty good standards process with lots of people involved that has been making very good progress.

There is also real money and effort being put into improving all the upstreams. The RISE project, for example, brings together tons of major companies, universities, distros and so on. And this isn't just about RISC-V; lots of effort is also put into tools, open designs and so on.

RISC-V went from no support to being comparable to long-established ISAs in a pretty short time. It's not unreasonable to project that forward.

> and experience of use

RISC-V has been adopted by a huge number of universities; it's the default now anytime somebody upgrades those courses. Anybody doing things on an FPGA is almost certainly going to use some open RISC-V core. Some FPGA manufacturers are even pushing RISC-V as their example cores. RISC-V is also designed to be easy to learn and get into. It already has a lot of adoption all across the industry, far more and far faster than any other ISA ever.

Experience is gained by people working on projects, and there are lots of them.

> Apple makes the same mistake with desktop gaming and Microsoft with almost any physical product they release that's not running Windows.

That's different, because Apple just clearly doesn't care very much. That's a very different situation.

> "the underlying tech is built and people will flock to it when they open their eyes"

No, they are only observing that large corporations all over the world are already moving in that direction. That China and India see the advantage as well. That there are major chip makers who have made it the core of their business. That Europe is making a major investment. That a whole boatload of AI companies are adopting it. A huge number of people have already seen it, it has been growing fast for a while now, and that growth can be measured in a number of ways.

> I really do hope that more open platforms and technologies happen, but I feel like the people who unabashedly push them without acknowledging how it needs to happen are doing them a disservice.

I feel like those people mostly exist in your imagination. When speaking in support of something you are not required to follow it up with a 50-page development plan. You can just like something and be optimistic and that is totally fine, it doesn't mean that person is naive or unaware.

If you have an actual counter-argument for why optimism isn't warranted, then you can say that. Some people believe RISC-V is badly designed, for example. Some people believe fragmentation will kill it. But to just outright state 'people who are optimistic are a problem' is a silly position.

dagmx 10/23/2024||
And in this long rant, you basically just did every single thing that I mentioned.

You focused on the academic and niche use cases to try and counter an argument about standard user use cases. Something Linux hasn’t solved precisely because of people like yourself who keep talking about the things a regular user does not care about.

It’s also cute that in doing this, you have to resort to name calling to feel a sense of intellectual superiority.

panick21_ 10/23/2024||
Wow, what an attitude to have. Sorry if I have this totally crazy belief that 'academic', 'science', 'education' and many other 'niches' matter. Sorry that it matters to me that my friends made a PCB with a RISC-V chip on it and we had fun with it. My bad. I'm sorry, I have less than zero time for people with an attitude like yours. Have fun eating at McDonald's.

And btw it's fucking hilarious to call the OS that literally runs on almost every device in the world 'niche'.

> It’s also cute that in doing this, you have to resort to name calling to feel a sense of intellectual superiority.

I was not name calling, I was explaining a concept.

And given this second comment of yours, now I think the concept actually applies.

dagmx 10/23/2024||
With all due respect, nobody but you cares that you had fun doing it, and nor should they. Nobody is saying you can't. Literally, it's not even part of the discussion.

None of that is relevant to the discussion point at hand which is mass adoption of a new arch and what stands in the way.

And again, the intended derision of saying "Have fun eating at McDonalds"? WTF is that even meant to mean. I'll stop responding to you because I think you are incredibly hostile.

rkagerer 10/23/2024||
The article mentions out-of-order execution capabilities.

How desirable is this vs. complexity introduced, and can similar benefits more cleanly be achieved at the compiler or software architecture level?

cmpxchg8b 10/23/2024||
Ask the Itanium team how putting faith in software optimizations to overcome hardware issues saved their bacon.
Arnavion 10/23/2024||
The point of out-of-order execution is effectively to be able to run parallel computations that don't use the same underlying hardware, eg you can run an integer operation and a floating-point operation in parallel because they use different functional units, or even two integer operations in parallel if you have redundant integer functional units.

Branch prediction / speculative execution then takes that further by making an assumption about which arm of the branch will be taken and executing the instructions in that arm too. When the branch instruction completes fully and the prediction turns out to be correct, the processor continues as normal. If the prediction turns out to be wrong, the processor throws away the pending changes from the incorrectly speculated instructions and starts again from the correct branch arm.

An in-order CPU waits for each instruction to finish before it processes the next one. There's no way to work around that in software. At best the hardware can insert latches into the ALU etc so that instructions take the smallest number of clock cycles that they need (eg multiply takes six cycles but add takes only one).

The alternative is VLIW ISAs that rely on the compiler to encode multiple instructions that can execute in parallel into a single instruction. That didn't work out well in the past for general computing as demonstrated by Itanium etc, though it's still used for restricted domains like DSPs.
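
A small, hypothetical C example of the instruction-level parallelism being described (names and structure are mine, not from the thread): in sum_dependent() every add needs the previous result, so throughput is limited by the adder's latency no matter how wide or out-of-order the core is; in sum_independent() the four accumulators form independent chains that hardware able to keep several adds in flight (pipelined, superscalar, or out-of-order) can overlap. Exposing the parallelism in source like this is also roughly what you have to do by hand, or hope the compiler does, on a simple in-order core.

```c
/* Same work, different amounts of exposed instruction-level parallelism. */
#include <stddef.h>

double sum_dependent(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i];                  /* each add waits on the previous one */
    return s;
}

double sum_independent(const double *a, size_t n) {
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {    /* four independent accumulator chains */
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)              /* leftover elements */
        s0 += a[i];
    return (s0 + s1) + (s2 + s3);
}
```

(The two functions can give slightly different floating-point results because the additions are reassociated, which is also why compilers only do this transformation themselves under flags like -ffast-math.)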

talldayo 10/23/2024||
> Today, ARM's success has contributed to Intel's market plateau

I don't buy this?

ARM is applied in embedded devices or mobile hardware, neither of which Intel really "owned" in the first place. x86 is still the first-class datacenter option and I frankly don't see ARM taking it over on the desktop or server. The people with architectural licenses aren't interested in competing, which leaves Apple as really the only flexible customer, and we all know Apple was going to replace Intel eventually. So... Intel persists. And with DXVK running fine on RISC-V, I'm well within my right to say it's faster than I think: https://youtu.be/5UMUEM0gd34

Honestly AMD has contributed more to Intel's demise than ARM ever did. When I read blog posts like this I really wonder how proximal the author actually is to the industry - it's an assertion that lacks evidence.

xyst 10/23/2024||
The thinly veiled threats of switching to RISC-V are unfounded and a mere bluff. ARM can get Qualcomm to bend easily enough; I just wonder if they have the stomach and wallet to do so.
the_jeremy 10/23/2024|
> I haven't see any single threaded scores about 150 and no multi-threaded scores higher than 1500.

s/about/above, s/see/seen

bhouston 10/23/2024|
Thx! fixed.