Discussion Speculation: The Rise of RISC-V

Heartbreaker

Diamond Member
Apr 3, 2006
4,226
5,228
136
RISC-V gets mentioned more these days as a contingency in case ARM gets unruly, with Nvidia in the process of buying them.

But we seldom see interesting RISC-V implementations. There are, however, some extremely impressive perf/watt claims about a new RISC-V design from Micro Magic:


[Chart: CoreMark efficiency comparison]



So are we at the beginning of the rise of RISC-V?
 

Hitman928

Diamond Member
Apr 15, 2012
5,244
7,793
136
Maybe, but this test doesn't tell us much. The benchmark they use is tiny, designed to scale down to very basic microcontrollers (like 8 bit controllers) and we have no info on what the Micro Magic processor specs are. If AMD/Intel/ARM wanted to make a tiny CPU where the only metric they cared about was efficiency (giving up actual performance and functionality to get it), this chart would look very different and the Micro Magic processor wouldn't look nearly as good. But the CPUs they are using as comparison are a completely different class of processors.
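To see how the metric can mislead across processor classes, here is a toy calculation with purely hypothetical numbers (none of these figures come from the chart):

```python
# Hypothetical numbers to show why CoreMarks/Watt alone can mislead:
# a tiny microcontroller-class core vs. a big application core.
tiny_core = {"coremarks": 500, "watts": 0.01}    # minimal in-order core (made up)
big_core = {"coremarks": 30_000, "watts": 3.8}   # big application core (made up)

def cm_per_watt(core):
    return core["coremarks"] / core["watts"]

print(f"tiny: {cm_per_watt(tiny_core):,.0f} CM/W")   # 50,000 CM/W
print(f"big:  {cm_per_watt(big_core):,.0f} CM/W")    # ~7,895 CM/W
# The tiny core "wins" on efficiency while delivering 60x less performance.
print(f"perf ratio: {big_core['coremarks'] / tiny_core['coremarks']:.0f}x")
```

The efficiency chart says nothing about how much absolute performance was given up to get there.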
 

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
Andrei already mentioned that their Apple estimations are orders of magnitude off (from another article, but both speak of that Micro Magic chip):

I'm not sure how Andy Huang got the M1 to only score 10,000 CoreMarks, when it actually gives 30,000 in multiple tests, and I'm not sure why they'd estimate 15W instead of actually measuring the M1's power consumption. Seems like a very stupid off-the-cuff remark by Huang that is remarkably simple to disprove.

The Ars Technica chart in the first post seems accurate, as we'd expect, with the M1 getting 10,974 CoreMarks/Watt, which is similar to Andrei's findings of ~9,558.
 

Hitman928

Diamond Member
Apr 15, 2012
5,244
7,793
136
Andrei already mentioned that their Apple estimations are orders of magnitude off (from another article, but both speak of that Micro Magic chip):


Just to be clear, Andrei is referring to Micro Magic's Apple estimations, not to what is shown in the Ars article; what the Ars article shows is fairly accurate for the M1, at least when the big cores are engaged.
 

Hitman928

Diamond Member
Apr 15, 2012
5,244
7,793
136
I'm not sure how Andy Huang got the M1 to only score 10,000 CoreMarks, when it actually gives 30,000 in multiple tests, and I'm not sure why they'd estimate 15W instead of actually measuring the M1's power consumption. Seems like a very stupid off-the-cuff remark by Huang that is remarkably simple to disprove.

The Ars Technica chart in the first post seems accurate, as we'd expect, with the M1 getting 10,974 CoreMarks/Watt, which is similar to Andrei's findings of ~9,558.

The whole thing is a publicity stunt with very little real info or relevance.
 

gdansk

Platinum Member
Feb 8, 2011
2,078
2,559
136
It seems to me that if an easy-to-decode ISA actually matters, then designers should prefer the simple, extensible RISC-V over ARMv8 for ease-of-design and cost reasons. On the other hand, how good are the RISC-V backends in mainstream compilers? There are so many extensions that they must be targeting a specific set, like RV64GC.
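As a rough sketch of what a compiler target like RV64GC unpacks to (the G expansion follows the RISC-V spec; the helper function itself is just illustrative):

```python
# Expand a RISC-V -march string like "rv64gc" into its base + extensions.
# Per the RISC-V spec, "G" is shorthand for IMAFD plus Zicsr/Zifencei,
# and "C" adds the compressed-instruction extension.
G_EXPANSION = ["I", "M", "A", "F", "D", "Zicsr", "Zifencei"]

def expand_march(march: str) -> tuple[str, list[str]]:
    march = march.lower()
    assert march.startswith("rv"), "expected an rv32/rv64 march string"
    base = march[:4]               # e.g. "rv64"
    exts: list[str] = []
    for ch in march[4:]:
        if ch == "g":
            exts.extend(G_EXPANSION)
        else:
            exts.append(ch.upper())
    return base, exts

base, exts = expand_march("rv64gc")
print(base, exts)
# rv64 ['I', 'M', 'A', 'F', 'D', 'Zicsr', 'Zifencei', 'C']
```

So "targeting RV64GC" really means committing to that whole fixed bundle of extensions, which is exactly why compilers pick it as a baseline.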
 

Doug S

Platinum Member
Feb 8, 2020
2,252
3,483
136
I'm not sure how Andy Huang got the M1 to only score 10,000 CoreMarks, when it actually gives 30,000 in multiple tests, and I'm not sure why they'd estimate 15W instead of actually measuring the M1's power consumption. Seems like a very stupid off-the-cuff remark by Huang that is remarkably simple to disprove.

The Ars Technica chart in the first post seems accurate, as we'd expect, with the M1 getting 10,974 CoreMarks/Watt, which is similar to Andrei's findings of ~9,558.

Personally when I see such wildly unfavorable claims I assume the person making them is selling snake oil. If the product he was selling (if it ever even sees the light of day) was good, he'd be able to make honest comparisons.

This isn't something slightly shady like "well let's use the Macbook Air and run it in a hot room and run the test a bunch of times in a row before taking our measurement so we get some throttling and hope we knock down the M1's numbers by 30-40%". This is outright lying by a couple orders of magnitude. So I will assume everything else he says is a lie equally as big, and will predict this product never sees the light of day. If I had to guess, they're hoping someone will be fooled and buy/bail them out.

Even if this were honest, measuring a product that supposedly scales to 5 GHz while running it at 3 GHz is disingenuous. Every CPU has much better perf/watt running at a 40% lower clock rate. So measure the M1 at 2 GHz and see how it does then...
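The clock-rate point follows from the usual first-order dynamic-power model, P ∝ f·V²: a lower clock permits a lower voltage, and the V² term dominates. The scaling factors below are illustrative assumptions, not measured data for any of these chips:

```python
# First-order CMOS dynamic power model: P ~ f * V^2.
# Assume (hypothetically) that dropping the clock 40% allows ~20% lower voltage.
def perf_per_watt(freq_ghz: float, volts: float, perf_per_ghz: float = 1.0) -> float:
    perf = perf_per_ghz * freq_ghz      # performance scales roughly with clock
    power = freq_ghz * volts**2         # dynamic power, arbitrary units
    return perf / power

high = perf_per_watt(5.0, 1.0)   # full clock at nominal voltage
low = perf_per_watt(3.0, 0.8)    # 40% lower clock, 20% lower voltage
print(f"efficiency gain at the lower operating point: {low / high:.2f}x")
# perf/watt reduces to 1/V^2 in this model, so 1/0.64 ~ 1.56x better
```

Under this toy model, *any* core picks up roughly a 1.5x perf/watt "win" just by being benchmarked well below its peak clock, which is why comparing a down-clocked part against competitors at full tilt is meaningless.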
 

Hitman928

Diamond Member
Apr 15, 2012
5,244
7,793
136
Personally when I see such wildly unfavorable claims I assume the person making them is selling snake oil. If the product he was selling (if it ever even sees the light of day) was good, he'd be able to make honest comparisons.

This isn't something slightly shady like "well let's use the Macbook Air and run it in a hot room and run the test a bunch of times in a row before taking our measurement so we get some throttling and hope we knock down the M1's numbers by 30-40%". This is outright lying by a couple orders of magnitude. So I will assume everything else he says is a lie equally as big, and will predict this product never sees the light of day. If I had to guess, they're hoping someone will be fooled and buy/bail them out.

Even if this were honest, measuring a product that supposedly scales to 5 GHz while running it at 3 GHz is disingenuous. Every CPU has much better perf/watt running at a 40% lower clock rate. So measure the M1 at 2 GHz and see how it does then...

They're not even trying to sell this product from what I can tell; they're trying to license their IP. This was all just for show, to get people to believe how good their IP is. I can't fault them too much, I guess; it worked. I am very disappointed that the tech media pretty much regurgitated it instead of criticizing the lack of real info, the bad data, the irrelevant comparisons... pretty much everything about it.
 

Doug S

Platinum Member
Feb 8, 2020
2,252
3,483
136
They're not even trying to sell this product from what I can tell; they're trying to license their IP. This was all just for show, to get people to believe how good their IP is. I can't fault them too much, I guess; it worked. I am very disappointed that the tech media pretty much regurgitated it instead of criticizing the lack of real info, the bad data, the irrelevant comparisons... pretty much everything about it.

Well you may be disappointed, but I hope you're not surprised...
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,226
5,228
136
Personally when I see such wildly unfavorable claims I assume the person making them is selling snake oil. If the product he was selling (if it ever even sees the light of day) was good, he'd be able to make honest comparisons.

Yes, I ignored the one-sided claims from Micro Magic themselves.

The Ars Technica story reports its own scores for the Apple and AMD cores.

Though they are still relying on numbers supplied by Micro Magic for that side of the comparison.

There are a great many question marks, but this still could be something worth keeping an eye on.
 

name99

Senior member
Sep 11, 2010
404
303
136
RISC-V gets mentioned more these days as a contingency in case ARM gets unruly, with Nvidia in the process of buying them.

But we seldom see interesting RISC-V implementations. There are, however, some extremely impressive perf/watt claims about a new RISC-V design from Micro Magic:


[Chart: CoreMark efficiency comparison]



So are we at the beginning of the rise of RISC-V?
No.
If you rely on a BS benchmark, which you then cannot even calculate close to correctly, you have UTTERLY squandered your credibility. (As has EE Times, not that they had much to begin with.)

Next question.
 

soresu

Platinum Member
Dec 19, 2014
2,657
1,858
136
RISC-V gets mentioned more these days as a contingency in case ARM gets unruly, with Nvidia in the process of buying them.

But we seldom see interesting RISC-V implementations.
I'm still waiting for the more powerful IIT Madras Shakti cores to show some sign of life.
 

SarahKerrigan

Senior member
Oct 12, 2014
360
513
136
EE Times said:
“Using the EEMBC benchmark, we get 55,000 CoreMarks per Watt. The M1 chip is roughly the equivalent of 10,000 CoreMarks in EEMBC terms; divide this by eight cores and 15W total, and that is less than 100 CoreMarks per Watt.” Going on to make a comparison with Arm, he added, “The fastest Arm processor under EEMBC benchmarks is the Cortex-A9 (quad-core), with a figure of 22,343 CoreMarks. Divide this by four cores and 5W per core, and you get 1,112 CoreMarks per Watt.”

We're supposed to believe that a quad-core Cortex-A9 is over twice as fast as M1? And, for that matter, that a quad A9 is a 20W chip? Also, how do they get 100 CM/W from 10000 CM @ 15W?

What am I missing? I work in semi and I'm naturally really suspicious of this kind of claim...
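Redoing the arithmetic with nothing but the numbers in the quote shows it doesn't hold together:

```python
# Sanity-check the EE Times figures using only the numbers in the quote.
m1_coremarks, m1_watts = 10_000, 15
a9_coremarks, a9_cores, a9_watts_per_core = 22_343, 4, 5

m1_cm_per_w = m1_coremarks / m1_watts
print(f"M1: {m1_cm_per_w:.0f} CM/W")        # ~667, not "less than 100"

# The quote divides by eight cores AND by the full 15 W package power,
# which double-counts the core count:
print(f"quote's math: {m1_coremarks / 8 / m1_watts:.0f} CM/W")  # ~83

a9_cm_per_w = a9_coremarks / (a9_cores * a9_watts_per_core)
print(f"quad A9: {a9_cm_per_w:.0f} CM/W")   # ~1117, close to the quoted 1,112
```

So the "less than 100 CoreMarks per Watt" figure comes from dividing by the core count twice; even taking their own (already dubious) 10,000 CM and 15 W at face value, the M1 lands around 667 CM/W.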
 

DrMrLordX

Lifer
Apr 27, 2000
21,620
10,829
136
On the other hand, how good are the RISC-V backends on mainstream compilers? There are so many extensions they must be targeting a specific set of extensions like RV64GC.

Gotta wonder, eh? I am still waiting for someone to actually start selling a RISC-V CPU/SoC that supports most of, or all, the known extensions (with the exception of -P, I think, which has been at least partially deprecated in favor of -V?).
 

soresu

Platinum Member
Dec 19, 2014
2,657
1,858
136
Gotta wonder, eh? I am still waiting for someone to actually start selling a RISC-V CPU/SoC that supports most of/all the known extensions (with the exception of I think -P, which has been at least partially deprecated in favor of -V?).
Aren't the major extensions still in a pre-v1.0 state?

This is something that makes me highly dubious of RISC-V at the moment.

It's all well and good for them to say that future revisions of the extension standards will be backwards compatible, but that sounds like they are setting themselves up for a world of hurt and bloat.
 

DrMrLordX

Lifer
Apr 27, 2000
21,620
10,829
136
Aren't the major extensions still in a pre-v1.0 state?

This is something that makes me highly dubious of RISC-V at the moment.

It's all well and good for them to say that future revisions of the extension standards will be backwards compatible, but that sounds like they are setting themselves up for a world of hurt and bloat.

I suspect the same. There are several extensions that are not "frozen" yet, and it is possible that the extensions will actually change over time, which means older hardware won't support them and newer hardware may not be backwards-compatible.

From a laptop/desktop/workstation/server point of view, it would make more sense if they finalized groups of extensions and then required people to support all of them (or as many as possible; the 128-bit extensions are pretty niche) in order to carry the RISC-V badge. Then, when new extensions are needed, they can be held together and the RISC-V standard updated in groups of extensions. That approach makes less sense for microcontrollers, where it's nice to be able to trim off unneeded extensions to save on implementation costs.
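That grouping idea amounts to a simple set-inclusion check: define the extension bundle a badge requires and test hardware against it. The profile names and contents below are hypothetical illustrations, not official RISC-V profiles:

```python
# Hypothetical "profile" bundles of extensions a platform badge could require.
PROFILES = {
    "embedded-min": {"I", "C"},
    "general-64":   {"I", "M", "A", "F", "D", "C", "Zicsr", "Zifencei"},
}

def supports_profile(hw_extensions: set[str], profile: str) -> bool:
    # A device qualifies if it implements every extension the profile requires.
    return PROFILES[profile] <= hw_extensions

board = {"I", "M", "A", "C", "Zicsr"}           # a made-up microcontroller
print(supports_profile(board, "embedded-min"))  # True
print(supports_profile(board, "general-64"))    # False: missing F, D, Zifencei
```

Software could then target a profile name instead of enumerating individual extensions, which is the whole appeal for the laptop/desktop/server case.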
 

DisEnchantment

Golden Member
Mar 3, 2017
1,601
5,780
136
Aren't the major extensions still in a pre-v1.0 state?

This is something that makes me highly dubious of RISC-V at the moment.

It's all well and good for them to say that future revisions of the extension standards will be backwards compatible, but that sounds like they are setting themselves up for a world of hurt and bloat.

Irrespective of how that is happening, RISC-V is growing from the bottom up; 2 billion SoCs shipped is a good number.

Linux has a bunch of new commits from Huawei to support the architecture.

For automotive, SiFive and ... Renesas are doing RISC-V. I hope at some point they drop the V850 and SuperH relic architectures.

Then Seagate even has a fully open-source RTL RISC-V implementation used in its latest SSD devices.

Waiting for Infineon. My contacts already said they have some designs; I am curious whether they are also replacing the Cypress Cortex-R designs with RISC-V.

Then Microchip. They have some ACAP-style FPGAs using RISC-V.

I don't know when they will hold Embedded World this year, but I will look out for RISC-V when I go.
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,686
1,221
136
IST/CAS has XiangShan Nanhu(South Lake) architecture lineup: https://github.com/OpenXiangShan/XiangShan
~1.3 GHz at TSMC 28nm (now-ish tapeout)
~2.0 GHz at SMIC 14nm (end of the year-ish tapeout)
[Diagram: XiangShan RISC-V architecture]

Even though it is only RV64GC, it looks pretty high-performance for what it is. RV64GC is the general base specification for any implementation:
"Debian port uses RV64GC as the hardware baseline"
"Fedora/RISC-V, aims to provide a complete Fedora experience on the RISC-V (64 bit, RV64GC) architecture."
"Ubuntu provides the riscv64 architecture for the RISC-V platform since the release of Ubuntu 20.04 LTS."

So, I guess expect high-perf desktop between 2022-2024.
 