RISC-V Latest Developments Discussion [No Politics]


DisEnchantment

Golden Member
Mar 3, 2017
1,747
6,598
136
Some background on my experience with RISC-V...
Five years ago we were developing a CI/CD pipeline for an arm64 SoC in the cloud, and we added tests to execute the binaries there as well.
We actually used real HW instances with an ARM server chip of that era; unfortunately the vendor quickly dropped us and exited the market, leaving us with a fair amount of frustration.
We shifted the work to QEMU, which should have been as good as the actual chips, but full-system emulation was buggy and slow. In the end we landed on qemu-user-static docker images, which worked quite well for us. We ran the arm64 Ubuntu cloud images of the time before moving on to docker multi-arch QEMU images.

Lately we have been approached by many vendors with upcoming RISC-V chips, and out of curiosity I revisited the topic above.
To my pleasant surprise, running RISC-V under QEMU is smooth as butter. Emulation is fast, and images from Debian, Ubuntu and Fedora are available out of the box.
I ran Ubuntu cloud images problem-free. Granted, it was headless, but with the likes of Imagination Tech offering up their IP for integration, it is only a matter of time.

What is even more interesting is that Yocto/OpenEmbedded already has a meta layer for RISC-V, and apparently T-Head has already got the kernel packages and manifest for Android 10 working on RISC-V.
Very impressive for an ISA in such a short span of time. What's more, I see active LLVM, GCC and kernel development happening.

From the latest conferences I saw this slide; I can't help but think it looks like they are eating somebody's lunch, starting from MCUs and moving up to application processors.

And based on many developments around the world, this trend seems to be accelerating greatly.
Many high-profile national and multinational projects (e.g. the EU's EPI) built around RISC-V are popping up left and right.
Intel is now a Premier member of RISC-V International, alongside the likes of Google, Alibaba, Huawei, etc.
NVIDIA, and it seems soon AMD, are using RISC-V cores in their GPUs. Xilinx, Infineon, Siemens, Microchip, ST, ADI, Renesas, etc. already have products in the pipeline or launched.
It is only a matter of time before all these companies start replacing their proprietary architectures with something from RISC-V. Tooling (compiler, debugger, OS, etc.) is taken care of by the community.
Also interesting: there are lots of performant RISC-V implementations on GitHub as well, e.g. the XuanTie C910 from T-Head/Alibaba, SweRV from WD, and many more.
The embedded industry has already replaced a ton of traditional MCUs with RISC-V ones. The AI-tailored CPUs from Jim Keller's Tenstorrent also seem to be in the spotlight.

Most importantly, a bunch of specs got ratified at the end of last year, mainly accelerated by developments around the world. Interesting times.
 
Jul 27, 2020
19,613
13,476
146
I know one person there.
They need a non-engineer there to help them think outside the box, a Steve Jobs type of personality. They will either get so annoyed that they quit, or they will go, "Ahhhh, I wish I had thought of that!"

And I nominate myself as that type of non-engineer personality. I only do remote work and I communicate by WhatsApp (though I will join Teams meetings now and then).
 

camel-cdr

Junior Member
Feb 23, 2024
20
65
51
Why are they using SPEC2006 instead of the more relevant SPEC2017?
I think it's because that's what the other RISC-V vendors use, as soon as one of them switches to SPEC2017 the others will likely follow.
Keep in mind that Tenstorrent for Ascalon and SiFive for the P870 both report >18 SPECint2006/GHz, but Ascalon has 8-wide decode while the P870 is 6-wide (and Ascalon has VLEN=256 vs VLEN=128 on the P870).

Also note that while the website says >12 SPECint2006/GHz for the P670, their slides say >12.6/GHz. Given that the website says the P870 is a 50% uplift, that would put the P870 at 18.9/GHz. Obviously something could be lost in rounding, but I think Ascalon is very likely to end up above 19/GHz in the end.
 

Nothingness

Diamond Member
Jul 3, 2013
3,029
1,971
136

NostaSeronx

Diamond Member
Sep 18, 2011
3,704
1,230
136
I wonder why some RISC-V houses insist on using SPEC 2006. That doesn't inspire confidence when all other players are using SPEC 2017 (and next SPEC shouldn't be too far away).
Under RISE, the SPEC2017 benchmark is not yet fully optimized with GCC/LLVM. So assume that they will all move to SPEC2017 once generic RVA23 optimization is cleared by RISE.
 

Doug S

Platinum Member
Feb 8, 2020
2,700
4,581
136
Can't say. SPEC made a call for submissions in 2022: https://www.spec.org/cpuv8/

SPEC began the search for its 6th benchmark generation in late 2008, with submissions ending in 2010, but we didn't see SPEC2017 until June 2017.

However, since this is a call for the 8th generation, it isn't clear whether they consider SPEC2017 to be their 6th or 7th generation, or what happened to make SPEC2017 take so many years to release after the call for submissions began in 2008.

At any rate, I wouldn't hold my breath for an updated SPEC benchmark. Apple Silicon 'M' SoCs might be in the double digits by the time it ships, if the 2008 call for submissions is anything to go by!
 

DisEnchantment

Golden Member
Mar 3, 2017
1,747
6,598
136
XiangShan at HC24


KMH using RVA23 at 3 GHz on a "7nm"-class node. Fairly modern feature set.

Targets acceptable performance

I wondered where this 5nm chip was taped out



Their roadmap is very aggressive: a yearly tapeout of new versions

KMH overview


Quite awesome for a university to run a program that can deliver such performance yearly, with two dev teams working in parallel. And all of it open source for all to see.
Arguably the highest-performing open-source CPU implementation.
Now we need to see KMH in the wild. It looks like they had some struggles getting the chip from RTL to tapeout on their 7nm process, but they still have 4 months left in the year.
 

DZero

Member
Jun 20, 2024
85
47
51
Saw this:


I am thinking... seeing that a quad-core at 1.5 GHz was capable of running Witcher 3 at 15 FPS unoptimized, what if we saw a hexa-core with a decent GPU running at 2.0 GHz with optimized games...

Why do I bring this up? I know it might be hella expensive, but in the long term it would be ideal for a certain console maker that wants games that can't be pirated as easily as happens now.
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,704
1,230
136
I am thinking... seeing that a quad-core at 1.5 GHz was capable of running Witcher 3 at 15 FPS unoptimized, what if we saw a hexa-core with a decent GPU running at 2.0 GHz with optimized games...

Why do I bring this up? I know it might be hella expensive, but in the long term it would be ideal for a certain console maker that wants games that can't be pirated as easily as happens now.
SOPHGO Mango = SG2042, which is a 64-core chip @ 2 GHz.
"The Pioneer features the SG2042 processor, which boasts 64 cores with a clock frequency of 2 GHz."

C910/C920 = 3-wide decode, 2 ALUs, 2 FP/Vec, 1 BR, 1 LD/ST AGU

The SG2380 should have 16x P670 cores, which should wreck the above in gaming.
 

gdansk

Platinum Member
Feb 8, 2011
2,836
4,218
136
Well... Nintendo might have a golden chance to develop an in-house chip capable of not being easily pirate-able.
I don't think an ISA is a form of security. Maybe a few special instructions and a platform security processor, but those exist for every ISA.
What Microsoft has done with the Xbox operating system (and only having irrelevant games) nearly solved piracy.
 

soresu

Diamond Member
Dec 19, 2014
3,190
2,463
136
I don't think an ISA is a form of security. Maybe a few special instructions and a platform security processor, but those exist for every ISA.
What Microsoft has done with the Xbox operating system (and only having irrelevant games) nearly solved piracy.
IIRC Arm has been working on something security related called CHERI (a Cambridge/SRI research architecture; Arm's prototype implementation is Morello) that may be adapted for other ISAs like RISC-V.
 

Nothingness

Diamond Member
Jul 3, 2013
3,029
1,971
136
Which ones are those and how has ARM corrected them? Does x86 have them?
I think I already made lists about the shortcomings.

For instance, the lack of register-offset and shifted-register addressing modes. Arm and x86 have had those since the beginning, as they match the pointer accesses used in HLLs.

If that's found to be too timing-critical, then split it. But when you don't have it, you're bound to emit several instructions to get the same behavior, which kills code density and forces you to fuse instructions to reach higher performance levels. That is such a failure that some companies have started adding it as a private extension, and IIRC even RISC-V has an extension that partially alleviates the problem.

The two points Eric highlights I don't think I ever talked about, in particular the vector extension. But what he says is really funny: on one hand you have an architecture that does as much as possible to avoid requiring any instruction splitting, and then you have a vector extension that can't be implemented without splitting all over the place.

I don't agree with Eric about SVE, but I won't comment on that point beyond saying that if you target a fixed vector length you're obviously at least as fast as the same instructions targeting a variable VL. One of the main points of SVE has always been to ease autovectorization.

As for predicated instructions not playing well with branch prediction, I'm not sure what he means.