Discussion RISC V Latest Developments Discussion [No Politics]

Page 10 - AnandTech Forums

DisEnchantment

Golden Member
Mar 3, 2017
1,777
6,791
136
Some background on my experience with RISC V...
Five years ago, we were developing a CI/CD pipeline for an arm64 SoC in some cloud, and we added tests to execute the binaries there as well.
We actually used some real HW instances with an ARM server chip of that era; unfortunately the vendor quickly dumped us, exited the market, and left us with some amount of frustration.
We shifted the work to QEMU, which in principle is as good as the actual chips themselves, but full-system emulation was buggy and slow, and in the end we ended up with qemu-user-static Docker images, which work quite well for us. We were running the arm64 Ubuntu cloud images of the time before moving on to Docker multi-arch QEMU images.
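For anyone curious what that setup looks like, the gist is registering qemu-user-static binfmt handlers once on the host and then running foreign-arch images directly. This is a sketch, not the poster's actual pipeline; the helper function and image tags are illustrative:

```shell
# Illustrative helper: map a Docker --platform string to the
# qemu-user-static binary that binfmt_misc needs registered for it.
qemu_for_platform() {
  case "$1" in
    linux/riscv64) echo qemu-riscv64-static ;;
    linux/arm64)   echo qemu-aarch64-static ;;
    *) echo "unsupported platform: $1" >&2; return 1 ;;
  esac
}

# One-time host setup (requires Docker; the multiarch image registers
# binfmt handlers for all foreign architectures):
#   docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
#
# After that, foreign-architecture images run transparently, e.g.:
#   docker run --rm --platform linux/riscv64 riscv64/ubuntu uname -m
```

The `--privileged` run only needs to happen once per boot; after that, any riscv64 or arm64 container executes through user-mode emulation with no further ceremony.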

Lately we have been approached by many vendors with upcoming RISC-V chips, and out of curiosity I revisited the topic above.
To my pleasant surprise, running RISC-V QEMU is smooth as butter. Emulation is fast, and images from Debian, Ubuntu and Fedora are available out of the box.
I was running Ubuntu cloud images problem-free. Granted, it was headless, but I guess with the likes of Imagination Tech offering up their IP for integration, it is only a matter of time.

What is even more interesting is that Yocto/OpenEmbedded already has a meta layer for RISC-V, and apparently T-Head has already got the kernel packages and manifest for Android 10 working on RISC-V.
Very, very impressive for a CPU architecture in such a short span of time. What's more, I see active LLVM, GCC and kernel development happening.
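For reference, pointing a Yocto build at that layer is only a couple of configuration lines. This is a sketch; the layer path and the qemuriscv64 machine name are assumptions based on the upstream meta-riscv layer, not details from the post:

```
# conf/bblayers.conf -- register the community RISC-V BSP layer
BBLAYERS += "/path/to/meta-riscv"

# conf/local.conf -- build for the QEMU RISC-V 64-bit machine
MACHINE = "qemuriscv64"
```

With that in place, something like `bitbake core-image-minimal` should produce an image bootable under `runqemu`.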

From the latest conferences I saw this slide, and I can't help but think that it looks like they are eating somebody's lunch, starting from MCUs and moving up to application processors.
1652093521458.png

And based on many developments around the world, this trend seems to be accelerating greatly.
Many high-profile national and multinational projects with RISC-V (e.g. the EU's EPI) are popping up left and right.
Intel is now a premier member of the consortium, alongside the likes of Google, Alibaba, Huawei, etc.
Nvidia, and soon AMD it seems, are doing RISC-V in their GPUs. Xilinx, Infineon, Siemens, Microchip, ST, AD, Renesas, etc. already have products in the pipeline or launched.
It is only a matter of time before all these companies start replacing their proprietary architectures with something from RISC-V. Tool support (compilers, debuggers, OSes, etc.) is taken care of by the community.
Interesting as well is that there are lots of performant implementations of RISC-V on GitHub: the XuanTie C910 from T-Head/Alibaba, SweRV from WD, and many more.
The embedded industry has already replaced a ton of traditional MCUs with RISC-V ones. AI-tailored CPUs from Jim Keller's Tenstorrent also seem to be in the spotlight.

Most importantly a bunch of specs got ratified end of last year, mainly accelerated by developments around the world. Interesting times.
 

naukkis

Golden Member
Jun 5, 2002
1,020
853
136
Wouldn't emulation across profiles enable a "sort of" compatibility, using something like Qemu or even aiding emulation of profiles in hardware?
It's upward compatible: lower feature levels are executable on higher ones. Going the other way, it's possible for the kernel to trap unsupported instructions and emulate them, as Arm recommends for CPUs that lack, for example, DIV. But it's the forward path that is important: code built for a minimal arch will be future-proof and executable on any CPU with at least the same feature-level support.
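As a sketch of the trap-and-emulate idea: the kernel's illegal-instruction handler can decode the faulting instruction word and perform the operation against the saved register file. This is a hypothetical handler body for RISC-V DIV (M extension) on a core that lacks it, not code from any real kernel:

```c
#include <stdint.h>

/* Emulate a RISC-V DIV instruction (R-type: opcode=OP 0x33, funct3=100,
 * funct7=0000001) against a saved integer register file, the way an
 * illegal-instruction trap handler might on a core without the M
 * extension. Returns 0 if handled, -1 if the word is not a DIV. */
static int emulate_div(uint32_t insn, int64_t regs[32])
{
    uint32_t opcode = insn & 0x7f;          /* bits  6:0  */
    uint32_t rd     = (insn >> 7)  & 0x1f;  /* bits 11:7  */
    uint32_t funct3 = (insn >> 12) & 0x07;  /* bits 14:12 */
    uint32_t rs1    = (insn >> 15) & 0x1f;  /* bits 19:15 */
    uint32_t rs2    = (insn >> 20) & 0x1f;  /* bits 24:20 */
    uint32_t funct7 = (insn >> 25) & 0x7f;  /* bits 31:25 */

    if (opcode != 0x33 || funct3 != 0x4 || funct7 != 0x01)
        return -1;                  /* not DIV: leave it for another handler */

    int64_t a = regs[rs1], b = regs[rs2];
    int64_t q;
    if (b == 0)
        q = -1;                     /* RISC-V: divide by zero yields all ones */
    else if (a == INT64_MIN && b == -1)
        q = INT64_MIN;              /* overflow is defined, never trapped */
    else
        q = a / b;
    if (rd != 0)
        regs[rd] = q;               /* x0 stays hardwired to zero */
    return 0;
}
```

The real kernel path additionally reads the faulting word from the trap frame and advances the saved PC past the emulated instruction; the decode-and-execute core is the part shown here.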
 
  • Like
Reactions: igor_kavinski

naukkis

Golden Member
Jun 5, 2002
1,020
853
136
I also don't consider excessive simplicity as the way to go.
Keep it simple, stupid is a pretty valid design principle. Complex things are hardware-dependent and will probably go wrong sooner or later. A good example is Arm's load register pair, which tanks performance if used on later Apple CPUs. Without that instruction, the problem wouldn't be possible at all.
 

DZero

Golden Member
Jun 20, 2024
1,630
634
96
Now, with the developments between Arm and Qualcomm, and seeing that they might affect everyone else, it left me thinking: what if Nintendo, foreseeing all of that, starts a BIG project to build a uArch based on RISC-V, one that is competitive in every respect but also leaves them totally free to add, or not add, the specs they need.
While they would combat piracy with this, they might also keep prices under control and initially lock accessory sales to themselves.

Yeah, it would be a fantasy, but seeing the current situation, even thinking about a less problematic uArch seems plausible now.
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,811
1,290
136
"Samsung has developed Tizen based on RISC-V for developers, and a new SDK for Tizen applications designed with RISC-V will be available in 2026."

HarmonyOS variants and Tizen, I guess, are going to duke it out in the smart-TV RISC-V arena eventually. Assume upcoming smart TVs become game consoles down the road.
 
Last edited:

soresu

Diamond Member
Dec 19, 2014
4,115
3,570
136
"Samsung has developed Tizen based on RISC-V for developers, and a new SDK for Tizen applications designed with RISC-V will be available in 2026."

HarmonyOS variants and Tizen, I guess, are going to duke it out in the smart-TV RISC-V arena eventually. Assume upcoming smart TVs become game consoles down the road.
I was under the impression that Tizen was absorbed into Android Wear OS.
 

soresu

Diamond Member
Dec 19, 2014
4,115
3,570
136
I feel that because Google is getting further and further from Android, Samsung is preparing Tizen as a backup
Dunno about them getting further from it.

If anything they refocused on Android after diverting attention to Chrome for a few years, and now Android 16 is expected a quarter earlier than usual.
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,811
1,290
136

RVA23 with the other option flag = RISC-V Application Processor Version 3.0 profile
// RVA30 with the other option flag = RISC-V Application Processor Version 4.0 profile (about a decade's gap between RVA23 and RVA30)
// RVA40 with the other option flag = RISC-V Application Processor Version 5.0 profile (about two decades' gap between RVA23 and RVA40)
This will probably be available as a hardware target by Android 16.

Due to licensed cores, I'm not sure how fast actual devices will launch.
 
  • Like
Reactions: Kryohi

NostaSeronx

Diamond Member
Sep 18, 2011
3,811
1,290
136
More recent C930 slides:
xuantie-c930.jpg
c930.jpeg

PC, edge, auto and servers are the targets.
15-stage pipeline, 6-wide decode, 10+-wide issue, 512-bit vector length, an 8-TOPS custom-instruction matrix engine, 1 MB of private L2 cache plus 64 KB each of L1i and L1d.
Supports multi-core and multi-cluster configurations.

Allwinner might be returning to PC targeted products with the above processor.
 

DZero

Golden Member
Jun 20, 2024
1,630
634
96
Any info about the process node and core config? It seems that going octa-core would be a given, and my theory of a future RISC-V Nintendo console might gain momentum with that.
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,811
1,290
136
Any info about the process node and core config? It seems that going octa-core would be a given, and my theory of a future RISC-V Nintendo console might gain momentum with that.
Depends on who is using the part: a C930 on 12 nm can be a thing, and a C930 on 6 nm can be a thing. Based on the XuanTie website, it can go from 8-core to 32-core, unless they bring out a 256-core interface like StarFive.
Again SPECint 2006. Are the compilers still unable to generate code for SPECint 2017 (which someone here said was the reason for not using it)?
The compilers just aren't tuned as thoroughly for RISC-V yet.
akeana.jpg
Akeana seems the furthest along of the RVA23 group. They still use SPECint2006 as well, and they have a customer in a "mobile compute" application.
akeana2.jpg
 

MS_AT

Senior member
Jul 15, 2024
870
1,767
96
Do you also have some information on how high it can clock? A score-per-GHz comparison doesn't paint the full picture of relative absolute performance.
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,811
1,290
136
Do you also have some information on how high it can clock? A score-per-GHz comparison doesn't paint the full picture of relative absolute performance.
If Akeana:
1100/1200 in-order = ~2 GHz
1300 out-of-order = ~3 GHz Fmax
5100/5200/5300 = ~3 GHz Fmax

Overall, we have to wait for actual products, since the disclosed Fmax might not be what ships. It isn't like some products where the clocks are announced with the product: https://cloudbear.ru/bi_671.html

___ RISC-V appears to get the same treatment as x86-64 in openKylin ___
openKylin-2.0-riscv64.iso                                      2.5 GiB   10/14/2024, 04:26:06 AM
openKylin-Desktop-V2.0-Release-x86_64.iso                      4.7 GiB   08/09/2024, 08:37:08 AM
openKylin-Embedded-V2.0-Release-RuyiBook-riscv64.tar.xz        2.3 GiB   08/08/2024, 12:53:45 AM
openKylin-Embedded-V2.0-Release-coolpi-arm64.img.xz            2.1 GiB   08/07/2024, 12:12:31 AM
openKylin-Embedded-V2.0-Release-eic770x-riscv64.tar.xz         2.8 GiB   09/20/2024, 12:00:32 AM
openKylin-Embedded-V2.0-Release-licheepi4a-riscv64.tar.xz      2.4 GiB   09/19/2024, 06:05:39 PM
openKylin-Embedded-V2.0-Release-milk-v-pioneer-riscv64.img.xz  3.6 GiB   08/06/2024, 08:37:25 PM
openKylin-Embedded-V2.0-Release-phytiumpi-2G-arm64.img.xz      1.9 GiB   08/06/2024, 10:57:51 PM
openKylin-Embedded-V2.0-Release-phytiumpi-4G-arm64.img.xz      1.9 GiB   08/06/2024, 10:57:51 PM
openKylin-Embedded-V2.0-Release-raspi-arm64.img.xz             2.1 GiB   08/06/2024, 11:53:09 PM
openKylin-Embedded-V2.0-Release-spacemit-k1-riscv64.img.xz     2.6 GiB   09/19/2024, 06:05:39 PM
openKylin-Embedded-V2.0-Release-visionfive2-riscv64.img.xz     4.0 GiB   08/07/2024, 02:34:53 AM
The first one appears to be a generic desktop release, simply called universal.

StarFive upgraded their Dubhe-80 to support RVA23.
Dubhe-83 complies with the RVA23 specification and meets the baseline requirements of the Android RISC-V ABI. Dubhe-83 supports a rich RISC-V instruction set, including RV64GC, bit operation extension B (Bitmanip 1.0), vector extension V (Vector 1.0), virtualization extension H (Hypervisor 1.0) and vector encryption operation (Vector Crypto) related extensions.

Unisoc is StarFive's partner for RISC-V in 5G terminals, smartphones, etc. Dubhe-83 ~ Cortex-A75, which is relevant in some embedded ranges:
Arm Cortex-A73 x4 @ 2.2 GHz ~ 1.5 GHz for router series 1, Wi-Fi 7
Arm Cortex-A53 x4 @ 1.5 GHz ~ 1.1 GHz for router series 2, Wi-Fi 7
Arm Cortex-A55 x4 @ 1.8 GHz for router series 3, Wi-Fi 7
 
Last edited:
  • Like
Reactions: igor_kavinski

NostaSeronx

Diamond Member
Sep 18, 2011
3,811
1,290
136
Imagination Technologies has stopped RISC-V CPU development to concentrate on its GPU and AI IP.

Looks like they dissolved the team in September 2024; "All the cons are based on my experiences in the CPU division which was dissolved in Sept 2024." - Glassdoor
 

DZero

Golden Member
Jun 20, 2024
1,630
634
96
Imagination no longer has a CPU division... I wouldn't be surprised if they end up being bought outright...
 

Shivansps

Diamond Member
Sep 11, 2013
3,918
1,570
136
The main problem I see with RISC-V is that, performance-wise, they are REALLY behind Arm... For example, one of the SoCs being used a lot these days is the SpacemiT K1... That thing is around Cortex-A53 speed...

I bought a Sipeed Lichee Pi 3A to check it out, and that thing is not much faster than the old RPi 3 I got rid of years ago.

The Imagination GPU doesn't have support in Mesa yet; Imagination is really lagging behind in sending the patches to Mesa... and that's the GPU used in these RISC-V designs.

Also, I feel like compilers are either lagging behind in optimizations OR RISC-V is missing some instructions, because certain tasks run really slow.


That's really slow even for software; an A53 can do better than that.

Chis tested the HiFive Premier P550 and it still feels very slow to me.

That's what? A72 level?
 
  • Like
Reactions: Nothingness

Thibsie

Golden Member
Apr 25, 2017
1,127
1,334
136
Imagination no longer has a CPU division... I wouldn't be surprised if they end up being bought outright...
They had one? Gosh! 😁
Don't think it changes much of anything...

The only obvious reason I can find for why they haven't been eaten yet is patents.
 
  • Haha
Reactions: Nothingness

DZero

Golden Member
Jun 20, 2024
1,630
634
96
They had one? Gosh! 😁
Don't think it changes much of anything...

The only obvious reason I can find for why they haven't been eaten yet is patents.
And even with that, if someone buys the patents, it will be over for Imagination.
They had the MIPS uArch.
 
  • Like
Reactions: Thibsie

Doug S

Diamond Member
Feb 8, 2020
3,585
6,330
136
The main problem i see with RISC-V is that performance wise, they are REALLY behind ARM...

That's not a problem with the RISC-V ISA; it is a problem of ROI. It costs a lot to design a maxed-out CPU on a leading-edge process able to compete with the latest from Apple, Intel and AMD. If you are investing that kind of money, there has to be sufficient profit on the back end to justify the investment, and to compensate you for the risk that demand for your product may not appear.

RISC-V has a lot of advantages on the low end for embedded stuff. Saving a few dimes in Arm licensing cost can make a big difference there. It makes very little difference at the scale of a leading-edge smartphone, let alone a PC or server. I just don't see any market for a high-end RISC-V CPU.

People who read forums like this might want one to exist, and might even be willing to buy one if it does. But it needs a mass market to justify the investment, and that mass market does not exist. It would take something earth-shattering in the Arm world, like if Arm had won its lawsuit against Qualcomm, Qualcomm's license had been canceled, and they had been forced to pull everything Nuvia-related off the market. Since that didn't happen, they'll have to hope a startup can con some VCs into believing the high-end market exists and writing them a check for a few hundred million dollars to fund development, fabrication and sale, or that some Qualcomm type comes along and buys them before they reach market (probably for the team, if they assemble a really good one, not for the RISC-V core).
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,811
1,290
136
The main problem I see with RISC-V is that, performance-wise, they are REALLY behind Arm... For example, one of the SoCs being used a lot these days is the SpacemiT K1... That thing is around Cortex-A53 speed...
We have to wait for RVA23, as it is the next big thing to come after the RVA20/RVA22 profiles.

Just one core as an example:
The SpacemiT M2 should be the X100, rather than the K1/M1, which use the in-order X60.

RVA23 and its iterations are on par with ARMv9/Neoverse. The big one is Alibaba replacing the Yitian 710 (their g8y instance) with their own C930 core, the g9y instance being RISC-V. There is also Baidu with StarFive, which is meant to replace the Intel Xeon Platinum 8350C in the C5/G5/M5 instance tiers.

Dubhe Max (JH9000 core) ~= SiFive's P670 with RVA23, while Dubhe Extreme (Baidu core) ~= >P870/~Napa, SiFive's.

Hypothetically, as I can't find the source again: Tencent is expected to enter handhelds in 2026 with a RISC-V console. However, it should use their own core, which hasn't been announced yet. Nintendo's Switch 2 = Cortex-A78; Tencent's Switch competitor = Cortex-A725-class, but RISC-V.

The AI/ML/Graphics workgroup will probably be announcing a profile soon:
xisariscvgpgpu.jpeg
riscgpgpusoftwarestack.jpeg
For these kinds of products, Imagination might be swept aside by software cores tuned for graphics instead. I can't link it yet, but one IP provider's Vulkan output is faster than the BXE-4-32's Vulkan output for less area/power.
 
Last edited:

soresu

Diamond Member
Dec 19, 2014
4,115
3,570
136
Imagination might be swept away to have software cores that are tuned for graphics instead
All programmable cores are software cores.

GPUs just have hyper tuned fixed function circuits to augment the graphics specific workloads.
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,811
1,290
136
All programmable cores are software cores.

GPUs just have hyper tuned fixed function circuits to augment the graphics specific workloads.
That's not how most GPGPU programmers generally see it...

GPUs = hardware cores
CPUs with GPU functionality (X-ISA: TMUs/ROPs/framebuffer/etc.) = software cores

Software cores = one driver across all implementations: core A -> core B, same driver, the same Vulkan/DirectX/OpenGL "driver" across all products that support RVG/RVX or whatever the profile name will be.
Hardware cores = many drivers, one per implementation: navxx -> navxy = different driver; also this:
rdna2v3.jpeg

RISC-V GPGPUs are focused on accelerated software rendering.
VideoCore, PVR, RDNAx, CUDAx, on the other hand, are all hardware renderers, where adding new functions not present in hardware is costly.

"Software cores" here refers to the workload, which is software rendering.
BXE-4-32 (600 MHz) = stuck with what it has.
Software core (500 MHz; faster, lower power, lower area) = can add additional functionality, like extra texture-compression formats, etc.

Rather than native hardware, it is a near-software/CPU-based fallback path: RVA23 OoO + RVA23 && RVX10 InO. The only path is software with extensions.
 
Last edited:

soresu

Diamond Member
Dec 19, 2014
4,115
3,570
136
Software cores = one driver across all implementations
Now you are just splitting hairs over ISA differences between the different GPU vendors.

If they all had the same ISA then they would not have this problem.

But at the end of the day all those shaders are still software.

Software core, (500 MHz, faster/lower power/lower area) = Can add additional functionality. Like additional texture compression styles, etc.
GPUs can do this too; it's not optimal versus fixed-function hardware, but you can do texture de/encoding on the fly inside the GPU, and the modern GPU pipeline has become far less fixed-function-bound than it used to be, owing to solutions like Nanite relying heavily on compute shaders.