Question: A new CPU paradigm from Efficient.Computer

Jul 27, 2020


Even if it is successful, it's gonna be 5 years or more before we see this in consumer devices. Developing software for it from scratch is gonna be no simple task. Unless people suddenly start using Linux.
 

Abwx

Lifer
Apr 2, 2011


Even if it is successful, it's gonna be 5 years or more before we see this in consumer devices. Developing software for it from scratch is gonna be no simple task. Unless people suddenly start using Linux.

Those are just outrageous claims meant to get money from unsuspecting crowds flowing into their pockets; more likely a scam than anything else.
 

Hitman928

Diamond Member
Apr 15, 2012
Sounds eerily similar to both the approach and the claims Tachyum made with their original Prodigy CPU, before their roadmap blew up in their faces and they had to completely redesign the CPU from the ground up, dramatically change their claims, and push the release date out by years (with still no silicon to speak of to date).
 

soresu

Diamond Member
Dec 19, 2014


Even if it is successful, it's gonna be 5 years or more before we see this in consumer devices. Developing software for it from scratch is gonna be no simple task. Unless people suddenly start using Linux.
Something something dataflow architecture.

Sounds like the EDGE ISA that MS Research was pursuing for a while under the E2 name back in the 2010s. Sadly that went nowhere, and I don't think any dataflow architecture has ever managed to break in since.
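
For anyone who hasn't run into the term: the core idea of dataflow (and EDGE-style) designs is that an operation fires when its operands arrive, rather than when a program counter reaches it. A toy sketch of that firing rule in Python, purely my own illustration and nothing from Efficient.Computer or the E2 project:

# Toy dataflow interpreter: an operation fires when all of its operands have
# arrived, not when a program counter reaches it. Purely illustrative.
from collections import deque

class Node:
    def __init__(self, name, op, n_inputs, consumers=None):
        self.name = name
        self.op = op                      # callable applied to the operands
        self.n_inputs = n_inputs
        self.consumers = consumers or []  # (target_node, operand_slot) pairs
        self.operands = {}

    def receive(self, slot, value, ready):
        self.operands[slot] = value
        if len(self.operands) == self.n_inputs:
            ready.append(self)            # all operands present -> eligible to fire

def run(entry_tokens):
    ready = deque()
    for node, slot, value in entry_tokens:
        node.receive(slot, value, ready)
    while ready:
        node = ready.popleft()
        result = node.op(*(node.operands[i] for i in range(node.n_inputs)))
        print(f"{node.name} fired -> {result}")
        for target, slot in node.consumers:
            target.receive(slot, result, ready)

# (a + b) * (a - b) expressed as a graph instead of a sequence of instructions
mul = Node("mul", lambda x, y: x * y, 2)
add = Node("add", lambda x, y: x + y, 2, consumers=[(mul, 0)])
sub = Node("sub", lambda x, y: x - y, 2, consumers=[(mul, 1)])
run([(add, 0, 7), (add, 1, 3), (sub, 0, 7), (sub, 1, 3)])  # mul fires last: 40

The whole point of the toy is the scheduling rule: add and sub fire independently as soon as their inputs land, and mul fires only once both partial results arrive.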
 

soresu

Diamond Member
Dec 19, 2014
This blurb makes me dubious.

[attached image: eff_cpu_01.jpg]

Call me crazy, but aren't FPGAs basically reconfigurable logic gate arrays?

Therefore potentially as software-programmable as whatever processor configuration you can manage to stuff into the gate limits of the SKU?

Makes me think they are trying to smack talk a competitor before they are even competing, which doesn't bode well.
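
On the FPGA point above: the basic configurable primitive in most FPGA fabrics is a small lookup table (LUT) whose truth table is loaded from the bitstream, so "programming" one really does amount to filling tables and wiring them together, within the LUT budget of the part. A deliberately tiny Python model of that idea (my own simplification, not any vendor's architecture):

# Toy model of an FPGA's basic configurable primitive: a k-input lookup
# table (LUT). "Programming" the fabric amounts to filling truth tables
# and wiring LUT outputs to LUT inputs. A big simplification, of course.

class LUT:
    def __init__(self, truth_table):
        self.truth_table = truth_table   # entry i = output when input bits == i

    def __call__(self, *bits):
        index = 0
        for b in bits:
            index = (index << 1) | (b & 1)
        return self.truth_table[index]

# Two 2-input LUTs configured as XOR and AND form a half adder.
sum_lut   = LUT([0, 1, 1, 0])   # XOR
carry_lut = LUT([0, 0, 0, 1])   # AND

def half_adder(a, b):
    return sum_lut(a, b), carry_lut(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a}+{b} -> sum={s}, carry={c}")

Scaling that same idea up is how whole soft cores end up living inside the LUT budget of a single part.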
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
So, they are going to do this at 0.256 watts?? I don't think so....


+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.105.17   Driver Version: 525.105.17   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0 Off |                  N/A |
| 41%   64C    P2   256W / 320W |    689MiB / 16376MiB |     89%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                   |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      2645      G   /usr/lib/xorg/Xorg              120MiB   |
|    0   N/A  N/A      3283      G   cinnamon                         19MiB   |
|    0   N/A  N/A   3552529      G   ...2gtk-4.0/WebKitWebProcess     12MiB   |
|    0   N/A  N/A   3758478      C   ....3/Core_23.fah/FahCore_23    532MiB   |
+-----------------------------------------------------------------------------+
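
For scale, a quick back-of-the-envelope on the gap being implied here (the 256 W is the Pwr:Usage figure in the readout above, the 0.256 W is the number in the post):

# Rough ratio between the GPU board power above and a sub-watt budget.
gpu_watts = 256.0        # Pwr:Usage from the nvidia-smi readout above
sub_watt_budget = 0.256  # the figure quoted in the post
print(f"{gpu_watts / sub_watt_budget:.0f}x power gap")  # -> 1000x power gap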
 
Jul 27, 2020
It depends on the AI algorithm and also how the efficient cores distribute the workload amongst themselves.
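
On the "how the cores distribute the workload" part, here is a minimal sketch of the obvious row-wise split of one dense layer across a handful of workers, just to make the idea concrete; it says nothing about how Efficient.Computer's fabric actually schedules anything:

# Minimal sketch: one dense layer's matrix-vector product, split row-wise
# across N simulated "cores". Illustration only; not how any real fabric
# schedules work.
from concurrent.futures import ThreadPoolExecutor
import random

N_CORES, IN_DIM, OUT_DIM = 4, 8, 16
W = [[random.uniform(-1, 1) for _ in range(IN_DIM)] for _ in range(OUT_DIM)]
x = [random.uniform(-1, 1) for _ in range(IN_DIM)]

def core_task(rows):
    # each "core" handles the dot products for its own slice of output neurons
    return [(r, sum(w * v for w, v in zip(W[r], x))) for r in rows]

chunks = [range(i, OUT_DIM, N_CORES) for i in range(N_CORES)]  # interleaved split
with ThreadPoolExecutor(max_workers=N_CORES) as pool:
    partials = list(pool.map(core_task, chunks))

y = [0.0] * OUT_DIM
for part in partials:
    for r, val in part:
        y[r] = val       # stitch partial results back into neuron order
print(len(y), "outputs computed across", N_CORES, "workers")

The interesting engineering is in everything this skips, like keeping the weights local to each core so the power budget isn't burned on shuffling data around.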

Low-power neural network mimicry isn't science fiction: https://news.mit.edu/2022/analog-deep-learning-ai-computing-0728

I have a vested interest in seeing AI on GPUs become obsolete. The sooner that happens, the sooner we can get back to a world where GPUs are used primarily for super-realistic graphics rendering, their actual and original purpose.