News: Adaptable transistors could reduce CPU transistor count by 85 percent

igor_kavinski

Diamond Member
Jul 27, 2020

soresu

Golden Member
Dec 19, 2014
The actual paper references the Negative Differential Resistance effect which isn't a new field of study.

It's sufficiently different from anything currently used that it wouldn't be viewed as a standard next-gen CMOS device tech the way Nanosheet, Forksheet and VTFET are.

(though I could be wrong there, don't quote me on that!)

Edit: Here's the academic paper PDF link here.

maddie

Diamond Member
Jul 18, 2010
Adaptable transistors could reduce CPU transistor count by 85 percent | TechSpot

Professor Walter Weber, another member of the team, said an arithmetic operation that previously required 160 transistors is now possible with just 24 transistors thanks to the new design. At that rate, it doesn’t take much imagination to envision how this breakthrough could be scaled to significantly impact efficiency and operating frequency.

I.N.S.A.N.E.
Re-configuring circuits on the fly, so less can do the work of many. A block of transistors can now produce different gate arrangements.

"At that rate, it doesn’t take much imagination to envision how this breakthrough could be scaled to significantly impact efficiency and operating frequency."

I think the opposite will be true. A lot of work and imagination will be needed to build the tools required.
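The "different gate arrangements" idea can be sketched in a toy model. This is a hypothetical Python illustration, not the actual device physics: the paper's adaptable transistors switch between electron and hole conduction via a program gate, letting the same physical circuit act as different logic gates. Here a single `program` bit stands in for that gate voltage:

```python
# Toy model of a reconfigurable gate block (illustration only, not device
# physics). A real adaptable transistor switches between n-type and p-type
# conduction via a program-gate voltage; here one 'program' bit selects
# which Boolean function the same block computes.

def reconfigurable_gate(a: int, b: int, program: int) -> int:
    """program=0 -> the block acts as NAND; program=1 -> the same block acts as NOR."""
    if program == 0:
        return 1 - (a & b)  # NAND arrangement
    return 1 - (a | b)      # NOR arrangement

# Same "hardware", two truth tables:
for program, name in [(0, "NAND"), (1, "NOR")]:
    table = [reconfigurable_gate(a, b, program) for a in (0, 1) for b in (0, 1)]
    print(name, table)

# The headline figure is simple arithmetic: going from 160 to 24 transistors
# for the same operation is a (160 - 24) / 160 = 85% reduction.
assert abs((160 - 24) / 160 - 0.85) < 1e-9
```

The savings in the 160-vs-24 example come from exactly this kind of reuse: one block of devices serving as several different gates over time instead of each gate existing permanently in silicon.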

igor_kavinski

Diamond Member
Jul 27, 2020
This could be used to create a hardware emulation co-processor, where it is able to emulate different architectures.

Doug S

Golden Member
Feb 8, 2020
This will probably end up being like asynchronous logic - something that looks like a potential huge win but isn't used outside a few tiny niches. Asynchronous logic stalled for lack of tools, because no one was willing to risk the impossible-to-reproduce bugs that would result; "adaptable transistors" will stall because almost no one uses germanium as a base layer anymore - even BJTs have largely moved to silicon or gallium arsenide.

The other problem is that transistors are basically free. Even if this worked on standard silicon, Apple already ships an SoC with 57 billion transistors, and the N3 version will likely hit nearly 100 billion. It isn't as if you could reduce that number by 85%, since this doesn't work for cache or cache-like structures, which is where so many transistors are consumed. You could probably realize some savings (without full-blown tool support) in regular repeating structures like a GPU core or DSP blocks.

Even there the benefit remains unclear - reducing the number of transistors by 85% in certain areas doesn't necessarily reduce the AREA by 85%, because these transistors need additional control circuitry to reconfigure them, and may differ in other ways in how they are made. Even if you really could shrink those types of structures by 85%, you might save maybe 25% of the overall area of a typical modern SoC. Nice, but hardly a game changer. There's also no reason to believe (despite what the article says) that this would save any power beyond per-transistor leakage current. So it sounds nice, but it would hardly make the kind of impact on the end user that asynchronous logic would if it were ever made to work.
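A back-of-envelope version of that area estimate. All three inputs below are assumed illustrative numbers, not figures from the paper or from any real design:

```python
# Rough die-area math behind the "maybe 25%" estimate. All inputs are
# assumptions for illustration; real designs vary widely.

logic_fraction = 0.30   # assumed share of die that is convertible logic (not SRAM/cache)
transistor_cut = 0.85   # claimed reduction in transistor count
area_overhead = 0.15    # assumed extra area per adaptable transistor
                        # (program gates, reconfiguration wiring)

# Converted logic shrinks to 15% of its transistor count, each device slightly bigger:
new_logic_area = (1 - transistor_cut) * (1 + area_overhead)   # ~0.17 of its original area

overall_saving = logic_fraction * (1 - new_logic_area)
print(f"overall die area saved: {overall_saving:.0%}")   # ~25% with these inputs
```

The point of the sketch: the 85% figure only applies to the fraction of the die it can touch, and per-device overhead eats into even that.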

Doug S

Golden Member
Feb 8, 2020
Potential usage in an iGPU, to build an in-field-upgradable video decode block?
No, because "adaptable" doesn't mean "can perform a totally different function that isn't just a bunch of small FP units in parallel", and it won't work on silicon wafers at all.

Mopetar

Diamond Member
Jan 31, 2011
Edit: Here's the academic paper PDF link here.
The link isn't working for me. Even after removing what I thought was some extra garbage from the URL it just gives me an error.

Re-configuring circuits on the fly, so less can do the work of many. A block of transistors can now produce different gate arrangements.

"At that rate, it doesn’t take much imagination to envision how this breakthrough could be scaled to significantly impact efficiency and operating frequency."

I think the opposite will be true. A lot of work and imagination will be needed to build the tools required.
I can imagine a simple situation where blocks of logic switch between two different configurations, cutting down the number of required transistors, but I have questions about how quickly that switch can happen, which could limit its usefulness in the general case.

Even putting that aside, it seems like anything that gets too far beyond that simple case starts to need additional logic in order to figure out how to reconfigure the various transistors or keep track of information about their state.
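That bookkeeping cost can be put in rough numbers. A hypothetical sketch: a block offering k configurations needs at least ceil(log2 k) bits of state just to remember its current mode, and the 6-transistors-per-bit figure below is an assumption (an SRAM-cell-like storage element), before counting any decode or routing logic:

```python
import math

# Minimum configuration state for a reconfigurable block. Assumption:
# each stored bit costs ~6 transistors, like a standard SRAM cell.
TRANSISTORS_PER_STATE_BIT = 6

def control_overhead(num_configs: int) -> int:
    """Transistors spent just remembering which configuration is active."""
    bits = math.ceil(math.log2(num_configs))
    return bits * TRANSISTORS_PER_STATE_BIT

for k in (2, 4, 16):
    print(f"{k:>2} configurations -> {control_overhead(k)} transistors of state")
# Small per block, but it repeats for every block and excludes the
# decode/routing logic needed to actually apply a configuration.
```

That per-block tax is why the savings can't simply be the raw 85% everywhere: the more configurations a block supports, the more of the savings gets spent on remembering and applying the current one.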

igor_kavinski

Diamond Member
Jul 27, 2020
Here is something from within the article. Not sure if it's the same as linked above.
The fusion of electron and hole conduction together with negative differential resistance in a universal adaptive transistor may enable energy-efficient reconfigurable circuits with multivalued operability that are inherent components of emerging artificial intelligence electronics.
Could be conceptually related to Intel's neuromorphic computing.

soresu

Golden Member
Dec 19, 2014
Potential usage in an iGPU, to build an in-field-upgradable video decode block?
Nice idea - I've thought about this since AMD acquired Xilinx, but this would give a more area-efficient implementation.

Not just decode but encode too.

Decode quality/capability shouldn't change much over multiple HW generations for a single codec, but encoder quality can change drastically - just as SW encoder implementations have over the years.

soresu

Golden Member
Dec 19, 2014
Could be conceptually related to Intel's neuromorphic computing.
Certainly possible, but I got the impression that their MESO logic/memory process research was angled towards ML and neuromorphic computing as an early use case, and is likely to replace CMOS for them after CFET devices.