How will CPU processing power be increased when transistors can't be shrunk farther?

Feb 25, 2011
16,997
1,626
126
You mean that there are people out there who try to understand what IDC is saying? Most of the time when I see a post by him I just start bobbing my head up and down pretending to have a clue as to what he is talking about.

On another note try to read 1632 by Eric Flint.

Bah. Flint's infodump style pales - PALES, I SAY! - in comparison with a David Weber Infodump™.

:awe:

Anyway, the OP's question has been answered and re-answered over the years, and IDC has the right of it: newer and better CPU designs, additional instructions, materials breakthroughs we haven't accounted for, and ultimately a completely different fundamental approach to computing (anything from trinary on up to quantum or DNA computing).
 
Last edited:

pm

Elite Member Mobile Devices
Jan 25, 2000
7,419
22
81
I like these gedankenexperiment type threads :)

Performance can be increased in one of two ways - (1) increase the rate at which work gets done (more clock cycles per second), or (2) increase the amount of work that gets done per clock.

Increasing the amount of work that gets done per clock can be done by complicating the compute model itself - moving beyond binary.

Ternary (three states per digit), analog (a continuous spectrum of states), quantum, etc.

Making the compute models more complicated (and scaling that complication ever further going forward) is one way to get more work done per clock cycle within an existing physical circuit mechanism (silicon transistors, etc.).
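To put a rough number on that (a toy back-of-envelope, nothing more), here is how much information a single digit carries as the number of states per digit goes up:

import math

# A ternary digit (a "trit") carries log2(3) bits, so the same number of signal
# lines clocked at the same rate moves ~58% more information per cycle.
for radix, name in [(2, "binary"), (3, "ternary"), (4, "quaternary")]:
    print(f"{name:10s}: {math.log2(radix):.3f} bits per digit per cycle")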

Keeping the same compute model, binary logic, and just pushing the number of cycles per second that transpire can continue for a ridiculous period of time yet. The current method of getting more cycles is by way of making traditional circuit components cycle faster and faster.

This clockspeed scaling correlates with physical scaling in real-space.

But scaling can work in something called inverse-space (aka k-space or reciprocal space).

A reciprocal lattice scales inversely with the existing crystal lattice.
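If you want to see that inverse scaling concretely, here is a minimal numerical sketch using the standard textbook construction (a simple cubic lattice in arbitrary units, nothing specific to any real device):

import numpy as np

def reciprocal_vectors(a1, a2, a3):
    # Standard construction: b1 = 2*pi * (a2 x a3) / (a1 . (a2 x a3)), etc.
    volume = np.dot(a1, np.cross(a2, a3))
    b1 = 2 * np.pi * np.cross(a2, a3) / volume
    b2 = 2 * np.pi * np.cross(a3, a1) / volume
    b3 = 2 * np.pi * np.cross(a1, a2) / volume
    return b1, b2, b3

# Doubling the real-space lattice constant halves the reciprocal-lattice vectors.
for a in (1.0, 2.0):
    a1, a2, a3 = a * np.eye(3)      # simple cubic: vectors along x, y, z
    b1, _, _ = reciprocal_vectors(a1, a2, a3)
    print(f"lattice constant {a}: |b1| = {np.linalg.norm(b1):.3f}")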


Separately, by creating superlattices of ever more sophisticated electronic structures, you can create effective switching speeds that exist in a virtual sense, making your compute topology function in a meta-physical domain.

And of course you could go for the ultimate and make scalable reciprocal lattices of superlattices for the ultimate in pushing the physics of the meta-physical devices that you use for your ternary, or analog, or quantum computing models.

Economics really is the limit. We can push on meta-physical reciprocal lattices for hundreds of years if we wanted to (since they scale inversely in real space, the circuits become physically larger, so there are no physical scaling limits).

The physics for all this stuff is already worked out, much the same way as binary computing was worked out long before the invention of the transistor.

What doesn't exist yet is an infrastructure for making it commercially feasible and economically viable. The same as could be said of the transistor when it was first conceived.

But there will be a way, provided there is a will.

Ok, I was with you through most of this - switching to trinary or even more states than that, as they've done in NAND, can increase computing work per cycle. Got that. But you lost me in the reciprocal lattice stuff. I understand that silicon is fundamentally a crystal - and crystals are made up of a lattice of atoms/molecules strung together. I'm honestly intrigued by what you wrote, but I don't understand much of it, and turning to Wikipedia - or even the diagrams you linked - doesn't seem to be helping. To quote Reddit, "can you explain what a reciprocal lattice is in the real world and how that could improve computing efficiency using 3rd grader words"? Or more specifically, translate this line "And of course you could go for the ultimate and make scalable reciprocal lattices of superlattices for the ultimate in pushing the physics of the meta-physical devices" so that a grade-school child could follow you? :)
 
Last edited:

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
Looking just at 2), since the materials stuff is way over my head..

If we're talking about some pretty exotic ways to get more performance per clock, we may as well talk about the benefits of going clockless too... Not that I'm any real proponent of that.

Kind of hard to put into words, but I see similar cost/benefit tradeoff arguments between asynchronous and ternary, analog, etc (that, and analog computing models I've seen have tended to be asynchronous).
 

pm

Elite Member Mobile Devices
Jan 25, 2000
7,419
22
81
Looking just at 2), since the materials stuff is way over my head..

If we're talking about some pretty exotic ways to get more performance per clock, we may as well talk about the benefits of going clockless too... Not that I'm any real proponent of that.

Kind of hard to put into words, but I see similar cost/benefit tradeoff arguments between asynchronous and ternary, analog, etc (that, and analog computing models I've seen have tended to be asynchronous).

As a person who has worked on the design of, and the debug of, a large chunk of clockless logic in a modern large-scale CPU, I am not a huge fan of it. Clocks are a bit wasteful - lots of capacitance charging and discharging, not to mention guardbands on latch inputs on the clock edges (setup/hold times) - and they require a fair amount of routing metal to be efficient (or else you can end up with large clock skew; alternatively, you can engineer it to be efficient, but that requires expensive engineers). But all that said, clockless designs are a pain to debug - you don't have lovely scan latches to scan out to look at - and any form of self-timed logic seems to be a magnet for weird electrical problems that no one thought of during the design stage. And you can fix both of these issues with more complex designs - you can add in scan logic, you can add in glitch prevention circuitry and other techniques to avoid electrical issues - but it seems like you start adding so much complexity that you end up not that far off from where you started with simple latches and clocks.

And so much of the design - from synthesis, to timing, to formal verification - is built on the idea of "breaking" at clock boundaries to keep the logic cones down to something the computers can handle. That also seems like it could be a very difficult problem... although the software side of things seems like it could be solved as well with a bit of creativity... but that goes back to the question of whether or not all of this effort is that much more efficient than just using clocks. Once you throw in a lot of clock gating into a design, it all starts to look vaguely like an asynchronous design if you stand back and squint your eyes a bit...

I'm not saying it's a bad idea... it's actually really interesting... but it's complex and I'm not sure that the gains are worth the investment.
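For anyone who hasn't bumped into self-timed logic before, here is a toy behavioral model of the Muller C-element, the basic handshake/storage primitive used in asynchronous pipelines (just an illustration of the behavior, not how anyone actually designs one):

# The output changes only when both inputs agree; otherwise it holds its value.
class CElement:
    def __init__(self, initial=0):
        self.state = initial

    def step(self, a, b):
        if a == b:          # inputs agree -> output follows them
            self.state = a
        return self.state   # inputs disagree -> hold previous value

c = CElement()
for a, b in [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]:
    print(f"a={a} b={b} -> out={c.step(a, b)}")   # 0, 0, 1, 1, 0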
 
Last edited:

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
As a person who has worked on the design of, and the debug of, a large chunk of clockless logic in a modern large-scale CPU, I am not a huge fan of it. Clocks are a bit wasteful - lots of capacitance charging and discharging, not to mention guardbands on latch inputs on the clock edges (setup/hold times) - and they require a fair amount of routing metal to be efficient (or else you can end up with large clock skew; alternatively, you can engineer it to be efficient, but that requires expensive engineers). But all that said, clockless designs are a pain to debug - you don't have lovely scan latches to scan out to look at - and any form of self-timed logic seems to be a magnet for weird electrical problems that no one thought of during the design stage.

Hey Patrick, I saw your other post above and yes I will roll up my sleeves and do my best to paint a more digestible picture :p

But regarding your more recent post above, I think you are best placed to speak to the complexities that come with establishing and then building in the infrastructure necessary for design debug and validation, and how difficult it is to (1) just get it done right, and have confidence that you have done it right, and (2) make doing it "commercially viable" as well - can't have chips that take 10 yrs for verification and whatnot.

Getting to the moon was never made commercially viable, and it also wasn't physically possible prior to the advent of rocket propulsion in the 1930s. But aerospace matured enough in a few short decades to eventually get humans to the moon.

So I do think this is one of those "dimensions of complexity" that can be increased, if need be, when the time comes and the industry has matured (invested in itself and its tools) for it to make sense to shift an even greater burden off of the hardware and into the compute topology itself. But I'm relying on you to interject some reality-feedback on this notion! ;)

Compilers were like that, in a metaphorical way, as they shifted the complexity of "programming for performance" away from the software programmers and into the hands of the compiler programmers.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
Yeah, I don't have any direct experience with looking at clockless logic designs, but I was guessing at those kinds of problems... Much less deterministic/reproducible, much more difficult and expensive to simulate, sounds like a total nightmare.
 

Ventanni

Golden Member
Jul 25, 2011
1,432
142
106
This stuff kinda reminds me of the theoretical physics/mathematics behind reverse engineering a [theoretical] gravitomagnetic propulsion engine used in [theoretical] UFO spacecraft. Obviously a superlattice structure is more documented and proven than a gravitomagnetic propulsion engine, but it's still fun to think about.
 

A5

Diamond Member
Jun 9, 2000
4,902
5
81
I'm quite interested in scientific topics like that. A great deal can be learned from Wikipedia articles and the like, but sometimes it's just too much :D I just don't have the physics / math knowledge, and I find myself clicking on all the links in the page to go up the knowledge ladder to simpler topics.

I'm about to finish a Master's in EE studying a lot of this material stuff and I still barely understand it. Don't feel too bad - this stuff is way out there in terms of how people think of stuff in the physical world :p
 

Cogman

Lifer
Sep 19, 2000
10,286
147
106
As a person who has worked on the design of, and the debug of, a large chunk of clockless logic in a modern large-scale CPU, I am not a huge fan of it. Clocks are a bit wasteful - lots of capacitance charging and discharging, not to mention guardbands on latch inputs on the clock edges (setup/hold times) - and they require a fair amount of routing metal to be efficient (or else you can end up with large clock skew; alternatively, you can engineer it to be efficient, but that requires expensive engineers). But all that said, clockless designs are a pain to debug - you don't have lovely scan latches to scan out to look at - and any form of self-timed logic seems to be a magnet for weird electrical problems that no one thought of during the design stage. And you can fix both of these issues with more complex designs - you can add in scan logic, you can add in glitch prevention circuitry and other techniques to avoid electrical issues - but it seems like you start adding so much complexity that you end up not that far off from where you started with simple latches and clocks.

And so much of the design - from synthesis, to timing, to formal verification - is built on the idea of "breaking" at clock boundaries to keep the logic cones down to something the computers can handle. That also seems like it could be a very difficult problem... although the software side of things seems like it could be solved as well with a bit of creativity... but that goes back to the question of whether or not all of this effort is that much more efficient than just using clocks. Once you throw in a lot of clock gating into a design, it all starts to look vaguely like an asynchronous design if you stand back and squint your eyes a bit...

I'm not saying it's a bad idea... it's actually really interesting... but it's complex and I'm not sure that the gains are worth the investment.

Just to throw this in, NOW it isn't commercially viable. However, I think asynchronous designs are ultimately going to be viable when we cease to be able to shrink transistors.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Just to throw this in, NOW it isn't commercially viable. However, I think asynchronous designs are ultimately going to be viable when we cease to be able to shrink transistors.

I suspect a fair amount of the complexity will become commercially viable once the computers we build are turned towards the task at hand.

Take for example the kinds of circuit design complexities that are present in today's smartphone just in the antenna alone. Modeling those circuits wasn't even possible 30-40 yrs ago, not that they needed them back then.

But today's modern computing capabilities make that stuff possible now.

I trust the machines will do a bang-up job figuring out how to make smarter faster versions of themselves ;) :p

edit: meant to link to this article too.
 

Sleepingforest

Platinum Member
Nov 18, 2012
2,375
0
76
If one were interested in getting involved, would it be better to go with materials science or EE? Or is a background in both equally important?
 

Hitman928

Diamond Member
Apr 15, 2012
6,753
12,492
136
If one were interested in getting involved, would it be better to go with materials science or EE? Or is a background in both equally important?

Depends on what you're trying to get involved in. Are you interested in more circuit design / architecture or the physical design of the transistors (or whatever new device may come to fruition)?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
If one were interested in getting involved, would it be better to go with materials science or EE? Or is a background in both equally important?

Either will serve you well, but will take you down different paths.

MSE (materials science and engineering) will take you into the process node development path where you'll learn all you need to know about superlattices and reciprocal space, and more :sneaky:

EE (electrical engineering) will take you down the path of circuit design/validation/debug and so forth.

Not that you need either degree to do either job; it is just that you will find yourself excelling in those fields with those degrees.
 

Cogman

Lifer
Sep 19, 2000
10,286
147
106
Either will serve you well, but will take you down different paths.

MSE (materials science and engineering) will take you into the process node development path where you'll learn all you need to know about superlattices and reciprocal space, and more :sneaky:

EE (electrical engineering) will take you down the path of circuit design/validation/debug and so forth.

Not that you need either degree to do either job; it is just that you will find yourself excelling in those fields with those degrees.

MSE will get you primed for more researchy things and has the added benefit of being very cross-disciplinary. Whereas EE puts you in the prime position to be an engineering grunt :) . EEs tend to graduate and start right away doing things like circuit verification, driver programming, etc. The boring stuff :).

Honestly, a physics major is a pretty good route to go if you really want an open career path. They are prime candidates for most engineering jobs.
 

Madpacket

Platinum Member
Nov 15, 2005
2,068
326
126
F*ck, some of you guys are too smart for your own good, but you don't take into account simple economics and economies of scale.

Maybe you shouldn't need a degree to see this, but people / companies are lazy and will usually just try to build off what came before in order to improve the efficiency of something. I see software optimization being the primary way of increasing speeds once we hit a materials limit in producing new processors.

We've been fortunate enough with Moore's law to get away with lazy programming; it has been good enough to make up for how shit*y our code has been up to this point, wasting many of the hardware advances that have been provided for us over the last 10 - 15 years.

Once we hit the brick wall we will be forced to use proper programming methodologies to eke out all of the performance granted to us.

We need to extract every last ounce out of the existing hardware, like we did in the early days of computing, before quantum computing becomes a reality, takes us back to excess computing power, and we repeat the cycle of sloppy programming.
 
Last edited:

Cogman

Lifer
Sep 19, 2000
10,286
147
106
F*ck, some of you guys are too smart for your own good, but you don't take into account simple economics and economies of scale.

Maybe you shouldn't need a degree to see this, but people / companies are lazy and will usually just try to build off what came before in order to improve the efficiency of something. I see software optimization being the primary way of increasing speeds once we hit a materials limit in producing new processors.

We've been fortunate enough with Moore's law to get away with lazy programming; it has been good enough to make up for how shit*y our code has been up to this point, wasting many of the hardware advances that have been provided for us over the last 10 - 15 years.

Once we hit the brick wall we will be forced to use proper programming methodologies to eke out all of the performance granted to us.

We need to extract every last ounce out of the existing hardware, like we did in the early days of computing, before quantum computing becomes a reality, takes us back to excess computing power, and we repeat the cycle of sloppy programming.

Why? Why do we need to bust out the assembly when higher level languages can do things fast enough? Why do we need to squeeze out every ounce of performance from the software standpoint when 90% of the time is still spent waiting on things like disk reads or network IO? (Even with SSDs and 10 gigabit network connections.)

Software guys are perfectly capable of squeezing out performance when needed. Hardware guys have, on the other hand, been squeezing out performance every step of the way. The problem is that the hardware guys are running out of places to squeeze performance (Like squeezing blood from a turnip).

Come up with a storage medium that can store as fast as a transistor flip, or network communication that violates the speed of light, and then we can talk about how sloppy software is.
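To put rough numbers on that (illustrative figures, not measurements), here is the Amdahl's law arithmetic for a workload that spends ~90% of its wall-clock time waiting on IO:

def overall_speedup(compute_fraction, compute_speedup):
    # Amdahl's law: only the compute fraction benefits from a faster CPU.
    return 1.0 / ((1.0 - compute_fraction) + compute_fraction / compute_speedup)

for s in (2, 10, 1e9):
    print(f"CPU {s:>13,.0f}x faster -> workload only {overall_speedup(0.10, s):.2f}x faster")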
 


Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
F*ck, some of you guys are too smart for your own good, but you don't take into account simple economics and economies of scale.

Maybe you shouldn't need a degree to see this, but people / companies are lazy and will usually just try to build off what came before in order to improve the efficiency of something. I see software optimization being the primary way of increasing speeds once we hit a materials limit in producing new processors.

We've been fortunate enough with Moore's law to get away with lazy programming; it has been good enough to make up for how shit*y our code has been up to this point, wasting many of the hardware advances that have been provided for us over the last 10 - 15 years.

Once we hit the brick wall we will be forced to use proper programming methodologies to eke out all of the performance granted to us.

We need to extract every last ounce out of the existing hardware, like we did in the early days of computing, before quantum computing becomes a reality, takes us back to excess computing power, and we repeat the cycle of sloppy programming.

You may not realize it but you built a straw man there and then attacked it as if the straw man was some accurate portrayal of other people's positions in this thread.

There is a reason (economics) the stuff outlined above has not been put into production as yet, and that same reason will keep it from going into production until such time as the issues (economic) are put to bed.

That "goes without saying" type of self-evident stuff is supposed to need go without saying.

The OP asked about "CPU processing power" post-silicon shrinking; there are two paths to take. Efficient coding does not increase processing power, it increases the efficiency with which one utilizes the theoretical IPC of a CPU (efficient coding impacts the realized IPC, not the theoretical IPC).
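A toy illustration of that distinction, with made-up numbers for a hypothetical 4-wide core:

theoretical_ipc = 4.0                 # issue width sets the theoretical ceiling
instructions_retired = 1.2e9          # what the code actually got through
cycles = 1.0e9
realized_ipc = instructions_retired / cycles
print(f"realized IPC: {realized_ipc:.1f} "
      f"({realized_ipc / theoretical_ipc:.0%} of the theoretical peak)")
# Better code (fewer stalls, better locality) raises the realized figure;
# only a new or wider core design raises the theoretical one.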

Improving coding infrastructure is an obvious direction for improving performance as the thriving compiler business attests. An intriguing subject matter in its own right, but not exactly relevant to the OP's thread (unless they intended to cast a wider net and accidentally limited the discussion scope with an unfortunate choice of words).

An increase in processor speed will happen and thermals will go back up like they did before, but the heat generated per mm squared will be significantly higher, requiring better or more efficient cooling solutions.

Thus overclocking to 20ghz with Ion cooling*!

http://www.techrepublic.com/blog/tech-news/ion-wind-cpu-cooling-solution-from-purdue-university/1025

*just kidding...maybe

Or maybe everybody will get a vapo-chill system and desktop PC's will go back to being luxury items.

The field of harvesting energy from existing ambient heat sources and stress gradients is advancing quite rapidly.

The Seebeck effect is routinely used today in thermocouples, taking advantage of the naturally generated voltage potential between dissimilar metals as they are heated.

The Seebeck effect is the conversion of temperature differences directly into electricity and is named after the Baltic German physicist Thomas Johann Seebeck, who, in 1821, discovered that a compass needle would be deflected by a closed loop formed by two metals joined in two places, with a temperature difference between the junctions.

This was because the metals responded differently to the temperature difference, creating a current loop and a magnetic field. Seebeck did not recognize there was an electric current involved, so he called the phenomenon the thermomagnetic effect. Danish physicist Hans Christian Ørsted rectified the mistake and coined the term "thermoelectricity".

We encounter this in today's existing thermoelectric coolers, albeit they are running in reverse in that application.
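For a sense of scale, using the commonly quoted ~41 uV/K sensitivity of a type-K thermocouple as a ballpark figure (not a datasheet value):

seebeck_uV_per_K = 41.0               # approximate type-K (chromel-alumel) sensitivity
for delta_T in (10, 50, 100):         # temperature difference across the junctions, K
    print(f"dT = {delta_T:3d} K -> ~{seebeck_uV_per_K * delta_T / 1000:.2f} mV open-circuit")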

And then there is the Piezoelectric effect which converts mechanical forces (stress and strain) into electricity.

Mechanical stresses arise when materials with different coefficients of thermal expansion are physically bound to one another (aka thermal mismatch).

There is a real opportunity for energy harvesting in stacked die configurations, in which one can envision the waste heat from your CPU being harvested and converted into enough electricity to power a NAND or DRAM chip stacked on top of the CPU instead of just pushing that heat into a block of copper and then into the air.
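A very rough back-of-envelope for that idea, where every number is an assumption for illustration only (~60 W of CPU waste heat, a couple of percent conversion efficiency at modest die-stack temperature gradients, a DRAM die drawing a few hundred mW):

waste_heat_W = 60.0         # assumed CPU waste heat
teg_efficiency = 0.02       # assumed conversion efficiency at a small delta-T
dram_draw_W = 0.5           # assumed draw of a single DRAM die
harvested_W = waste_heat_W * teg_efficiency
print(f"harvested ~{harvested_W:.1f} W vs ~{dram_draw_W:.1f} W DRAM draw "
      f"-> roughly {harvested_W / dram_draw_W:.1f}x the budget needed")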

That may all sound fanciful now, but that is the thing about science - it seems like magic if you aren't trained in the field, but if you are trained in the field it seems obvious and in some ways unavoidable (but it requires money to be reduced to practice, and economics is what makes absolutely everything go around).

Science tells us the "what" but it cannot tell us the "when". For the "when" we must look to the consumers (demand forecasts) and those who are well-capitalized (investors and risk takers).

What happens in the meantime, between science and commercialism, is called "science fiction" ;)
 

Eureka

Diamond Member
Sep 6, 2005
3,822
1
81
In this thread, IDC shows off his degrees... using terms like Fourier transforms (which I don't think are taught at anything lower than a college level) or a reciprocal lattice (which I barely understand as an MSEE) is probably complicating things more than it needs to be.

A Fourier transform is a simple way of switching your domain (think back to algebra, with domain and range). We are used to working in the time domain... your x-axis is always time. A Fourier transformation is a neat trick for switching that time domain into a different domain, most commonly frequency. This is great because now we don't have to look at, say, a signal over time, but rather a signal over frequency (which frequency is strongest or weakest, for example).
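A tiny numpy example of that domain switch, using a made-up signal with 50 Hz and 120 Hz components (the transform picks out the dominant one):

import numpy as np

fs = 1000                                    # sample rate, Hz
t = np.arange(0, 1, 1 / fs)                  # one second of samples
signal = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(signal))       # time domain -> frequency domain
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
print(f"strongest component: {freqs[np.argmax(spectrum)]:.0f} Hz")   # -> 50 Hz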

Something about the reciprocal lattice... instead of representing a spatial lattice (what we think of), we take the Fourier transform of the lattice and now we represent a mathematical lattice. I'm assuming this opens up new ways of making relationships; just how so is IDC's way of showing off his knowledge, because I don't have a clue at this point.
 

MisterMac

Senior member
Sep 16, 2011
777
0
0
IDC:

Off-topic, sorry - but would you reveal what your current job is?

Not the specifics - but are you at a semiconductor company? Doing validation? Doing design? Planning circuitry?

I'm kinda interested :)
 

R0H1T

Platinum Member
Jan 12, 2013
2,583
164
106
It will be a two-pronged approach ~ materials science (think graphene) & then software optimization!
As others have said, the physics wall hasn't yet hit Si development, perhaps not for another ten years, but the development cost of node shrinks is getting prohibitively expensive & then there is the small matter of diminishing returns wrt power efficiency & heat dissipation on smaller nodes!

The field of software hasn't always lagged hardware (Vista/Crysis), but with economies of scale & chipmakers cramming in more transistors we've had less to worry about on the hardware side of things, especially in the last decade.
However, as we approach the physical & architectural limitations of a semiconductor industry based on Si, one will eventually have to shift focus to better stuff (suited to computing) & work towards an economic model that better exploits this new material, or a number of them, as the case may be.
 
Last edited:

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
IDC:

Off-topic, sorry - but would you reveal what your current job is?

Not the specifics - but are you at a semiconductor company? Doing validation? Doing design? Planning circuitry?

I'm kinda interested :)

I am currently self-employed in the industry of foreign currency exchange. Designing algorithms that autonomously trade (buy/sell) foreign currencies without human involvement in the process.
 
Last edited:

Eureka

Diamond Member
Sep 6, 2005
3,822
1
81
Attacking my character for simply relaying that spot-check is kinda silly and kinda nasty all at the same time.

I didn't mean to come off as attacking your message; I was just trying to clarify it a bit for some of the other readers. I admire your knowledge quite a bit; the showing-off part was a bit tongue-in-cheek and I apologize if it didn't come off that way.