When will we have CPUs that are 10x as fast as the current ones?

Fjodor2001

Diamond Member
Feb 6, 2010
4,110
538
126
Hi,

I just wonder how long you think it'll take until we have CPUs that are 10x as fast as the current ones?

For there to be any major disruptive computational difference, something like that is what it'll take.

Assuming current progress of ~5% per year, we'll get there in x years, where 1.05^x = 10. That works out to around 47 years (1.05^47 ≈ 9.9). For most people on this forum, depending on where you were born, that would be after death or at least near the end of life. :eek:
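For reference, the math is just solving (1 + r)^x = 10, i.e. x = ln(10) / ln(1 + r). A quick Python sketch, using only the yearly growth rates people have been throwing around in this thread (they're assumptions, not measured figures):

import math

def years_to_10x(annual_gain):
    # Years until compounded yearly gains reach a 10x speedup:
    # solve (1 + r)^x = 10 for x.
    return math.log(10) / math.log(1 + annual_gain)

for rate in (0.05, 0.08, 0.10):  # assumed yearly CPU performance gains
    print(f"{rate:.0%}/year -> {years_to_10x(rate):.1f} years")

# Output:
# 5%/year -> 47.2 years
# 8%/year -> 29.9 years
# 10%/year -> 24.2 years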

Is it really that bad!? Isn't there any major technology shift on the horizon? Otherwise all of us might as well just check out of this forum already.

Sorry for sounding so pessimistic, but isn't this just the reality we are facing? :confused:
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
Well, maybe we can look back and get an idea.

We could start with a well known home desktop processor like the 486-DX2 66.

That would have been in a home computer in about 1993, I guess?

How much faster is an i3-4360 of today, 22 years later?
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,110
538
126
Well, maybe we can look back and get an idea.

We could start with a well known home desktop processor like the 486-DX2 66.

That would have been in a home computer in about 1993, I guess?

How much faster is an i3-4360 of today, 22 years later?

Problem is, CPU performance doesn't increase as much per year these days as it used to, so history doesn't tell us much.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,572
10,208
126
Well, are you talking about ST performance, or MT performance? With 14nm CPUs, Intel could easily give us 8C/16T on the mainstream platform, or adapt the 8-core Xeon-D as member "cbn" is fond of, into a HEDT-Lite platform, and immediately double our theoretical MT performance.

Thing is, I think that Intel is really missing a trick here. unRAID is offering virtualization services native to the Linux kernel, such that each household could have a machine with server-class hardware running all of a user's apps in VMs and piping the I/O to something like a NUC acting as the client box.

Intel, IMHO, should really be pushing for this, with the declining desktop market. That way, they would sell a lot more (somewhat expensive) NUC units, along with server-class CPUs, to the home.

Think of it this way. Individual PCs are like window A/C units. You can install one per person, per room, and have all of the cost, noise, and maintenance involved, or you can get central air installed in the building. A central server, with NUCs for each person, would be a lot like having central air.
 

Cogman

Lifer
Sep 19, 2000
10,284
138
106
Honestly, barring some major breakthroughs, we may never see a 10x general performance increase (at least with the likes of x86 CPUs). We are getting very close to the minimum node size for a CPU; in 10 or 20 years we may see the last die shrink for a silicon-based CPU.

But not all is bleak. Programmable hardware might provide some juicy performance increases. New architectures like the Mill might offer interesting performance implications. And if all else fails, we might end up ditching the clock and pushing for a fully asynchronous CPU (wouldn't that be a sight!).

I imagine that future CPUs might focus more on exposing instruction parallelism to the compilers of the world. Perhaps we allow compilers to insert instructions in bundles which don't guarantee completion ordering. This might allow the CPU to do some pretty neat things.

Perhaps the CPU introduces something along the lines of an immutable data pipeline to allow for higher parallelism (similar to how GPUs work). Maybe a program registers data transformations with the CPU, and the CPU then automatically starts pushing data through those transformation pipelines. So, for example, let's say you are going over a list of customers, calculating their annual tax, and then subtracting some number from it. You could translate that into three transformation kernels: one that loads data from memory, another that calculates the annual tax as the data streams in from the first, and another that subtracts the number as the tax figures stream out. We can already sort of do this in software, but the overhead of threads/context switching/etc. is pretty high. CPU-level support for this sort of thing might increase performance.
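To make that concrete, here's a rough software approximation of the three-kernel idea using Python generators (the customer fields, tax rate, and subtracted amount are all made up for illustration; actual hardware support would look nothing like this):

def load_customers(customers):
    # Kernel 1: stream records in "from memory", one at a time.
    for c in customers:
        yield c

def calc_annual_tax(stream, tax_rate=0.10):
    # Kernel 2: compute tax as each record streams in.
    for c in stream:
        yield {**c, "tax": c["income"] * tax_rate}

def subtract_amount(stream, amount=100.0):
    # Kernel 3: adjust each result as it streams out of kernel 2.
    for c in stream:
        yield {**c, "tax": c["tax"] - amount}

customers = [{"name": "A", "income": 50_000.0},
             {"name": "B", "income": 75_000.0}]

pipeline = subtract_amount(calc_annual_tax(load_customers(customers)))
for c in pipeline:
    print(c["name"], c["tax"])  # A 4900.0, then B 7400.0

Each stage pulls one record at a time from the stage before it, so the intermediate results are never materialized. The catch is exactly the one above: done with threads instead of generators, the context-switch overhead dwarfs the transformations themselves, which is where hardware support could help.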

The thing I don't see us doing, sadly, is making a program that we have today run 10x faster without any changes. I think instead we will see new techniques/instructions/processes emerge which allow programs to be rewritten in a way that increases performance 10x.
 
Aug 11, 2008
10,451
642
126
The average for Intel is about 8% per year (ticks = 5%, tocks = 10%). That would give a doubling in about ten years. But I think future progress could be even less than that, especially on the desktop. That is only looking at single-thread performance, though. A sure way to increase performance faster is to add more cores or even GPU compute, but the software has to take advantage of it.

That said though, I can't really think of anything I do on my desktop that is really held back by my Sandy Bridge i5. Even for gaming, I need more GPU power (I only have an HD 7770), not CPU power. Now if I could get the power of my desktop in a tablet- or phone-sized device, with all-day battery life, and for $500, I would be all over that. I think this could realistically happen in 5 years, except I would guess the price would be well north of $1,000.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,110
538
126
The average for Intel is about 8% per year (ticks = 5%, tocks = 10%). That would give a doubling in about ten years.

Problem is, we're not seeing 1 year between each tick and tock in general these days. More like 18 months, i.e. closer to a 5% yearly increase.

But even at 8% yearly, that's still 30 years until we reach 10x (1.08^30 ≈ 10.1).
 

Hulk

Diamond Member
Oct 9, 1999
5,118
3,661
136
That said though, I can't really think of anything I do on my desktop that is really held back by my Sandy Bridge i5. Even for gaming, I need more GPU power (I only have an HD 7770), not CPU power.


I agree with the above. I think there will have to be some killer application that requires enormous compute to drive the hardware. Perhaps really, really good AI, where you just talk to your computer and it "gets it" and does what you say. You know, Skynet-type stuff, because we all know how well that'll work out.
 

Azuma Hazuki

Golden Member
Jun 18, 2012
1,532
866
131
On silicon? Never. Certainly not with the x86 uarch anyway. I am guessing we'd need to get into exotic processes like III-V or graphene first, maybe optical interconnects.

You have to understand, we're dealing with things so small here that quantum effects are causing significant issues. Probabilistically, some electrons are going to tunnel themselves straight through the gate and it's enough, in proportion to total current, to be noticed.

x86 itself is also a pretty ugly architecture thanks to 30+ years of baked-in kludges... makes me wonder what a massively improved MIPS core with a 95W thermal budget and the kind of ring bus Sandy Bridge has would perform like.
 

Hulk

Diamond Member
Oct 9, 1999
5,118
3,661
136
On silicon? Never. Certainly not with the x86 uarch anyway. I am guessing we'd need to get into exotic processes like III-V or graphene first, maybe optical interconnects.

You have to understand, we're dealing with things so small here that quantum effects are causing significant issues. Probabilistically, some electrons are going to tunnel themselves straight through the gate and it's enough, in proportion to total current, to be noticed.

x86 itself is also a pretty ugly architecture thanks to 30+ years of baked-in kludges... makes me wonder what a massively improved MIPS core with a 95W thermal budget and the kind of ring bus Sandy Bridge has would perform like.


Didn't Apple try that with the PowerPC? Intel responded with micro-ops. Now PowerPC is sleeping with the dinosaurs.
 

Xpage

Senior member
Jun 22, 2005
459
15
81
I think a switch to a new material will allow for a significant improvement, perhaps a doubling, but nowhere near a 10x increase. I think that will happen at the 7nm node with III-V materials, though it would only be for boost clocks, as running that fast continuously would be too much of a thermal load.

I think after a few years of stagnation a new uarch will be devised that is faster, with an x86 abstraction layer added on top (like Transmeta did) for compatibility while native software grows for the new ecosystem.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
x86 isn't the issue. x86 CPUs already use a translation layer. If you want more performance you need to go the IA-64 route, with heavy dependency on compilers.

The only place to see 10x performance would be in the server space. However, 10x performance/watt, now that's another matter.

Skylake desktop CPUs, if we exclude the K models, are 65W max. And that will only keep going down. It's hard to imagine regular desktop SKUs above 35W at 7nm. Because this is what people want: more efficient and more integrated. Hence that is what companies develop toward.
 
Last edited:

StinkyPinky

Diamond Member
Jul 6, 2002
6,956
1,268
126
It is impossible to estimate. We simply don't know what advancements (or lack thereof) will be made over the coming decades. Maybe they'll move to graphene and improve performance 100x for all we know.
 

Soulkeeper

Diamond Member
Nov 23, 2001
6,732
155
106
I'd say about 15 years or so. 30-core desktops @ 5 GHz, with 3x the IPC, will be normal by then.

I'm somewhat sharing your thoughts here.
Unfortunately, 15 years might not be too far off if we stick to 10% or 15% performance increases every 6-8 months;
~24 steps compounding at 10% gives 10x (1.1^24 ≈ 9.85) :/
 
Last edited:

MongGrel

Lifer
Dec 3, 2013
38,466
3,067
121
Not in my lifetime, I imagine.

And if they do get that fast and have any AI, it will be the whole thing Hawking and others have been worried about, I suppose.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
When will we have CPUs that are 10x as fast as the current ones?

Step 1: Define "current ones"

Step 2: Define the "as fast as" conditions - what software, what workload, etc. 10x faster doing what?

Step 3: Define the imagined demographic that will be purchasing this 10x processor - is it a mobile product, a desktop, a server, what pricepoint, what formfactor, what power consumption, etc.
 

ClockHound

Golden Member
Nov 27, 2007
1,111
219
106
Step 1: Define "current ones"
5690x at 4.6ghz.

Step 2: Define the "as fast as" conditions - what software, what workload, etc. 10x faster doing what?
Replying to posts on the internet. In 3D. With power gloves and cool visors. And generating important 'what-if' scenario cat videos.

Step 3: Define the imagined demographic that will be purchasing this 10x processor - is it a mobile product, a desktop, a server, what pricepoint, what formfactor, what power consumption, etc.
The 'ME' demographic and my needs. It's a mobile desktop server morphic format that expands to meet the whims of Me within my holographic field of computation while costing less than an iDevice inside a variable whim-driven power envelope at zero cost with the on-chip fusion power generation block.

It would respond to the server morphic mode by generating indeterminate kilowatts of free power, yet at idle in mobile-morphic mode would use no power at all. In fact, while sitting idle in my Jetson Jeans pocket it would ensnare unsuspecting quarks and other high energy particles and store them in the onboard non-volatile 10 Shilentnobyte holographic cache for later analysis and flashy cat video props.

Basically, a simple, inexhaustible, fully overclockable computing platform for simple needs like mine.



:biggrin:
 

StinkyPinky

Diamond Member
Jul 6, 2002
6,956
1,268
126
Is that realistic? What are the odds of that, roughly? Is there any indication that graphene might result in a 100x performance increase? :confused:

Probably not. My point is that we cannot see the future. I can imagine some nerds sitting in a comic shop in the 1960s saying the same thing, and then they figured out a way to use transistors.
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
Hi,

I just wonder how long you think it'll take until we have CPUs that are 10x as fast as the current ones?

For there to be any major disruptive computational difference, something like that is what it'll take.

Is it really that bad!?
Otherwise all of us might as well just check out of this forum already.

What would you even need any major disruptive computational difference for?
Seriously, even people with stock first-gen FX or first-gen Core CPUs claim that everything runs fine and that they don't have any problems with anything.
Not that I or anyone else would shun it, but nobody will miss it either.