What CPUs will we be using in 10 years?


thegimp03

Diamond Member
Jul 5, 2004
7,420
2
81
Originally posted by: Idontcare
Originally posted by: magnumty
I can't even imagine. 8 cores, 16 cores? It's gonna be wicked.

Well where were we 10yrs ago?

http://everything2.com/index.pl?node_id=1362904

1998 was the year of the 250nm Pentium II (Deschutes) and the K6-2 (400MHz and 9.4m xtors).

Now we have 45nm quad-core chips with ~730m xtors.

1998-2008
cores 1 -> 4 in 10yrs (4x)

process 250nm -> 45nm in 10yrs (0.18x)

xtors 9m -> 730m in 10yrs (81x)

clockspeed 400MHz -> 3.2GHz (8x)

2008-2018 projected
cores 4 -> 16 in 10yrs (4x)

process 45nm -> 8nm in 10yrs (0.18x)

xtors 730m -> 59.1b (59,130m) in 10yrs (81x)

clockspeed 3.2GHz -> 25.6GHz (8x)

So how does a 16 core, 60 billion transistor, 25GHz CPU sound to you?

(what would you do with 60 billion xtors? well, at ~6 transistors per SRAM bit that's enough for roughly 1GB of on-die cache, with a few billion left over for your 16 cores of logic)

I likes the idea of such a processor, me hates the prospects of waiting 10yrs to gets.
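If you want to play with the extrapolation yourself, here's a minimal sketch (mine, not from the quoted post) that just reapplies the 1998-2008 factors quoted above - 4x cores, ~0.18x feature size, 81x transistors, 8x clock - to the 2008 starting point:

```python
# Minimal sketch: reapply the 1998->2008 scaling factors quoted above to the
# 2008 figures. Starting values and factors come straight from the post.

baseline_2008 = {
    "cores": 4,
    "process_nm": 45.0,
    "transistors": 730e6,
    "clock_ghz": 3.2,
}

factors = {
    "cores": 4,              # 1 -> 4
    "process_nm": 45 / 250,  # 250nm -> 45nm (~0.18x)
    "transistors": 81,       # ~9m -> ~730m
    "clock_ghz": 8,          # 400MHz -> 3.2GHz
}

projection_2018 = {key: baseline_2008[key] * factors[key] for key in baseline_2008}

for key, value in projection_2018.items():
    print(f"2018 {key}: {value:,.1f}")
# -> ~16 cores, ~8.1nm, ~59,130,000,000 transistors, ~25.6GHz
```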

That's a good take on it. However, clockspeed is hard to predict. I remember reading an article in some magazine back in 2000 that said we'd be using 10GHz CPUs by the year 2010. It was much easier going from 400MHz to 3.2GHz than it will be to go beyond 3.2GHz without some kind of technological breakthrough, because of the amount of heat something running that fast produces.
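To put the heat argument in rough numbers: dynamic power scales roughly as C*V^2*f, so without a capacitance or voltage win, clock increases push power up at least linearly. The sketch below is only a back-of-envelope illustration; the 130W / 1.2V baseline is an assumed figure, not something from this thread.

```python
# Back-of-envelope for the heat argument: dynamic power P ~ C * V^2 * f,
# so at fixed voltage and capacitance, power grows linearly with clock.
# The 130W / 1.2V baseline is a made-up illustrative figure.

def scaled_dynamic_power(base_power_w, base_freq_ghz, target_freq_ghz,
                         base_volt=1.2, target_volt=1.2):
    """Scale a baseline power figure by the classic C*V^2*f rule."""
    return base_power_w * (target_volt / base_volt) ** 2 * (target_freq_ghz / base_freq_ghz)

print(scaled_dynamic_power(130, 3.2, 10.0))   # ~406W just to hit the "10GHz by 2010" mark
print(scaled_dynamic_power(130, 3.2, 25.6))   # ~1040W at the projected 25.6GHz
```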
 

badnewcastle

Golden Member
Jun 30, 2004
1,016
0
0
I'm thinking that we may have a system that can run Crysis maxed out at a minimum of 62 or 63 FPS at 1920x1200 with 8xAA.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Originally posted by: myocardia
Originally posted by: nerp
So how fast will that load Photoshop? :)

Even Photoshop CS4, which was just released a day or two ago, doesn't benefit from quads. Maybe by then, though, a quad will be faster than a dual-core, in CS18. I'm not holding my breath, though.:D

"NVIDIA QUADRO CX - THE ACCELERATOR FOR ADOBE CREATIVE SUITE 4"

No CPU on the planet can compete with this right now for PS CS4.

I mention this because I just saw the demo for it. And you reminded me of it.

By the way, isn't Larrabee supposed to have 32 cores? Due out next year or in 2010?
 

myocardia

Diamond Member
Jun 21, 2003
9,291
30
91
Originally posted by: Idontcare
You're pretty savvy when it comes to computer technology, programming technology, etc., so no doubt you are familiar with Gene Amdahl's law and the modifications proposed thereafter by Almasi and Gottlieb to incorporate the performance penalties associated with interprocessor communication overhead.

Thank you, but I mostly only know what I've learned from you, with the occasional thing thrown in by CTho9305. I did know about Amdahl's law, but this is the first I've heard of Almasi and Gottlieb. And while it makes perfect sense, I don't see it being a reason that it won't be possible to have more cores as the process node gets smaller and more space becomes available.

Sure, we're probably coming close to the limits that copper and silicon can provide as far as interconnect speeds (although I don't think we've reached that limit yet), but the only thing needed for better scalability with more cores is better interconnect speeds, right? There are already developments underway to make that connection 100 times faster. Since we've only gone from 66MHz interconnect speeds to 8GB/sec interconnect speeds with Barcelona, which if my calculations are correct is only ~10x as fast, a 100x improvement would give a lot more room for growth in how much data would be able to traverse the interconnects before becoming interconnect-bound.

Likewise, Beowulf clusters use the network to interconnect. That works great for small clusters, but with only 300-400 Kbit/sec connect rates on an average megabit network, it's pretty easy to see how they could become interconnect-bound. Obviously, using fiber interconnects allows for much higher throughput, like they do with supercomputers such as Ranger, so I can't imagine that using the same idea inside the processor wouldn't provide the same boost. Anyway, those are my thoughts on it all. Feel free to let me know why I'm totally off-base in my thought process, like you usually do.:D

Oh, I almost forgot:

The best way to deal with Amdahl's speedup limitations is to put the serial code on a faster "head" node while farming out the parallel code operations to more numerous but "dumber" nodes.

This sounds brilliant: having one core be faster than the others. I know that in a Beowulf cluster, having your fastest processor be the "parent" does make for a faster cluster, but having different-speed cores on one die had never crossed my mind.
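To make the Amdahl's law discussion above concrete, here is a small sketch (an illustration of the idea, not anything from Idontcare's posts) that charges a per-core communication cost in the spirit of the Almasi/Gottlieb refinement, and also tries the faster "head" core idea. The overhead model and the 90%-parallel workload are assumptions picked just to show the shape of the curves:

```python
# Amdahl's law with a simple per-core communication penalty (in the spirit of
# the Almasi/Gottlieb refinement), plus a variant where the serial part runs
# on a faster "head" core. The overhead model and numbers are illustrative.

def amdahl(parallel_fraction, n_cores, comm_overhead_per_core=0.0):
    """Speedup vs. one core, charging a fixed cost for every extra core."""
    serial = 1.0 - parallel_fraction
    total_time = (serial
                  + parallel_fraction / n_cores
                  + comm_overhead_per_core * (n_cores - 1))
    return 1.0 / total_time

def amdahl_fast_head(parallel_fraction, n_cores, head_speedup,
                     comm_overhead_per_core=0.0):
    """Same, but the serial fraction runs on a head core that is head_speedup x faster."""
    serial = (1.0 - parallel_fraction) / head_speedup
    total_time = (serial
                  + parallel_fraction / n_cores
                  + comm_overhead_per_core * (n_cores - 1))
    return 1.0 / total_time

p = 0.90  # assume 90% of the work parallelizes
for cores in (4, 16, 64):
    print(cores,
          round(amdahl(p, cores), 1),                        # ideal Amdahl
          round(amdahl(p, cores, 0.002), 1),                 # with comm overhead
          round(amdahl_fast_head(p, cores, 2.0, 0.002), 1))  # plus a 2x faster head core
```

With the overhead term in place, speedup peaks and then falls off as cores are added, and the faster head core claws some of it back, which is exactly the trade-off being described above.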
 

PlasmaBomb

Lifer
Nov 19, 2004
11,636
2
81
Originally posted by: nerp
Originally posted by: myocardia
Originally posted by: Idontcare
process 45nm -> 8nm in 10yrs (0.18x)

So how does a 16 core, 60 billion transistor, 25GHz CPU sound to you?

Assuming they are able to get the process node anywhere near 8nm, don't you think they'll be putting a lot more than 16 cores in there? We had 4 cores @ 65nm, and we have 6 @ 45nm, and @ 32nm, we'll have no problem squeezing 8 cores onto each die. At that rate, I'm seeing at least 12 cores per die @ 22nm, and 16 cores @ 17nm. Wouldn't that put us @ 24 cores @ 12nm, and 32 cores @ 8nm?

So how fast will that load p0rn? :)

Fixed for what you were really thinking ;)
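As a rough sanity check on the core-count ladder quoted above, the sketch below (my own illustration, not from the thread) assumes transistor density scales with the square of the feature-size ratio and that each core's transistor budget stays flat. By that ideal rule the quoted ladder is actually conservative, which leaves room for the growing caches and uncore that eat die area in practice:

```python
# Rough sanity check on the core-count ladder quoted above: if transistor
# density scales with (old_node / new_node)^2 and each core's transistor
# budget stays flat, how many cores would fit? Purely illustrative.

baseline_node_nm = 65
baseline_cores = 4  # "4 cores @ 65nm", per the quote

quoted_ladder = {45: 6, 32: 8, 22: 12, 17: 16, 12: 24, 8: 32}

for node_nm, quoted_cores in quoted_ladder.items():
    ideal_cores = baseline_cores * (baseline_node_nm / node_nm) ** 2
    print(f"{node_nm}nm: ~{ideal_cores:.0f} cores by ideal density scaling, {quoted_cores} quoted")
```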

Originally posted by: Idontcare

So how long do you think it will be for us to realistically see 8 cores on one die in a consumer/desktop CPU? Another 5 yrs? Might not be too far off on that. Maybe 4 yrs.

Late '09?

Originally posted by: keysplayr2003

By the way, isn't Larrabee supposed to have 32 cores? Due out next year or in 2010?

The current rumour is that the first version of Larrabee is supposed to be a chip with 48 cores. Later versions may have up to 80 cores.
 

magreen

Golden Member
Dec 27, 2006
1,309
1
81
I likes Idontcare. He real smart. Say big stuff.
;)

But seriously, in 10 yrs?

President Palin will be hunting caribou with her 24-core 20GHz computerized rocket launcher.
 

Denithor

Diamond Member
Apr 11, 2004
6,298
23
81
Are the Larrabee cores similar to the stream processors found in either ATi or nVidia GPUs? Because that is where I see the future of computing, for any kind of application that can be run with CUDA/CTM.
 

Brakner

Member
Jul 3, 2005
37
0
0
By 2018 we will be using some sort of AI, which will most likely take control of all machines, realize humans are a virus, and launch... er, that sounds sort of familiar, hrm.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
21,117
3,642
126
Originally posted by: Brakner
By 2018 we will be using some sort of AI, which will most likely take control of all machines, realize humans are a virus, and launch... er, that sounds sort of familiar, hrm.

It's okay.

We have Neo... He is the one...

We just need to find him, though.
 

KIAman

Diamond Member
Mar 7, 2001
3,342
23
81
I predict we will have a paradigm shift in what we consider a CPU and its associated technology. We have already moved from faster sequential processing (MHz) to high bandwidth and parallel processing (multiple instructions per clock/multiple cores). Intel and AMD roadmaps show die shrinks with an ever-increasing number of cores over the next several years.

I think the next shift will be on-die peripherals. We will still use silicon technology down to a 12nm architecture and be limited to about 12 cores, but around the cores will be a dedicated memory controller, Ethernet, sound processor, I/O controller and GPU, using 1/5 the power of current CPUs at the high end. The clock speed might top out around 5GHz but not much higher. The 50-pound desktop computer will be a thing of the past, and computers will take on an "Apple-esque" look, with the majority of the space taken by the display and PSU. For the middle to low end, there will be a 4-core 12nm processor with integrated peripherals running at 1GHz and drawing 5W, which will be integrated into the next generation of smartphones/mini-computers.


 

StinkyPinky

Diamond Member
Jul 6, 2002
6,986
1,283
126
So we're going to have office workers with 16-core CPUs running Word? What's the bet they'll still complain it's slow?
 

garritynet

Senior member
Oct 3, 2008
416
0
0
I think by then the whole MB/CPU/GC/RAM combo will be a single card. Companies will compete with each other based on features offered and not performance, like MB companies do with chipsets now. That's why I think Intel/AMD want in on the chipset/graphics market; room to grow.


 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Let's see.

"Mainstream" meaning non-server PC chips

2009: Nehalem: 4 core mainstream
2010: Westmere: 6 core mainstream
2011: Sandy Bridge: 8 core mainstream?
2012: Ivy Bridge: ?? core mainstream?
2013: Haswell: ?? core mainstream?

Nehalem was thought to bring mainstream 8 core chips, but that's not what Intel is planning. The only 8 core version of Nehalem is Nehalem-EX (Beckton), which is for high-end servers. Westmere is a 6 core.

Intel had a presentation on future architectures and multi-core at IDF Spring 2005 (http://www.anandtech.com/cpuch...owdoc.aspx?i=2368&p=2). According to them, there will be a point where simply adding more relatively large cores stops paying off, and the architecture will go a couple of different ways.

A couple of interesting graphics:
-1st pic, "Large on-chip memory subsystem": I think that points to on-die DRAM.
-2nd pic, 2011-2012 seem to be the last years for the "Array of big, OOO IA cores". That last chip could be Haswell and its derivative.
-3rd pic, Haswell on 16nm might be the "CMP" with 10 cores. After that we might see the "Scalar plus many core" design as the tock, eventually moving to 10-100 small cores for multi-threaded work and one really large core for single-threaded work.

Intel 2015: http://www.anandtech.com/cpuch...howdoc.aspx?i=2367&p=3
"By 2015 Rattner predicted that Intel CPUs would have 10s or 100s of cores on each die, which in turn would require a lot of memory bandwidth."

Then let's lay out the roadmap:
2014: Successor to Haswell, 10 core mainstream
2015: New architecture featuring one large core for single-threaded work and many small cores for multi-threaded work, with large on-die DRAM to reduce latency and increase bandwidth

Interesting years ahead...
 

faxon

Platinum Member
May 23, 2008
2,109
1
81
Ooh, definitely. The way computer processing is going now, we're eventually going to see a complete convergence of everything into one unit, with the exception of audiophile-grade sound processing, which due to EMI will still come as an external unit. It will be built into an OLED display and have a power supply of some sort on the scale of what you see in current laptop computers, with space for storage devices and portable media readers. Expect laptops to turn into something like what you see them using in Star Trek episodes, where the whole unit is a single tablet with touch-screen input. If need be, the unit can be propped up and project a keyboard onto the surface you are using it on (this projector already exists).

The most likely event after that is that we're going to move away from CMOS transistors to something else entirely, and Moore's law will be replaced by another one entirely. We will probably also see the adoption of 3D chip interconnect architectures like what IBM is currently experimenting with in their supercomputers. The biggest performance increase will probably be netted from putting enough cache space on die to fit the entire OS and whatever programs you are running directly into the CPU cache; this would be possible in theory with the adoption of the memristor as a chip component.

And this is just one way it could go. What would happen if we were to create a true 3D interface directly with the computer via a manipulable 3D display of some sort?
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
I'm not sure a "one size fits all" approach will work. Sure, if people want to sacrifice performance for a smaller PC. On a tech site, that really means "the end of PC gaming". You aren't gonna fit the GTX280 and Nehalem onto one die. Things like Fusion's GPU-on-CPU are for cost savings and power savings, not performance. There will always be 3D apps that require dedicated video cards and CPUs.
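To put very rough numbers on that point (a back-of-envelope of mine, using approximate die sizes recalled from public specs, not figures from this thread): even granting an ideal optical shrink of GT200 to 45nm, the two dies together land somewhere around 540mm^2, roughly twice the size of a big 2008 desktop die.

```python
# Back-of-envelope on "you aren't gonna fit the GTX280 and Nehalem into one die".
# Die sizes and nodes below are approximate public figures as I recall them;
# the ideal-shrink rule ignores uncore, I/O pads and yield, so it flatters the idea.

chips = {
    # name: (die_area_mm2, process_nm)
    "GTX 280 (GT200)": (576, 65),
    "Nehalem (Bloomfield)": (263, 45),
}

target_nm = 45
combined_mm2 = 0.0
for name, (area_mm2, node_nm) in chips.items():
    shrunk = area_mm2 * (target_nm / node_nm) ** 2  # ideal area scaling
    combined_mm2 += shrunk
    print(f"{name}: ~{shrunk:.0f} mm^2 if ideally shrunk to {target_nm}nm")

print(f"Combined: ~{combined_mm2:.0f} mm^2, roughly twice a big 2008 desktop die")
```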
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: IntelUser2000
I'm not sure a "one size fits all" approach will work. Sure, if people want to sacrifice performance for a smaller PC. On a tech site, that really means "the end of PC gaming". You aren't gonna fit the GTX280 and Nehalem onto one die. Things like Fusion's GPU-on-CPU are for cost savings and power savings, not performance. There will always be 3D apps that require dedicated video cards and CPUs.

Like most technology, the drive towards the end product will not be so much consumer demand as the threat of the competition doing it first (i.e. Pentium dual-core vs. X2) and the subsequent impact their marketing team will have on convincing customers they need the newest tech regardless of whether they really do (Phenom vs. Kentsfield, etc.).

Will a "one size fits all" approach work? Absolutely, if we are talking about Intel/AMD shareholders and employees. Will it work for consumers? If the marketing teams are worth their pay then yes, yes they will be convinced their systems are inferior if they do not "upgrade" to the latest and greatest.

And why would one make such an all-in-one PC-on-a-chip? Out of fear that the other guy is going to, and that they are going to do it before you do.

Some of the easiest project requests I ever made to my bosses came from vaguely framing my project as addressing something our competition was already doing and we were at risk of falling behind on. Budget money just loves to flow from management toward closing perceived gaps with the competition.
 

Denithor

Diamond Member
Apr 11, 2004
6,298
23
81
Originally posted by: IntelUser2000
I'm not sure a "one size fits all" approach will work. Sure, if people want to sacrifice performance for a smaller PC. On a tech site, that really means "the end of PC gaming". You aren't gonna fit the GTX280 and Nehalem onto one die. Things like Fusion's GPU-on-CPU are for cost savings and power savings, not performance. There will always be 3D apps that require dedicated video cards and CPUs.

I tend to agree with this statement. Although there will be "a" GPU incorporated into the CPU package, I doubt it will really replace the discrete GPU for gaming and 3D CAD (or really any 3D-accelerated application). Today we have integrated graphics (780G/G45/etc.) that can handle HD decoding but still completely suck when it comes to gaming (remedial at best).

I don't like the idea of not having discrete GPUs available. What happens when your on-die GPU becomes too slow for your taste? Replace the whole CPU just to improve GPU performance?
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
A Japanese-made, anatomically correct, female-type personal computer assistant.
And before you say anything... they are really building those.
 

faxon

Platinum Member
May 23, 2008
2,109
1
81
Yeah, but there are ways around that. Look at the performance per transistor of the i7 CPUs vs. the GTX280 vs. Cell in F@H. They are all pretty big on transistors with respect to each other, but have vastly different performance marks in the same application. I'm willing to bet that anything which can be coded to a GPU (Larrabee C, CUDA) WILL be coded to one, and that the realm of CPUs will stay in raw compute power. If your sole goal is to increase clock speeds for number crunching, there are plenty of ways you can do that while sacrificing performance in other areas considerably (see NetBurst).