Originally posted by: Idontcare
Originally posted by: magnumty
i cant even imagine, 8 cores, 16 cores? it's gonna be wicked.
Well where were we 10yrs ago?
http://everything2.com/index.pl?node_id=1362904
1998 was the year of the 250nm Pentium 2 (Deschutes) and K6-2 (400MHz and 9.4m xtors).
Now we have 45nm quadcore chips with ~730m xtors.
1998-2008
cores 1 -> 4 in 10yrs (4x)
process 250nm -> 45nm in 10yrs (0.18x)
xtors 9m -> 730m in 10yrs (81x)
clockspeed 400MHz -> 3.2GHz (8x)
2008-2018 projected
cores 4 -> 16 in 10yrs (4x)
process 45nm -> 8nm in 10yrs (0.18x)
xtors 730m -> 59.2b (59,200m) in 10yrs (81x)
clockspeed 3.2GHz -> 25.6GHz (8x)
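The projection is just the 1998-2008 factors applied again to the 2008 baseline; a quick sketch of that arithmetic (the post's round numbers, not roadmap data):

```python
# Apply the observed 1998-2008 scaling factors to the 2008 baseline
# to reproduce the 2018 projection above. All figures are the post's
# round numbers, not industry roadmap data.

baseline_2008 = {
    "cores": 4,
    "process_nm": 45.0,
    "transistors": 730e6,
    "clock_ghz": 3.2,
}

# Factors observed over 1998-2008:
factors = {
    "cores": 4,                   # 1 -> 4
    "process_nm": 45.0 / 250.0,   # 250nm -> 45nm (~0.18x)
    "transistors": 730e6 / 9e6,   # ~81x
    "clock_ghz": 8,               # 400MHz -> 3.2GHz
}

projected_2018 = {k: v * factors[k] for k, v in baseline_2008.items()}
# -> 16 cores, ~8.1nm, ~59.2 billion transistors, 25.6GHz
```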
So how does a 16 core, 60billion transistor, 25GHz CPU sound to you?
(what would you do with 60 billion xtors? well you'd need that much to support about 1.5GB of on-die cache and have a few billion left over for your 16 cores of logic)
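A quick check of that cache budget, assuming a conventional 6-transistor SRAM cell (my assumption; tag arrays and ECC ignored):

```python
# Transistor cost of on-die cache, assuming standard 6T SRAM cells.
# Tag arrays and ECC are ignored, so this slightly undercounts.

GIB = 2**30  # bytes in one GiB

def sram_transistors(cache_bytes, xtors_per_bit=6):
    return cache_bytes * 8 * xtors_per_bit

cache_xtors = sram_transistors(1.5 * GIB)  # ~77 billion
```

With plain 6T cells, 1.5GB would actually overshoot a 60-billion budget (~77 billion transistors), so the figure only works out with a denser cell, e.g. 1T eDRAM (~13 billion transistors for 1.5GB), which would leave plenty for 16 cores of logic.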
I likes the idea of such a processor, me hates the prospects of waiting 10yrs to gets.
Originally posted by: myocardia
Originally posted by: nerp
So how fast will that load Photoshop?
Even Photoshop CS4, which was just released a day or two ago, doesn't benefit from quads. Maybe by then, though, a quad will be faster than a dual-core, in CS18. I'm not holding my breath, though.
Originally posted by: Idontcare
You're pretty savvy when it comes to computer technology, programming technology, etc., so no doubt you are familiar with Gene Amdahl's law and the modifications proposed thereafter by Almasi and Gottlieb to incorporate the performance penalties associated with interprocessor communication overhead.
The best way to deal with Amdahl's speedup limitations is to put the serial code on a faster "head" node while farming out the parallel code operations to more numerous but "dumber" nodes.
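For readers who haven't seen it, Amdahl's law bounds speedup by the serial fraction of the work; the sketch below adds a simple constant overhead term as a stand-in for the interprocessor-communication cost (the overhead model is my simplification, not the exact Almasi/Gottlieb formulation):

```python
def amdahl_speedup(p, n, comm_overhead=0.0):
    """Speedup on n processors when fraction p of the work is
    parallelizable; comm_overhead is extra time per run, expressed
    as a fraction of the original single-processor runtime."""
    return 1.0 / ((1.0 - p) + p / n + comm_overhead)

# Even 95%-parallel code tops out well below n:
print(amdahl_speedup(0.95, 16))        # ~9.14x, not 16x
print(amdahl_speedup(0.95, 16, 0.02))  # communication drags it lower
```

The "fast head node" strategy attacks the (1 - p) term directly: running the serial portion on a faster core shrinks the dominant term in the denominator, which is exactly why it beats adding more dumb nodes.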
Originally posted by: nerp
Originally posted by: myocardia
Originally posted by: Idontcare
process 45nm -> 8nm in 10yrs (0.18x)
So how does a 16 core, 60billion transistor, 25GHz CPU sound to you?
Assuming they are able to get the process node anywhere near 8nm, don't you think they'll be putting a lot more than 16 cores in there? We had 4 cores @ 65nm, and we have 6 @ 45nm, and @ 32nm, we'll have no problem squeezing 8 cores onto each die. At that rate, I'm seeing at least 12 cores per die @ 22nm, and 16 cores @ 17nm. Wouldn't that put us @ 24 cores @ 12nm, and 32 cores @ 8nm?
So how fast will that load p0rn?
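Those per-node guesses are roughly consistent with ideal density scaling; a sanity check (idealized, ignoring uncore, cache, and yield, which all eat into the budget in practice):

```python
# An ideal shrink from node A to node B scales transistor density by
# (A/B)^2, so at a fixed die size the core budget grows by roughly
# that factor. Real scaling is worse (uncore, cache, yield).

def density_gain(from_nm, to_nm):
    return (from_nm / to_nm) ** 2

# 45nm carried 6 cores; an ideal shrink to 8nm would leave room for
# roughly 6 * 31.6 ≈ 190 same-size cores, so 32 cores at 8nm is, if
# anything, conservative by this (very optimistic) metric.
gain = density_gain(45, 8)  # ~31.6x
```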
Originally posted by: Idontcare
So how long do you think it will be for us to realistically see 8 cores on one die in a consumer/desktop CPU? Another 5 yrs? Might not be too far off on that. Maybe 4 yrs.
Originally posted by: keysplayr2003
By the way, isn't Larrabee supposed to have 32 cores? Due out next year or in 2010?
What CPUs will we be using in 10 years?
Originally posted by: Brakner
By 2018 we will be using some sort of AI which will most likely take control of all machines, realize humans are a virus and launch....er, that sounds sort of familiar, hrm.
Originally posted by: IntelUser2000
I'm not sure if a "one size fits all" approach will work. Sure, if people want to sacrifice performance for a smaller PC. On a tech site, that really means "End of PC gaming". You aren't gonna fit the GTX280 and Nehalem into one die. Things like Fusion's GPU-on-CPU are for cost savings and power savings, not performance. There will always be 3D apps that require dedicated video cards and CPUs.
