128GHz CPU by the year 2011


aigomorla

CPU, Cases & Cooling Mod / PC Gaming Mod / Elite Member
Super Moderator
Sep 28, 2005
128 GHz...

I wonder if I even have that collectively...

3.2 GHz x 12 on my Gulftown server = 38.4
4.5 GHz x 6 on my main machine = 27
2 GHz x 4 on my Sammy SAN = 8
4.5 GHz x 4 on the 2600K = 18
4 GHz x 4 on the HTPC = 16
= 107.4 GHz

You know what, I do have enough to reach 128 GHz collectively... if I include my office machines. But personally, at most 107.4 GHz is at my raw disposal.

128 GHz is just insane, and that's adding it up collectively...
That's up in Mark's territory with F@H.
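
For anyone checking the math, a minimal sketch of the same tally (machine names and per-core clocks taken from the post above):

```python
# Aggregate clock across the machines listed above: per-core GHz x core count.
rigs = {
    "Gulftown server": (3.2, 12),  # 38.4 GHz
    "main machine":    (4.5, 6),   # 27 GHz
    "Sammy SAN":       (2.0, 4),   # 8 GHz
    "2600K":           (4.5, 4),   # 18 GHz
    "HTPC":            (4.0, 4),   # 16 GHz
}

total_ghz = sum(clock * cores for clock, cores in rigs.values())
print(f"{total_ghz:.1f} GHz collectively")  # 107.4 GHz
```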
 

birthdaymonkey

Golden Member
Oct 4, 2010
My favourite post:

"Rest assured that the higher level skills required to operate our high performance rigs allows PC users greater success with the oppposite sex whereas women know mac users can merely point and click."

I know it drives my ladyfriend mad with desire when I won't come to bed cuz I'm playin' with muh rig.
 
Dec 30, 2004
That still doesn't mean we won't get a 10 GHz CPU in the future. Eventually you hit as much of a wall with parallelization as you do when trying to ramp up the clock speed. It might be a while before we get there, but sooner or later, increasing clock speeds is going to be the path of least resistance to get better performance.

what??? why not?? I don't understand.
 

NostaSeronx

Diamond Member
Sep 18, 2011
TILE-Gx processors have 100 cores clocked in the range of 1 GHz to 1.5 GHz.

TILE-Gx100™
100 x 1.5 GHz = 150 GHz of possible aggregate throughput, if parallelization is not a problem.
 

mrjoltcola

Senior member
Sep 19, 2011
what??? why not?? I don't understand.

Due to diminishing returns of parallelism for many, if not most, classes of code.

For general-purpose use, parallelism quickly reaches a point of zero return, where 8 cores are as good as 12, as good as 16, as good as 32 (for the vast majority of uses; rendering is the most common mainstream exception).

Certain algorithms and software parallelize very well: rendering, where we break the render down into regions; simulations, where we can split the overall load into separate threads for AI, rendering, and I/O; or database, web, and mail servers, where more users can be served.

But many more algorithms only go faster with faster linear (sequential) processing. A code block that depends on other code blocks cannot be parallelized, at least not beyond the limits of speculation (branch prediction is effective, but only to a point). That is why, at some point, we'll exhaust the reasonable performance yield of multiple cores and have to look back to clock speed, or other ways of accelerating single-thread throughput, for further gains.

Software can get better, up to a point. But I'd argue that most software is pretty decent right now as far as multi-core goes. We've been using threads in programming for a couple of decades, and teaching them in Computer Science programs for at least half that. So people who blog and postulate that software just needs to get better to allow us to keep advancing with multi-core usually aren't well enough informed, IMO.
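
The diminishing returns described above are captured by Amdahl's law: if a fraction p of a workload parallelizes and the rest is sequential, the speedup on n cores is 1 / ((1 - p) + p / n). A minimal sketch (the p values are illustrative assumptions, not measured workloads):

```python
# Amdahl's law: speedup on n cores when fraction p of the work parallelizes.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

cores = (8, 16, 32, 100)
for p in (0.50, 0.90, 0.99):  # illustrative parallel fractions (assumptions)
    row = ", ".join(f"{n} cores -> {amdahl_speedup(p, n):.1f}x" for n in cores)
    print(f"p={p:.2f}: {row}")

# p=0.50: 8 cores -> 1.8x ... 100 cores -> 2.0x  (8 is about as good as 100)
# p=0.99: 8 cores -> 7.5x ... 100 cores -> 50.3x (rendering-style workloads)
```

With p = 0.50, even 100 cores yield under 2x, which is exactly the "8 is as good as 32" point above; only highly parallel workloads like rendering keep scaling.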
 

SHAQ

Senior member
Aug 5, 2002
That still doesn't mean we won't get a 10 GHz CPU in the future. Eventually you hit as much of a wall with parallelization as you do when trying to ramp up the clock speed. It might be a while before we get there, but sooner or later, increasing clock speeds is going to be the path of least resistance to get better performance.

10 GHz is nothing once they move away from silicon.
 

grkM3

Golden Member
Jul 29, 2011
Intel could hit 10 GHz now; their NetBurst was way ahead of its time, but their process could not scale well.

A 22nm tri-gate P4 would hit 10 GHz on air if Intel made it today.
 

lol123

Member
May 18, 2011
Due to diminishing returns of parallelism for many, if not most, classes of code. [...]
Finally, a sensible post in these forums.