
If CPU speeds double every 18 months...

If CPU speeds double every 18 months, why is it that Intel and/or AMD can't predict the methods of increasing the clock speeds and break Moore's law?
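For what it's worth, the arithmetic behind that premise is easy to sketch. Here's a quick Python toy (the starting speed is illustrative, and note that Moore's law is really about transistor counts, not clock speed):

```python
# Hypothetical illustration: if clock speed doubled every 18 months
# (a common misreading of Moore's law), project speeds forward
# from a ~3 GHz chip of today.
def projected_ghz(start_ghz, months, doubling_period=18):
    """Speed after `months`, doubling every `doubling_period` months."""
    return start_ghz * 2 ** (months / doubling_period)

for years in (1.5, 5, 10):
    print(f"{years:4} years: {projected_ghz(3.0, years * 12):.1f} GHz")
```

Under that (naive) extrapolation, 10 years out you'd land around 300 GHz, which is exactly why nobody treats the "law" as a clock-speed guarantee.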
 
It doesn't work like that, buddy.
 
I've always kind of wondered this. I mean, we all know that 5-10 years from now, there will probably be 5 or 6 or maybe 10 GHz processors on the market... if Intel knows that eventually they will have 10GHz processors, why can't they just skip 4-9 and go straight for 10? I know why they WON'T and DON'T... but why CAN'T they?
 
Originally posted by: acid16
I've always kind of wondered this. I mean, we all know that 5-10 years from now, there will probably be 5 or 6 or maybe 10 GHz processors on the market... if Intel knows that eventually they will have 10GHz processors, why can't they just skip 4-9 and go straight for 10? I know why they WON'T and DON'T... but why CAN'T they?
It's basically due to technology/money constraints. Revamping chip fabrication costs a lot of money. To make chips faster, a smaller chip must be created using shorter-wavelength UV light. However, when the wavelength gets too short, the light is absorbed by air molecules before it can do anything. Companies like Intel and AMD certainly have plans for faster chips, but the money or technology to get there isn't available yet.

~Aunix

 
Originally posted by: CallTheFBI
If CPU speeds double every 18 months, why is it that Intel and/or AMD can't predict the methods of increasing the clock speeds and break Moore's law?

LOL, the old "break the code" idea.

Well, here's a few reasons why.

Intel and AMD don't just sit on their @$$es and say "hmm, time to release a faster chip". They spend billions on R&D coming up with the technologies to make faster chips and incorporate these in new steppings. Sometimes, a new discovery is so potent that they even reveal this to the public.

A few technologies like this are: BUBL, organic CPU packaging, the IHS, ways of implementing 0.25um, 0.18um, 0.13um, 0.09um (90nm), etc...

They combine these technologies in their new chips.

I see what you're saying though: why don't they just skip 0.09um, for example (i.e. Prescott), and try to make a 0.03um core or something like that?

There are a lot of problems they have to iron out for each stepping. Notice that every stepping runs at a lower core voltage, yet draws more current (amps) and dissipates more heat (in watts; current CPUs are approaching 100W!). The problem is that at too low a voltage, a CPU can't function (you need a set voltage to run a specific frequency, give or take a bit). However, the faster a CPU runs (MHz, GHz, etc.), the more the signal quality degrades between the transistors. How do you increase signal quality? By upping the voltage! But too high a voltage will fry the core! So they need to find a balance for every stepping. Then they have to figure out (i.e. invent) a new way to pull all of that heat away from the core!
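To put rough numbers on that voltage/heat balancing act, here's a toy sketch using the standard dynamic-power approximation P ≈ C·V²·f. The capacitance and voltage figures below are made up purely for illustration:

```python
# Rough sketch of why the voltage/frequency balance is hard: dynamic
# (switching) CPU power scales roughly as P = C * V^2 * f, where C is
# the effective switched capacitance. All numbers are illustrative.
def dynamic_power(capacitance_nf, voltage, freq_ghz):
    """Approximate switching power in watts: P = C * V^2 * f."""
    return capacitance_nf * 1e-9 * voltage**2 * freq_ghz * 1e9

base    = dynamic_power(15, 1.5, 3.0)  # a 3 GHz core at 1.5 V: ~100 W
faster  = dynamic_power(15, 1.5, 4.0)  # push it to 4 GHz: power grows
lower_v = dynamic_power(15, 1.3, 4.0)  # drop to 1.3 V to claw it back
print(f"{base:.0f} W -> {faster:.0f} W -> {lower_v:.0f} W")
```

The V² term is why every die shrink chases a lower core voltage, and why voltage can't be raised freely just to clean up signal quality.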

You can see that every time they shrink the die, a new set of problems and challenges occurs, or rather the old ones come back with a vengeance! And this is a big simplification, too (I really don't understand the really, really technical stuff yet).
 
10 years from now? Hmm, I'd say we're over 50GHz by then, or at least at one that's the equivalent multiple in ability compared to what we are using now.
 
Naw, I doubt we'll continue with a single chip running at insane speeds. Tech like IBM's Cell seems very intriguing and seems to have more promise for the future. Dozens of chips working at 1GHz+ instead of one chip running at 10+GHz...
 
Originally posted by: bunnyfubbles
Naw, I doubt we'll continue with a single chip running at insane speeds. Tech like IBM's Cell seems very intriguing and seems to have more promise for the future. Dozens of chips working at 1GHz+ instead of one chip running at 10+GHz...

If that's so effective, why haven't other people started with this? There are SMP systems, but a dual system isn't twice the speed. How would cells be different? Unless you had one "main" processor that split all the threads up somehow, you wouldn't be able to effectively use all the cells to produce that much more performance (a cell of 4 x 1GHz would surely be similar to a multi-proc system with, say, 4 Xeons), unless they totally changed things somehow.
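The "not twice the speed" observation has a classic formula behind it: Amdahl's law. A quick sketch (the 80% parallel fraction is just an example figure):

```python
# Amdahl's law: if a fraction p of the work can run in parallel,
# N processors give a speedup of 1 / ((1-p) + p/N).
def speedup(parallel_fraction, n_cpus):
    return 1 / ((1 - parallel_fraction) + parallel_fraction / n_cpus)

# e.g. a program that's 80% parallelizable:
print(f"2 CPUs:    {speedup(0.8, 2):.2f}x")   # well short of 2x
print(f"4 CPUs:    {speedup(0.8, 4):.2f}x")
print(f"1000 CPUs: {speedup(0.8, 1000):.2f}x")  # capped near 1/(1-p) = 5x
```

The serial fraction is why piling on processors has diminishing returns unless the software is rewritten to be almost entirely parallel.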
 
It is new technology; obviously the way things are right now supports single-CPU systems, software and supporting hardware alike. All I'm saying is that it would be easier to implement several CPUs working together than to have just one really fast one. Software and hardware would have to evolve to support such a system; IMO it just seems more promising.
 
Originally posted by: DX2Player
10 years from now? Hmm, I'd say we're over 50GHz by then, or at least at one that's the equivalent multiple in ability compared to what we are using now.


I fully agree with this to an extent. They will be at least 50GHz, or at least 50 times faster than current, by then.
 
Originally posted by: Lonyo
Originally posted by: bunnyfubbles
Naw, I doubt we'll continue with a single chip running at insane speeds. Tech like IBM's Cell seems very intriguing and seems to have more promise for the future. Dozens of chips working at 1GHz+ instead of one chip running at 10+GHz...

If that's so effective, why haven't other people started with this? There are SMP systems, but a dual system isn't twice the speed. How would cells be different? Unless you had one "main" processor that split all the threads up somehow, you wouldn't be able to effectively use all the cells to produce that much more performance (a cell of 4 x 1GHz would surely be similar to a multi-proc system with, say, 4 Xeons), unless they totally changed things somehow.
I'm not certain what you're asking here, and my reply doesn't address the original question, but I put in bold the parts of your post that made me go huh? Supercomputers use microprocessor arrays just as was mentioned, so it is effective and everyone is doing it 😉. The IBM ASCI White has 8,192 copper microprocessors, 6.2 terabytes of memory, and 512 RS/6000 375MHz POWER3 SMP High Nodes, and that hardware needs updating as there are now faster supercomputers :Q. They are capable of running serial, symmetric multiprocessor (SMP), and parallel workloads, so the statement "unless you had like one "main" processor that split all the threads up somehow, you wouldn't be able to effectively use all the cells to produce that much more performance" just doesn't hold up.
 
Originally posted by: DAPUNISHER
Originally posted by: Lonyo
Originally posted by: bunnyfubbles
Naw, I doubt we'll continue with a single chip running at insane speeds. Tech like IBM's Cell seems very intriguing and seems to have more promise for the future. Dozens of chips working at 1GHz+ instead of one chip running at 10+GHz...

If that's so effective, why haven't other people started with this? There are SMP systems, but a dual system isn't twice the speed. How would cells be different? Unless you had one "main" processor that split all the threads up somehow, you wouldn't be able to effectively use all the cells to produce that much more performance (a cell of 4 x 1GHz would surely be similar to a multi-proc system with, say, 4 Xeons), unless they totally changed things somehow.
I'm not certain what you're asking here, and my reply doesn't address the original question, but I put in bold the parts of your post that made me go huh? Supercomputers use microprocessor arrays just as was mentioned, so it is effective and everyone is doing it 😉. The IBM ASCI White has 8,192 copper microprocessors, 6.2 terabytes of memory, and 512 RS/6000 375MHz POWER3 SMP High Nodes, and that hardware needs updating as there are now faster supercomputers :Q. They are capable of running serial, symmetric multiprocessor (SMP), and parallel workloads, so the statement "unless you had like one "main" processor that split all the threads up somehow, you wouldn't be able to effectively use all the cells to produce that much more performance" just doesn't hold up.

Isn't a cell system like a multi-processor system? And if it is, then surely most apps which don't really use this (home apps/games) wouldn't benefit much, would they?
 
You're constraining your speculation/thought processes to the desktop computing arena, and there's really no way beyond companies' road maps to see where it'll go. So bunnyfubbles' speculation concerning the adaptation of some supercomputing technologies for the desktop market has as much validity as anyone's blanket statements concerning 50GHz single microprocessors, IMO. And since DNA computing and quantum computing are already taking their first baby steps, it's quite possible computing will make a radical departure from Moore's law in the not-too-distant future.

EDIT: BTW, for those speculating on 50GHz microprocessors, do you think you'll be lookin' at a huge system bottleneck because of your storage medium? 😛😉
 
Originally posted by: DAPUNISHER
You're constraining your speculation/thought processes to the desktop computing arena, and there's really no way beyond companies' road maps to see where it'll go. So bunnyfubbles' speculation concerning the adaptation of some supercomputing technologies for the desktop market has as much validity as anyone's blanket statements concerning 50GHz single microprocessors, IMO. And since DNA computing and quantum computing are already taking their first baby steps, it's quite possible computing will make a radical departure from Moore's law in the not-too-distant future.

EDIT: BTW, for those speculating on 50GHz microprocessors, do you think you'll be lookin' at a huge system bottleneck because of your storage medium? 😛😉

no, we should probably have faster storage by then too (on a consumer level)...what about solid state?
 
Originally posted by: MrDudeMan
Originally posted by: DAPUNISHER
You're constraining your speculation/thought processes to the desktop computing arena, and there's really no way beyond companies' road maps to see where it'll go. So bunnyfubbles' speculation concerning the adaptation of some supercomputing technologies for the desktop market has as much validity as anyone's blanket statements concerning 50GHz single microprocessors, IMO. And since DNA computing and quantum computing are already taking their first baby steps, it's quite possible computing will make a radical departure from Moore's law in the not-too-distant future.

EDIT: BTW, for those speculating on 50GHz microprocessors, do you think you'll be lookin' at a huge system bottleneck because of your storage medium? 😛😉

no, we should probably have faster storage by then too (on a consumer level)...what about solid state?
I was being quasi-sarcastic, so it probably slipped under your sarcasm detector's range 🙂
 
actually i agree with Bunny! :Q

dual/quad mode everything is gonna hafta be the future.
i mean, how much can we get out of a "single" chip or drive?
pretty soon the CPUs are gonna be so small we won't be able to cool them.

the only answer is dual/quad mode CPUs, RAM, drives, GPUs, etc.

actually it sounds like a lot of fun 😀
imagine being able to upgrade anything at will just by adding one more to its "array"? :Q 😉 🙂
 
Originally posted by: THUGSROOK
actually i agree with Bunny! :Q

dual/quad mode everything is gonna hafta be the future.
i mean, how much can we get out of a "single" chip or drive?
pretty soon the CPUs are gonna be so small we won't be able to cool them.

the only answer is dual/quad mode CPUs, RAM, drives, GPUs, etc.

actually it sounds like a lot of fun 😀
imagine being able to upgrade anything at will just by adding one more to its "array"? :Q 😉 🙂
LOL, yeah, we got Bunny's back 😛
BTW, you thought you had to address power & cooling needs now? Wait for your 50GHz CPU 😉
 
sigs of the future....

4x P5 3.0 @ 5.0ghz each | 4x 300fsb | 4x 600qcddr
Asus P5-4X | 4x XMS4200 | 2x GF6000 | 1x Riva TNT

I'm lacking a little by only running 3 vid cards 😉

hehe 😀
 
I think a cluster setup (parallel processing) will be the way, and software developers will just have to start compiling their programs to take advantage of the multiple processors...


I think parallel processing has been used mostly on large governmental types of systems, but basically you take any procedure, break it down into 4 smaller procedures, let each processor work on one, and have it reassembled when it comes out the other end... Been a long time since I read about something like it...
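That "break it into 4 and reassemble" idea can be sketched in a few lines of Python; the sum-of-squares workload here is purely illustrative:

```python
# A toy version of "break a procedure into 4 smaller procedures,
# let each processor work on one, and reassemble the results".
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    # the "smaller procedure": sum of squares over one slice of the data
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # split the input into `workers` interleaved chunks
    chunks = [data[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # each worker processes one chunk; sum() reassembles at the end
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    data = list(range(1000))
    print(parallel_sum_of_squares(data))  # matches sum(x*x for x in data)
```

The hard part in practice is exactly what the thread is circling around: the split-and-reassemble step only pays off when the work actually divides cleanly.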
 
Originally posted by: THUGSROOK
oh c'mon Mikki ~ the TNT is for 2D mode and the 2 GF6000s are in SLI mode!

like the old voodoo setups 😉
mmmmm....I'm going with the Matrox Parhelia VII for the 2d....😛
 
A move from a single CPU to emulate what would be the equivalent processing power of current MHz scaled to roughly 50GHz (Intel) 10 years from now is not a verifiable necessity. Unforeseen technological advancements by means of increased architecture efficiency will undoubtedly lead to, say, 20GHz chips that equal current theoretical 50GHz chips. The most prominent and immediate example would be the AMD-to-Intel relationship. This is not to say that multiple processors won't happen; it's more likely we will see a hybrid of the two, at least at first. Maybe a mix between dual processors, hyperthreading-like technology, and 16-bit to 32-bit RDRAM-like innovations.
 