Are we ever going to reach an equilibrium of sorts where CPU speed becomes irrelevant?


Regs

Lifer
Aug 9, 2002
16,666
21
81
Originally posted by: Duvie
I agree that in some instances we are getting to a point where CPUs are starting to be limited... I/O limitations are starting to make CPU speed less relevant. I have many apps where, with 4 cores, I am limited even in a RAID setup...

If we move to 8 cores we are going to see major limitations unless other areas of technology keep pace...

The hard drive has been a major bottleneck for god knows how long. Hopefully the new solid-state hard drives, or even RAM-disk flash drives in the future, will be something worth our while once they can fill an entire SATA channel's bandwidth.
 

Duvie

Elite Member
Feb 5, 2001
16,215
0
71
Originally posted by: Regs
Originally posted by: Duvie
I agree that in some instances we are getting to a point where CPUs are starting to be limited... I/O limitations are starting to make CPU speed less relevant. I have many apps where, with 4 cores, I am limited even in a RAID setup...

If we move to 8 cores we are going to see major limitations unless other areas of technology keep pace...

The hard drive has been a major bottleneck for god knows how long. Hopefully the new solid-state hard drives, or even RAM-disk flash drives in the future, will be something worth our while once they can fill an entire SATA channel's bandwidth.



They have been, but even then I have always been able to run even today's fastest dual cores at 100% SUSTAINED LOAD... Now with quad cores at 3.2GHz I can only do that by lowering the speed, using RAID and multiple-drive configurations, and using HD codecs that already require more CPU crunching. Regular MPEG-2 DVD crunching is not enough work anymore; it has long passed the point of real-time encoding, and now the HDD is holding up the process with its reading and writing.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
The misinformation in this thread is horrible.

The speed of light is related to computation in that signals have to travel some distance across a chip for the various pieces to communicate. It's often convenient for the signals to take at most a full clock cycle worth of time to do this, which means a chip whose longest length is 1cm could be clocked no higher than 30 GHz. In reality, you would allow paths that took more than one clock cycle if you wanted to run faster. Of course, you'd need to allocate some of the cycle for the sender to create the light pulse and the receiver to receive it, and we don't even have optical interconnect on chips yet anyway. If you want to do any logic, that comes out of your time budget too. The wavelength of light doesn't really come into play here (I'm sure it matters... multi-meter radio wavelengths would probably be a bad idea... but it's not the stuff BitByBit was talking about).
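A quick back-of-the-envelope check of that bound (a sketch; only the 1 cm die length comes from the paragraph above, and the speed of light is rounded):

```python
# If a signal may spend at most one clock period crossing the die,
# the period must be at least die_length / c.

C = 3.0e8          # speed of light, m/s (rounded)
die_length = 0.01  # 1 cm in metres

min_period = die_length / C      # seconds
max_clock = 1.0 / min_period     # hertz
print(f"max clock for a 1 cm die: {max_clock / 1e9:.0f} GHz")  # -> 30 GHz

# Conversely, at a typical few-GHz clock, even light itself only covers:
per_cycle = C / 3.0e9            # metres per cycle at 3 GHz
print(f"light travel per cycle at 3 GHz: {per_cycle * 100:.0f} cm")  # -> 10 cm
```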

Now, as I said, we don't currently use light for sending signals. We don't even use the fastest way of signaling electrically. Wires are effectively chains of resistors and capacitors; you send a 1 by charging the capacitors up, and a 0 by discharging them. An easy way to think about this is a trough or wide pipe, which you fill with water or empty of water (the capacitance is basically how much water it takes to raise/lower the water level, and the resistance is whatever slows the flow of the water into / through the trough). With this method, you can currently send a signal all the way across a modern CPU in a few cycles at clock frequencies of a few GHz. The signals propagate incredibly slowly relative to the speed of light.
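The "chain of resistors and capacitors" picture can be sketched numerically with the standard Elmore delay model (the model isn't named in the post, and every number below is an illustrative assumption, not a measured figure):

```python
# Model a wire as N series RC segments and sum the Elmore delay:
# each capacitor charges through all the resistance upstream of it.

def elmore_delay(r_per_seg, c_per_seg, n_segs):
    """Elmore delay of a uniform RC ladder."""
    return sum(i * r_per_seg * c_per_seg for i in range(1, n_segs + 1))

# Assume 1 mm of narrow wire split into 100 segments, with made-up but
# order-of-magnitude-plausible totals of 1 kOhm and 200 fF per mm.
n = 100
r = 1e3 / n       # ohms per segment
c = 200e-15 / n   # farads per segment

d1 = elmore_delay(r, c, n)      # delay for 1 mm
d2 = elmore_delay(r, c, 2 * n)  # delay for 2 mm at the same per-mm values

print(f"1 mm: {d1 * 1e12:.0f} ps, 2 mm: {d2 * 1e12:.0f} ps")
# Doubling the length roughly quadruples the delay - unlike light,
# where delay grows only linearly with distance.
```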

A faster way to send signals (which I don't understand well) is transmission line signaling - imagine the same trough of water filled up half way; to send a signal you just create a wave on the water's surface. Signaling this way gets you much closer to the speed of light, but unfortunately the sending and receiving circuits are physically gigantic relative to the normal stuff, and it's more complex (for example, you have to worry about waves being reflected from the receiver, just like waves in a trough of water). If you read and understand this paper, you'll know more about this than I do (so explain it to me :)). Look at Figure 3 in the PDF to understand just how large this stuff is compared to normal systems.

Getting back to the wavelength of light, it comes into play when you're manufacturing chips, rather than when you're using them. Basically, you can draw nice and sharp lines and shapes with light as long as the wavelength of the light is smaller than the shapes you're trying to draw. When you want to draw something smaller than the wavelength of the light, you run into problems - it gets hard to control widths, and you have to resort to a lot of tricks to get a useful result. Right now, the industry draws 65/45nm features on chips using 193nm light. The nice rectangles we draw when doing layout show up as blobs. Intel has spent years and huge amounts of money trying to get the lithography process working with 13nm light, but I guess the results haven't been great.
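The usual way this wavelength-vs-feature-size limit is quantified is the Rayleigh criterion, half-pitch ≈ k1·λ/NA (the formula, and the NA and k1 values below, are standard assumptions, not figures from the post):

```python
# Smallest printable half-pitch per the Rayleigh criterion.
# k1 ~0.25-0.4 with aggressive resolution-enhancement tricks;
# NA is the numerical aperture of the projection lens.

def rayleigh_half_pitch(wavelength_nm, na, k1=0.35):
    return k1 * wavelength_nm / na

dry = rayleigh_half_pitch(193, na=0.93)  # dry 193 nm scanner (assumed NA)
wet = rayleigh_half_pitch(193, na=1.35)  # water-immersion lens (assumed NA)

print(f"dry: {dry:.0f} nm, immersion: {wet:.0f} nm")
# Either way, printing 65/45 nm shapes with 193 nm light means working
# well below the wavelength - hence the "blobs" and the bag of tricks.
```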

Keep in mind that the speed of light is a measure of how far light goes in a given amount of time, and it's distinct from the frequency of the clock in a CPU. It's handy to call the "clock frequency" of a chip the "clock speed", but the word "speed" is being used differently in that sense than it is when we're talking about the "speed of light". "Clock speed" is a count of clock ticks in a second; "light speed" is measuring the distance light travels in a second.

Coming to the frequency of light, it's not really related to the clock frequency either. Radio waves have frequencies in the range of 1MHz (for AM radio), 100MHz (for FM radio), to 2.4GHz (wireless networks, microwaves) and higher. Visible light is in the hundred-terahertz+ range. UV light, x-rays, and gamma rays go from the petahertz to the exahertz ranges. We can generate all of these waves easily, but they're really unrelated to the frequency of the clock in a CPU.

The fastest clock frequency (assuming we want to send the clock as an electrical signal rather than light) we could theoretically generate right now is probably around 500GHz (based on IBM's most recent BJTs), but that's not useful for a CPU, because you need to do a certain amount of work in a clock cycle, and a single gate takes many picoseconds to produce its output. You'd also have a heck of a hard time distributing the 500GHz clock around the chip for reasons discussed above.
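The work-per-cycle point can be made concrete (a sketch; the 10 ps per-gate delay is an assumed round number in the "many picoseconds" range mentioned above):

```python
# How many gate delays fit in one clock period at various frequencies.

GATE_DELAY = 10e-12  # seconds per logic gate stage (assumed)

for freq_ghz in (3, 30, 500):
    period = 1.0 / (freq_ghz * 1e9)
    gates = period / GATE_DELAY
    print(f"{freq_ghz:>4} GHz -> {period * 1e12:6.1f} ps period, "
          f"~{gates:.1f} gate delays")
# At 500 GHz the period (2 ps) is shorter than a single gate delay,
# so no useful logic could finish inside one cycle.
```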
 

zky

Junior Member
Dec 30, 2006
1
0
0
The CPU (speed) will never be sufficient compared to the software or games!

Let's say one day a new 100-core CPU comes equipped with a 20,000MHz FSB and 100GB of L2; one might expect there will be some 3000-bit OS and some games that require that amount of power to be playable.

As more powerful CPUs & GPUs are made available, there will always be games and software that require their power to run.

To make a long story short: take Oblivion or F.E.A.R. and shove it onto a rig featuring a Pentium II and a TNT2, and well, you don't really need a PhD to figure this out. As a gamer you already foresee the answer.
 

Furen

Golden Member
Oct 21, 2004
1,567
0
0
Wow, I don't see how this thread got into physical esoterica so early on...

Here's my opinion, and I'll stay away from things I don't fully understand:

15 years ago CPUs were something like 500 times slower than current CPUs yet we still got work done with them. Why is it that CPU speeds are not irrelevant RIGHT NOW? Quite simply, this is because we are using our computers to solve more complex problems than we did back then (like running the latest version of Windows). Whenever we have spare "compute capacity" we find ways to utilize it, perhaps by doing things we would consider trivial back when resources were limited, or perhaps doing things that are above and beyond what we could achieve before.

The only way CPU "speed" would become irrelevant is if we were able to make an infinitely wide CPU, with infinite memory, caches, bandwidth, etc (yeah, gotta love infinity). This, in turn, would allow us to calculate every independent operation simultaneously and would also allow us to calculate any operations with dependencies at the same time. Such a thing is, of course, not attainable, and anything that attempts to work with this principle would be insanely inefficient. More likely, we'll see gradual progress and we'll continue seeing software "keep up" with this progress, which will, in turn, make us need yet more CPU power. Most of what a computer does is invisible to the user, so we're not a limiting factor in this regard.
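Furen's "infinitely wide CPU" point can be quantified with Amdahl's law (the standard formula; the post doesn't name it, and the 95% figure below is an arbitrary illustration): even with unlimited cores, the serial, dependency-bound fraction of the work caps the speedup.

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
# fraction of the work that parallelizes and n is the core count.

def amdahl_speedup(parallel_fraction, n_cores):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Suppose 95% of a workload parallelizes perfectly:
for n in (4, 100, 10**9):  # 10^9 cores standing in for "infinitely wide"
    print(f"{n:>10} cores: {amdahl_speedup(0.95, n):6.2f}x")
# The speedup saturates near 1 / 0.05 = 20x no matter how many cores
# you add - the dependent fraction, not core count, sets the ceiling.
```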

For video cards it's a different matter altogether. While we're not even close to lifelike, as we still render with polygons and rasterizers, there will eventually be a point where we can achieve what the physical limits of our physiology allow. After this point we won't be able to better video cards anymore, but I'm sure we'll be working on something else by then (at this point we'll probably be plugging our video cards directly into our nervous system or something like that).
 

nyker96

Diamond Member
Apr 19, 2005
5,630
2
81
I doubt it. If anything, CPUs are set to replace even graphics cards, so their speed will be ever more important.
 

Hulk

Diamond Member
Oct 9, 1999
5,138
3,726
136
Originally posted by: CTho9305
The misinformation in this thread is horrible.

The speed of light is related to computation in that signals have to travel some distance across a chip for the various pieces to communicate. It's often convenient for the signals to take at most a full clock cycle worth of time to do this, which means a chip whose longest length is 1cm could be clocked no higher than 30 GHz. In reality, you would allow paths that took more than one clock cycle if you wanted to run faster. Of course, you'd need to allocate some of the cycle for the sender to create the light pulse and the receiver to receive it, and we don't even have optical interconnect on chips yet anyway. If you want to do any logic, that comes out of your time budget too. The wavelength of light doesn't really come into play here (I'm sure it matters... multi-meter radio wavelengths would probably be a bad idea... but it's not the stuff BitByBit was talking about).

Now, as I said, we don't currently use light for sending signals. We don't even use the fastest way of signaling electrically. Wires are effectively chains of resistors and capacitors; you send a 1 by charging the capacitors up, and a 0 by discharging them. An easy way to think about this is a trough or wide pipe, which you fill with water or empty of water (the capacitance is basically how much water it takes to raise/lower the water level, and the resistance is whatever slows the flow of the water into / through the trough). With this method, you can currently send a signal all the way across a modern CPU in a few cycles at clock frequencies of a few GHz. The signals propagate incredibly slowly relative to the speed of light.

A faster way to send signals (which I don't understand well) is transmission line signaling - imagine the same trough of water filled up half way; to send a signal you just create a wave on the water's surface. Signaling this way gets you much closer to the speed of light, but unfortunately the sending and receiving circuits are physically gigantic relative to the normal stuff, and it's more complex (for example, you have to worry about waves being reflected from the receiver, just like waves in a trough of water). If you read and understand this paper, you'll know more about this than I do (so explain it to me :)). Look at Figure 3 in the PDF to understand just how large this stuff is compared to normal systems.

Getting back to the wavelength of light, it comes into play when you're manufacturing chips, rather than when you're using them. Basically, you can draw nice and sharp lines and shapes with light as long as the wavelength of the light is smaller than the shapes you're trying to draw. When you want to draw something smaller than the wavelength of the light, you run into problems - it gets hard to control widths, and you have to resort to a lot of tricks to get a useful result. Right now, the industry draws 65/45nm features on chips using 193nm light. The nice rectangles we draw when doing layout show up as blobs. Intel has spent years and huge amounts of money trying to get the lithography process working with 13nm light, but I guess the results haven't been great.

Keep in mind that the speed of light is a measure of how far light goes in a given amount of time, and it's distinct from the frequency of the clock in a CPU. It's handy to call the "clock frequency" of a chip the "clock speed", but the word "speed" is being used differently in that sense than it is when we're talking about the "speed of light". "Clock speed" is a count of clock ticks in a second; "light speed" is measuring the distance light travels in a second.

Coming to the frequency of light, it's not really related to the clock frequency either. Radio waves have frequencies in the range of 1MHz (for AM radio), 100MHz (for FM radio), to 2.4GHz (wireless networks, microwaves) and higher. Visible light is in the hundred-terahertz+ range. UV light, x-rays, and gamma rays go from the petahertz to the exahertz ranges. We can generate all of these waves easily, but they're really unrelated to the frequency of the clock in a CPU.

The fastest clock frequency (assuming we want to send the clock as an electrical signal rather than light) we could theoretically generate right now is probably around 500GHz (based on IBM's most recent BJTs), but that's not useful for a CPU, because you need to do a certain amount of work in a clock cycle, and a single gate takes many picoseconds to produce its output. You'd also have a heck of a hard time distributing the 500GHz clock around the chip for reasons discussed above.


Whatever. Someone at the beginning of the thread was talking about the possibility of using light in processors.


 

Black69ta

Junior Member
Jul 12, 2006
14
0
0
Intel demonstrated an on-chip silicon laser for communications, probably for the FSB first in desktop environments. Basically it worked like an FM radio: there is a micro-laser creating a light channel between chips, and another laser pumps the "carrier" laser to modulate it, much like FM radio modulates a radio frequency to transmit audio. This is mainly to provide higher bandwidth and eliminate the distance bottlenecks associated with copper traces. Think copper telephone vs. fiber optics, only micronized.
 

Black69ta

Junior Member
Jul 12, 2006
14
0
0
On a side note, everyone told Chuck Yeager that no one could fly faster than the speed of sound, but he did, and now jets fly up to about 3.5x the speed of sound. It's all a matter of technology. Moving really fast from point A to point B isn't the only way to get there quickly; make point B closer to point A and you get there sooner too. When I was a kid I read a book about the 5th dimension. Basically, the 1st is two points in space defining a line, a 1D object; the 2nd is another point defining an area or plane, a 2D object; the 3rd is a fourth point taking that flat surface and giving it depth, hence a 3D object. The 4th dimension adds time to the mix. And the 5th described a way to fold space so that A and B touched, "cheating" on distance.
Star Trek, I think, used this premise: their "warp" drive "folded" the space around the ship, making it appear to be travelling faster when it was actually only travelling a shorter distance.
 

Gatt

Member
Mar 30, 2005
81
0
0
Originally posted by: zky
The CPU (speed) will never be sufficient compared to the software or games!

Let's say one day a new 100-core CPU comes equipped with a 20,000MHz FSB and 100GB of L2; one might expect there will be some 3000-bit OS and some games that require that amount of power to be playable.

As more powerful CPUs & GPUs are made available, there will always be games and software that require their power to run.

To make a long story short: take Oblivion or F.E.A.R. and shove it onto a rig featuring a Pentium II and a TNT2, and well, you don't really need a PhD to figure this out. As a gamer you already foresee the answer.

Actually, this isn't true.

First, budgets are already a serious problem; they're so high that if a game doesn't sell, the studio is pretty much finished.

Second, there's only so far you need to take it and no further. Photorealistic is the max, and then it's done. Physics, while currently processor-hungry, has a much nearer end point than graphics. Photorealism itself is largely unnecessary, as modeling each leaf and its independent movement is an absolute waste of time and funding for what it would add to a game.

If games require increasing horsepower for 10 more years, it'll be a miracle. Odds are good that budget issues and reaching the point of "it adds nothing to the game" will arrive within 5 years.

I've gotta reiterate. Sure, we could pursue it until we can render every blade of grass and bug independently with its own physics, but what would be the point? It adds absolutely nothing to the game, but eats enormous amounts of manpower and cash.

 

erikvanvelzen

Junior Member
Jan 1, 2005
22
0
0
Actually it is possible to transmit data faster than the speed of light (without bending space).

Imagine a pipe inside another pipe. Now you give the inner pipe a push on one end. Assuming that the material is completely rigid, the other end will receive the push instantaneously. The data is transported faster than the speed of light.
 

Roguestar

Diamond Member
Aug 29, 2006
6,045
0
0
Originally posted by: erikvanvelzen
Actually it is possible to transmit data faster than the speed of light (without bending space).

Imagine a pipe inside another pipe. Now you give the inner pipe a push on one end. Assuming that the material is completely rigid, the other end will receive the push instantaneously. The data is transported faster than the speed of light.

Except that's completely wrong, because it takes time for the molecules at your end to bump into the molecules beside them and so on and so on until it reaches the other end and the wave has been transmitted.
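Roguestar's point can be put in numbers: a push travels down a rod at the speed of sound in the material, which for a thin rod is sqrt(E/ρ). The steel values below are textbook approximations, not figures from the thread:

```python
import math

E = 200e9      # Young's modulus of steel, Pa (approx.)
RHO = 7850.0   # density of steel, kg/m^3 (approx.)
C_LIGHT = 3.0e8

v_sound = math.sqrt(E / RHO)  # longitudinal wave speed, ~5,000 m/s
print(f"push propagates at ~{v_sound:,.0f} m/s")
print(f"that is ~{C_LIGHT / v_sound:,.0f}x slower than light")

# Time for the far end of a 1 km rod to feel the push:
print(f"1 km rod: {1000 / v_sound * 1000:.0f} ms")
```

So far from being instantaneous, the mechanical signal is tens of thousands of times slower than light.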
 

Cogman

Lifer
Sep 19, 2000
10,284
138
106
Originally posted by: erikvanvelzen
Actually it is possible to transmit data faster than the speed of light (without bending space).

Imagine a pipe inside another pipe. Now you give the inner pipe a push on one end. Assuming that the material is completely rigid, the other end will receive the push instantaneously. The data is transported faster than the speed of light.

Agreed with Roguestar; it is impossible (with current technology) to break the speed of light. And like the guy above me said, on a molecular level, when you push that rod, even if it is only a rod of 2 chemically bonded atoms, it will still take a bit of time for the other end to move - just enough time to make it slower than the speed of light, not instantaneous.
 

Roguestar

Diamond Member
Aug 29, 2006
6,045
0
0
In fact, his suggestion is actually a good way of visualising what happens in current processors. When a current passes through a wire and one electron passes out the end at point B, it's not the very first electron energised at point A that comes out at point B: each electron passes its energy along to the next, and to the next, until the wave of transmission reaches the end, where an electron at point B pops out. The speed of an actual electron physically moving is very, very small, because the electrons themselves cover such small distances; it is not the electrons that move fast but the current, the transmission of energy.
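How slowly the electrons themselves move can be estimated with the standard drift-velocity formula v = I / (n·q·A) (the copper values below are textbook numbers, and the 1 A / 1 mm² wire is an arbitrary example):

```python
# Electron drift velocity in a copper wire: v = I / (n * q * A).

I = 1.0        # current, amperes (example)
N = 8.5e28     # free-electron density of copper, per m^3 (textbook value)
Q = 1.602e-19  # electron charge, coulombs
A = 1.0e-6     # wire cross-section, m^2 (a 1 mm^2 wire)

v_drift = I / (N * Q * A)
print(f"drift velocity: {v_drift * 1000:.3f} mm/s")
# A fraction of a millimetre per second - yet the signal (the wave of
# energy transfer) crosses the wire at an appreciable fraction of c.
```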