Are CPUs advancing faster than software requires?


bronxzv

Senior member
Jun 13, 2011
460
0
71
CPUs are advancing on an exponential scale (a doubling every two years), whereas software development is linear (adding x million lines of code every year).

The number of lines of code isn't the relevant factor here: you can change a single constant in your code (for example, the number of iterations of a critical loop) and make your application trillions of times more CPU demanding.
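
A minimal sketch of that point (my own example, not from the post): the program below stays the same handful of lines, and a single constant decides how much CPU time it burns.

    // Illustrative only: one constant controls the CPU demand, not the line count.
    #include <cstdint>
    #include <cstdio>

    // Raise ITERATIONS and the runtime grows with it, while the source stays the same size.
    constexpr std::uint64_t ITERATIONS = 1000000000ULL;

    int main() {
        std::uint64_t acc = 1;
        for (std::uint64_t i = 0; i < ITERATIONS; ++i)
            acc = acc * 6364136223846793005ULL + 1442695040888963407ULL; // cheap dependent work so the loop isn't removed
        std::printf("%llu\n", static_cast<unsigned long long>(acc));
        return 0;
    }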
 

beginner99

Diamond Member
Jun 2, 2009
5,320
1,767
136
No, it isn't. Windows 7 is more like a service pack than a new OS iteration. Explorer got some functionality upgrades, but the kernel is hardly different. The requirements for Vista, 7, and 8 are the same because the kernel hasn't undergone a major change.

Windows 2000 to XP was a more significant change than Vista to 7.

As for your Vista machine that magically became fast: Go put Vista SP2 with the feature pack on it. Use updated drivers. Observe that it runs just as well as 7. Bench it if you must.

Agree. I had a cheap off-the-shelf Vista PC for about a year. The difference between Vista and 7 is marginal, especially in terms of UI. Said PC also had low specs, but it worked fine for daily usage. I never understood why Vista = ultra bad and 7 = super good when they are so similar.
 

PaGe42

Junior Member
Jun 20, 2012
13
0
0
The number of lines of code isn't the relevant factor here: you can change a single constant in your code (for example, the number of iterations of a critical loop) and make your application trillions of times more CPU demanding.

And you won't sell a single copy. Of course you can bring down any CPU with special software, but that is not my point. My point is that by nature software advances at a slower rate than hardware. And that, for most cases, software is no longer restrained by hardware.
 

itsmydamnation

Diamond Member
Feb 6, 2011
3,079
3,916
136
And you won't sell a single copy. Of course you can bring down any CPU with special software, but that is not my point. My point is that by nature software advances at a slower rate than hardware. And that, for most cases, software is no longer restrained by hardware.

If CPUs are advancing so fast, why do we need to go to wider and wider vectors to increase throughput (adding complexity)? CPU performance has plateaued and software complexity has increased. That's why your hardware isn't being pushed: it's hard to scale to 256 bits' worth of 32-bit data, and it's hard to utilize high core counts.
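
To make "256 bits' worth of 32-bit data" concrete, here is a rough sketch (mine, not the poster's code): one AVX register holds eight 32-bit floats, so the wide path only pays off when the data is contiguous and there really are eight lanes of work available.

    #include <immintrin.h>

    // Adds two float arrays eight lanes (256 bits) at a time, with a scalar tail.
    void add_arrays(const float* a, const float* b, float* out, int n) {
        int i = 0;
        for (; i + 8 <= n; i += 8) {                  // 8 x 32-bit floats = 256 bits per iteration
            __m256 va = _mm256_loadu_ps(a + i);
            __m256 vb = _mm256_loadu_ps(b + i);
            _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
        }
        for (; i < n; ++i)                            // leftover elements handled one by one
            out[i] = a[i] + b[i];
    }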
 

PaGe42

Junior Member
Jun 20, 2012
13
0
0
If CPUs are advancing so fast, why do we need to go to wider and wider vectors to increase throughput (adding complexity)? CPU performance has plateaued and software complexity has increased. That's why your hardware isn't being pushed: it's hard to scale to 256 bits' worth of 32-bit data, and it's hard to utilize high core counts.

So essentially you agree: software is not keeping up with advancing hardware.

Yes, it is hard to scale the software. If it has taken you ten years to fill current hardware, and the hardware capacity then doubles two years later, you have only those two years to add the same amount of software to fill it up again. And it gets worse after that...
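
A toy model of that arithmetic (the numbers are mine and purely illustrative): if hardware capacity doubles every two years while software demand grows by a fixed amount per year, utilization keeps sliding.

    #include <cmath>
    #include <cstdio>

    int main() {
        const double start_capacity = 100.0;    // arbitrary units; assume software "fills" the hardware in year 0
        const double demand_per_year = 10.0;    // linear software growth
        double demand = start_capacity;

        for (int year = 0; year <= 10; year += 2) {
            double capacity = start_capacity * std::pow(2.0, year / 2.0);  // doubles every two years
            std::printf("year %2d: capacity %7.0f  demand %5.0f  utilization %5.1f%%\n",
                        year, capacity, demand, 100.0 * demand / capacity);
            demand += 2.0 * demand_per_year;    // two more years of linear growth
        }
        return 0;
    }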
 

itsmydamnation

Diamond Member
Feb 6, 2011
3,079
3,916
136
So essentially you agree: software is not keeping up with advancing hardware.

Yes, it is hard to scale the software. If it has taken you ten years to fill current hardware, and the hardware capacity then doubles two years later, you have only those two years to add the same amount of software to fill it up again. And it gets worse after that...

It's only advancement if you consider more of the same to be advancement, which I don't, so no, I don't agree with you. Processors hit the wall first (where is my 10 GHz NetBurst, damn it! The P4 was the writing on the wall, really), so now they just bolt more of the same on, which isn't advancement.
 

bronxzv

Senior member
Jun 13, 2011
460
0
71
My point is that by nature software advances at a slower rate than hardware.
AFAIK the number of new lines of source code isn't a well-respected metric of "software advances"; you are basically comparing two completely unrelated things.
 

PaGe42

Junior Member
Jun 20, 2012
13
0
0
It's only advancement if you consider more of the same to be advancement, which I don't, so no, I don't agree with you. Processors hit the wall first (where is my 10 GHz NetBurst, damn it! The P4 was the writing on the wall, really), so now they just bolt more of the same on, which isn't advancement.

It may not be the advancement you are after, but it is an advancement nonetheless. Better yet, this is exponential advancement, whereas increasing frequency is just linear. And it's software's task to take advantage of the hardware progress. And that takes (linear) time.
 

PaGe42

Junior Member
Jun 20, 2012
13
0
0
AFAIK the number of new lines of source code isn't a well-respected metric of "software advances"; you are basically comparing two completely unrelated things.

If you see "software advances" as better algorithms, you are right. But when was the last time a new version of a software product was actually smaller than the previous one (in terms of lines of code)? Software mainly advances by implementing more algorithms. And, again, this is linear progress, which in the end will be outrun by exponential hardware advances.
 

itsmydamnation

Diamond Member
Feb 6, 2011
3,079
3,916
136
It may not be the advancement you are after, but it is an advancement nonetheless. Better yet, this is exponential advancement, whereas increasing frequency is just linear. And it's software's task to take advantage of the hardware progress. And that takes (linear) time.


The limitation is the hardware's inability to provide schemes to extract that performance. If peak performance is all you care about, then CPUs are nothing compared to GPUs.


Depending on how well gather works in AVX2, that might be the first really big performance improvement we have had in a while, as performance should increase with minimal added software complexity (gather is meant to take away a lot of the hard parts of vectorizing code).
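
For reference, the gather being discussed looks roughly like this (my example names and data layout, not anything from the post): eight 32-bit loads from non-contiguous addresses in a single AVX2 instruction, which is exactly the indexed-access pattern that used to make vectorization hard.

    #include <immintrin.h>

    // Gathers table[idx[0]] .. table[idx[7]] into one 256-bit register.
    void lookup8(const float* table, const int* idx, float* out) {
        __m256i vidx = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(idx)); // eight 32-bit indices
        __m256  v    = _mm256_i32gather_ps(table, vidx, 4);                       // scale: 4 bytes per float
        _mm256_storeu_ps(out, v);
    }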
 

bronxzv

Senior member
Jun 13, 2011
460
0
71
If you see "software advances" as better algorithms, you are right. But when was the last time a new version of a software product was actually smaller than the previous one (in terms of lines of code)?

As already explained, the number of lines of source code is *completely unrelated to the required CPU performance*. You can have a 10-million-line office application that is happy with a low-end CPU and still happy when it reaches 20 million lines, and a 100-line compute kernel that requires a 1000-node compute cluster to be useful.

The key reason is the behavior of loops: the number of retired instructions is 100% orthogonal to the number of instructions in the program.
 

Magic Carpet

Diamond Member
Oct 2, 2011
3,477
234
106
Today? Absolutely.

I remember when Windows XP came out, the "affordable" hardware was so far behind. I was constantly running out of disk space (the 200 MB low-disk-space balloon warning was my best friend), to say nothing of the lack of RAM and CPU power. Things have improved drastically. Now you can build a ~$300 computer that runs pretty much anything "general" with ease :p
 

ninaholic37

Golden Member
Apr 13, 2012
1,883
31
91
It may not be the advancement you are after, but it is an advancement nonetheless. Better yet, this is exponential advancement, whereas increasing frequency is just linear. And it's software's task to take advantage of the hardware progress. And that takes (linear) time.
I think calling it "exponential hardware advancement" is sort of misleading, because it doesn't always work out to be possible to gain much, depending on the application/case (so it would be more accurate to say "variable advancement" or "zero-to-exponential advancement, minus overhead/extra hardware required"). If it were truly "exponential", it would work out to be exponentially faster for every instruction (IPC + frequency to the power x faster). It can get pretty ugly when you try to "measure" it.
 

PaGe42

Junior Member
Jun 20, 2012
13
0
0
Going back to the original post:
It is 2012. Windows 7 and Windows 8 will both run quite well on hardware that is, by technological standards, ancient. A 2.66 GHz Pentium 4, a GeForce FX 5200, and 1.5 GB of RAM have no real trouble with the OS. This is very different from what would have been expected ten years ago. Trying to run Windows XP on a mid-range Pentium MMX-based PC would be a nightmare. ... Why, though? Why can hardware last so effectively now?

This is what I'm trying to answer. Windows has advanced since XP in a linear fashion, meaning the amount of code has at most doubled. Hardware has advanced exponentially since then, so it is 10 times or so more capable.

I'm not disagreeing with your arguments, just trying to make a different point.
 

PaGe42

Junior Member
Jun 20, 2012
13
0
0
I think calling it "exponential hardware advancement" is sort of misleading, because it doesn't always work out to be possible to gain much, depending on the application/case (so it would be more accurate to say "variable advancement" or "zero-to-exponential advancement, minus overhead/extra hardware required"). If it were truly "exponential", it would work out to be exponentially faster for every instruction (IPC + frequency to the power x faster). It can get pretty ugly when you try to "measure" it.

Exponential means powers of 2, like going from 1 to 2, 4, 8, etc. Exponentially faster for every instruction, and then also doubling the core count, would make it more than exponential. I'm not claiming that.

Advancement from the 8088 to the P4 has basically been exponential, both in transistor count and in performance. And even now, going from 1 core to 2, 4, 8, progress is exponential. Trading additional cores for GPUs or vector instructions still continues the exponential trend. Whether software can keep up is a different matter, and in fact I'm claiming it has difficulties...
 

bronxzv

Senior member
Jun 13, 2011
460
0
71
Exponential means powers of 2, like going from 1 to 2, 4, 8, etc. Exponentially faster for every instruction, and then also doubling the core count, would make it more than exponential.
Both 2^t and 4^t are exponentials, just with different growth rates.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
What's the most compute-limited consumer application?

Video encoding? Maybe, but think of something that we don't have hardware acceleration for.
 

happysmiles

Senior member
May 1, 2012
340
0
0
More efficient coding is good for everyone! The days of poorly optimized software are hopefully coming to an end.
 

BenchPress

Senior member
Nov 8, 2011
392
0
0
My point is that by nature software advances at a slower rate than hardware. And that, for most cases, software is no longer restrained by hardware.
Software is very much restrained by hardware. Not by its theoretical performance, but by the difficulty in extracting that performance.

Just a few years back there was a dramatic paradigm shift. Before it, developers didn't have to do a single thing to make their software run faster on newer hardware. The Pentium 4 scaled all the way from 1.3 GHz to 3.8 GHz! Then it hit a power consumption wall, but fortunately Intel was able to switch to the Core 2 architecture, which achieved higher IPC and still scaled from around 2 GHz to over 3 GHz. Developers still didn't have to do anything to benefit from this newer hardware. But then it all stagnated...

Multi-core dramatically increases the available computing power, but it's notoriously difficult to multi-thread software in a scalable way. It becomes quadratically harder to ensure that threads are interacting both correctly and efficiently. We need a breakthrough in technology to make it straightforward again for developers to take advantage of newer hardware. And Intel is stepping up to the plate by offering TSX in Haswell.
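
For anyone curious what TSX looks like from the developer's side, here is a rough sketch of the RTM interface (the shared counter and fallback lock are my own placeholders; real code needs an RTM-capable CPU and a more careful fallback path):

    #include <immintrin.h>   // _xbegin/_xend; build with RTM support (e.g. -mrtm)
    #include <mutex>

    std::mutex fallback_lock;    // transactions can always abort, so a fallback is mandatory
    long shared_counter = 0;

    void increment() {
        unsigned status = _xbegin();                       // try to start a hardware transaction
        if (status == _XBEGIN_STARTED) {
            ++shared_counter;                              // executes transactionally, no lock taken
            _xend();                                       // commit
        } else {
            std::lock_guard<std::mutex> g(fallback_lock);  // abort path: fall back to the lock
            ++shared_counter;
        }
    }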

CPUs have also increased their theoretical performance by using SIMD vector instructions. But again, up till now it has been notoriously difficult to take advantage of that, often requiring developers to write assembly code or at least have equivalent knowledge. So the average developer hasn't benefited much from it. The breakthrough here is AVX2, again to be introduced in Haswell. It enables developers to write regular scalar code and have it automatically vectorized by the compiler. Previous SIMD instruction sets were not very suitable for auto-vectorization because they lacked gather support (parallel memory access) and certain vector equivalents of scalar instructions. Basically, AVX2 makes it possible to achieve high performance with low difficulty in the same way a GPU does, only fully integrated into the CPU, thus allowing the use of legacy programming languages.
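
The kind of code that describes is plain scalar C++ like the sketch below (my example): with AVX2 and gather available, a compiler invoked with something like -O3 -mavx2 can vectorize the indexed load itself instead of requiring hand-written intrinsics.

    // Plain scalar loop; the table[idx[i]] access is the part gather makes vectorizable.
    void scale_by_table(const float* table, const int* idx, float* out, int n) {
        for (int i = 0; i < n; ++i)
            out[i] = table[idx[i]] * 2.0f;
    }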

So next year we'll witness a revolution in hardware technology, and the software will soon follow.
 

PaGe42

Junior Member
Jun 20, 2012
13
0
0
Hardware has run away from software in recent years. I don't need a quad-core processor to run my web browser. I don't need 16 GB of RAM to run my word processor.

Sure, there are problems that require more processing power. But that is mostly a data problem. I don't need AVX2 or TSX to process a thousand elements. But all the hardware in the world is not enough to simulate the universe.