AnandTech Forums > Hardware and Technology > CPUs and Overclocking
Old 06-20-2012, 06:56 AM   #51
bronxzv
Senior Member
 
Join Date: Jun 2011
Posts: 406
Default

Quote:
Originally Posted by PaGe42 View Post
CPU's are advancing on an exponential scale (a doubling every two years), whereas software development is linear (adding x million lines of code every year).
The number of lines of code isn't the issue here: you can change a single constant in your code (for example, the number of iterations of a critical loop) and make your application trillions of times more CPU-demanding.
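(A toy sketch of that point, in hypothetical Python rather than anything from this thread: the function below stays the same handful of source lines no matter what constant is passed in, yet its CPU demand scales with the constant.)

```python
def work(iterations):
    # Same static source regardless of the constant passed in;
    # only the dynamic instruction count (loop trips) changes.
    total = 0
    for i in range(iterations):
        total += i * i
    return total

cheap = work(10)  # finishes instantly
# work(10**12) would be the identical program, ~10^11x more demanding
```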
bronxzv is offline   Reply With Quote
Old 06-20-2012, 06:58 AM   #52
beginner99
Platinum Member
 
Join Date: Jun 2009
Posts: 2,219
Default

Quote:
Originally Posted by pantsaregood View Post
No, it isn't. Windows 7 is more like a service pack than a new OS iteration. Explorer got some functionality upgrades, but the kernel is hardly different. The requirements for Vista, 7, and 8 are the same because the kernel hasn't undergone a major change.

Windows 2000 to XP was a more significant change than Vista to 7.

As for your Vista machine that magically became fast: Go put Vista SP2 with the feature pack on it. Use updated drivers. Observe that it runs just as well as 7. Bench it if you must.
Agree. I had a cheap off-the-shelf Vista PC for about a year. The difference between Vista and 7 is marginal, especially in terms of UI. Said PC also had low specs, but it worked fine for daily usage. I never understood why Vista = ultra bad and 7 = super good when they are so similar.
beginner99 is offline   Reply With Quote
Old 06-20-2012, 07:13 AM   #53
PaGe42
Junior Member
 
Join Date: Jun 2012
Posts: 13
Default

Quote:
Originally Posted by bronxzv View Post
The number of lines of code isn't the issue here: you can change a single constant in your code (for example, the number of iterations of a critical loop) and make your application trillions of times more CPU-demanding.
And you won't sell a single copy. Of course you can bring down any CPU with special software, but that is not my point. My point is that by nature software advances at a slower rate than hardware. And that, for most cases, software is no longer restrained by hardware.
PaGe42 is offline   Reply With Quote
Old 06-20-2012, 07:44 AM   #54
itsmydamnation
Senior Member
 
Join Date: Feb 2011
Posts: 617
Default

Quote:
Originally Posted by PaGe42 View Post
And you won't sell a single copy. Of course you can bring down any CPU with special software, but that is not my point. My point is that by nature software advances at a slower rate than hardware. And that, for most cases, software is no longer restrained by hardware.
If CPUs are advancing so fast, why do we need to go to wider and wider vectors to increase throughput (adding complexity)? CPU performance has plateaued and software complexity has increased; that's why your hardware isn't being pushed. It's hard to scale to 256 bits' worth of 32-bit data, and it's hard to utilize high core counts.

Last edited by itsmydamnation; 06-20-2012 at 07:46 AM.
itsmydamnation is offline   Reply With Quote
Old 06-20-2012, 07:57 AM   #55
PaGe42
Junior Member
 
Join Date: Jun 2012
Posts: 13
Default

Quote:
Originally Posted by itsmydamnation View Post
If CPUs are advancing so fast, why do we need to go to wider and wider vectors to increase throughput (adding complexity)? CPU performance has plateaued and software complexity has increased; that's why your hardware isn't being pushed. It's hard to scale to 256 bits' worth of 32-bit data, and it's hard to utilize high core counts.
So essentially you agree: software is not keeping up with advancing hardware.

Yes, it is hard to scale the software. If it has taken you ten years to fill current hardware, and hardware capacity then doubles two years later, you would have only those two years to add the same amount of software to fill it up again. And it gets worse after that...
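(A hedged sketch of the arithmetic behind this argument, in illustrative Python; `hardware` and `software` are made-up model functions, not measurements.)

```python
def hardware(years):
    # Exponential: capacity doubles every two years.
    return 2 ** (years / 2)

def software(years, rate=1.0):
    # Linear: a fixed amount of code added per year.
    return 1.0 + rate * years

# After 20 years the gap is already enormous:
# hardware(20) == 1024.0 vs software(20) == 21.0
```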
PaGe42 is offline   Reply With Quote
Old 06-20-2012, 08:08 AM   #56
itsmydamnation
Senior Member
 
Join Date: Feb 2011
Posts: 617
Default

Quote:
Originally Posted by PaGe42 View Post
So essentially you agree: software is not keeping up with advancing hardware.

Yes, it is hard to scale the software. If it has taken you ten years to fill current hardware, and hardware capacity then doubles two years later, you would have only those two years to add the same amount of software to fill it up again. And it gets worse after that...
It's only advancement if you consider more of the same to be advancement, which I don't, so no, I don't agree with you. Processors hit the wall first (where is my 10 GHz NetBurst, damnit! The P4 was the writing on the wall, really), so now they just bolt more of the same on, which isn't advancement.
itsmydamnation is offline   Reply With Quote
Old 06-20-2012, 08:17 AM   #57
bronxzv
Senior Member
 
Join Date: Jun 2011
Posts: 406
Default

Quote:
Originally Posted by PaGe42 View Post
My point is that by nature software advances at a slower rate than hardware.
AFAIK the number of new lines of source code isn't a well-respected metric of "software advances"; you are basically comparing two completely unrelated things.
bronxzv is offline   Reply With Quote
Old 06-20-2012, 08:25 AM   #58
PaGe42
Junior Member
 
Join Date: Jun 2012
Posts: 13
Default

Quote:
Originally Posted by itsmydamnation View Post
It's only advancement if you consider more of the same to be advancement, which I don't, so no, I don't agree with you. Processors hit the wall first (where is my 10 GHz NetBurst, damnit! The P4 was the writing on the wall, really), so now they just bolt more of the same on, which isn't advancement.
It may not be the advancement you are after, but it is an advancement nonetheless. Better yet, this is exponential advancement, whereas higher frequencies are just linear. And it is software's task to take advantage of the hardware progress. And that takes (linear) time.
PaGe42 is offline   Reply With Quote
Old 06-20-2012, 08:29 AM   #59
PaGe42
Junior Member
 
Join Date: Jun 2012
Posts: 13
Default

Quote:
Originally Posted by bronxzv View Post
AFAIK the number of new lines of source code isn't a well-respected metric of "software advances"; you are basically comparing two completely unrelated things.
If you see "software advances" as better algorithms, you are right. But when was the last time a new version of a software product was actually smaller than the previous one (in terms of lines of code)? Software advances mainly by implementing more algorithms. And, again, this is linear progress, which in the end will be outrun by exponential hardware advances.
PaGe42 is offline   Reply With Quote
Old 06-20-2012, 08:49 AM   #60
itsmydamnation
Senior Member
 
Join Date: Feb 2011
Posts: 617
Default

Quote:
Originally Posted by PaGe42 View Post
It may not be the advancement you are after, but it is an advancement nonetheless. Better yet, this is exponential advancement, whereas higher frequencies are just linear. And it is software's task to take advantage of the hardware progress. And that takes (linear) time.

The limitation is the hardware's inability to provide schemes to extract that performance. If peak performance is all you care about, then CPUs are nothing compared to GPUs.


Depending on how well gather works in AVX2, that might be the first really big performance improvement we have had in a while, as performance should increase with minimal added software complexity (gather is meant to take away a lot of the hard parts of vectorizing code).

Last edited by itsmydamnation; 06-20-2012 at 08:51 AM.
itsmydamnation is offline   Reply With Quote
Old 06-20-2012, 08:49 AM   #61
bronxzv
Senior Member
 
Join Date: Jun 2011
Posts: 406
Default

Quote:
Originally Posted by PaGe42 View Post
If you see "software advances" as better algorithms, you are right. But when was the last time a new version of a software product was actually smaller than the previous one (in terms of lines of code)?
As already explained, the number of lines of source code is *completely unrelated to the required CPU performance*: you can have a 10-million-line office application that is happy with a low-end CPU, and still happy when it reaches 20 million lines, alongside a 100-line compute kernel that requires a 1000-node compute cluster to be useful.

The key reason is the behavior of loops: the number of retired instructions is 100% orthogonal to the number of instructions in the program.
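(To make that orthogonality concrete, a toy Python illustration, not from the thread: one function with a large, mostly dead body executes almost nothing, while a tiny one executes work proportional to its input.)

```python
def big_but_cheap():
    # Stands in for the 10-million-line office app: lots of static
    # code, but on a typical path almost none of it executes.
    executed = 0
    mode = "idle"
    if mode == "busy":            # dead on this path
        for _ in range(10**9):
            executed += 1
    executed += 1
    return executed

def tiny_but_hot(n):
    # Stands in for the 100-line kernel: a few source lines whose
    # executed-statement count grows without bound with n.
    executed = 0
    for _ in range(n):
        executed += 1
    return executed
```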
bronxzv is offline   Reply With Quote
Old 06-20-2012, 09:24 AM   #62
Magic Carpet
Golden Member
 
Magic Carpet's Avatar
 
Join Date: Oct 2011
Posts: 1,836
Default

Today? Absolutely.

I remember when Windows XP came out, the "affordable" hardware was so far behind. I was even constantly running out of disk space (the balloon 200 MB warning was my best friend), to say nothing of the lack of RAM and CPU power. Things have improved drastically. Now you can build a ~$300 computer that can run pretty much anything "general" with ease.

Last edited by Magic Carpet; 06-20-2012 at 09:26 AM.
Magic Carpet is offline   Reply With Quote
Old 06-20-2012, 09:25 AM   #63
ninaholic37
Senior Member
 
ninaholic37's Avatar
 
Join Date: Apr 2012
Posts: 643
Default

Quote:
Originally Posted by PaGe42 View Post
It may not be the advancement you are after, but it is an advancement nonetheless. Better yet, this is exponential advancement, whereas higher frequencies are just linear. And it is software's task to take advantage of the hardware progress. And that takes (linear) time.
I think calling it "exponential hardware advancement" is sort of misleading, because it doesn't always turn out to be possible to gain much, depending on the application/case (so it would be more accurate to say "variable advancement," or "0-to-exponential advancement, minus the overhead/extra hardware required"). If it were truly "exponential," every instruction would run exponentially faster (IPC plus frequency, to the power x). It can get pretty ugly when you try to "measure" it.
ninaholic37 is offline   Reply With Quote
Old 06-20-2012, 09:26 AM   #64
PaGe42
Junior Member
 
Join Date: Jun 2012
Posts: 13
Default

Going back to the original post:
Quote:
Originally Posted by pantsaregood View Post
It is 2012. Windows 7 and Windows 8 will both run quite well on hardware that is, by technological standards, ancient. A 2.66 GHz Pentium 4, a GeForce FX 5200, and 1.5 GB of RAM have no real trouble with the OS. This is very different from what would have been expected ten years ago. Trying to run Windows XP on a mid-range Pentium MMX-based PC would be a nightmare. ... Why, though? Why can hardware last so effectively now?
This is what I'm trying to answer. Windows has advanced since XP in a linear fashion, meaning the amount of code has at most doubled. Hardware has advanced exponentially since then, so it is 10 times or so more capable.

I'm not disagreeing with your arguments, just trying to make a different point.
PaGe42 is offline   Reply With Quote
Old 06-20-2012, 09:40 AM   #65
PaGe42
Junior Member
 
Join Date: Jun 2012
Posts: 13
Default

Quote:
Originally Posted by ninaholic37 View Post
I think calling it "exponential hardware advancement" is sort of misleading, because it doesn't always work out to be possible to gain much, depending on the application/case (so it'd be more accurate to say "variable advancement" or "0 to exponential advancement, minus overhead/extra hardware required"). If it were true "exponential" it would work out to be exponentially faster for every instruction (IPC + frequency to the power x faster). It can get pretty ugly when you try to "measure" it.
Exponential means powers of 2: going from 1 to 2, 4, 8, etc. Exponentially faster for every instruction, and then also doubling the core count, would make it more than exponential. I'm not claiming that.

Advancement from the 8088 to the P4 has basically been exponential, both in number of transistors and in performance. And even now, going from 1 core to 2, 4, 8, progress is exponential. And trading additional cores for GPUs or vector instructions still continues the exponential trend. Whether software can keep up is a different matter. And in fact I'm claiming it has difficulties...
PaGe42 is offline   Reply With Quote
Old 06-20-2012, 09:44 AM   #66
itsmydamnation
Senior Member
 
Join Date: Feb 2011
Posts: 617
Default

where is my exponential perf increase from Core2 quad to now?
itsmydamnation is offline   Reply With Quote
Old 06-20-2012, 09:48 AM   #67
bronxzv
Senior Member
 
Join Date: Jun 2011
Posts: 406
Default

Quote:
Originally Posted by PaGe42 View Post
Exponential means powers of 2: going from 1 to 2, 4, 8, etc. Exponentially faster for every instruction, and then also doubling the core count, would make it more than exponential
Both 2^t and 4^t are exponentials, just with different growth rates.
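(In illustrative Python: both sequences multiply by a fixed factor each step, which is exactly what makes them exponential; 4^t is just 2^t squared.)

```python
a = [2 ** t for t in range(6)]   # 1, 2, 4, 8, 16, 32
b = [4 ** t for t in range(6)]   # 1, 4, 16, 64, 256, 1024

# A constant per-step growth factor is the hallmark of exponential growth:
assert all(a[t + 1] // a[t] == 2 for t in range(5))
assert all(b[t + 1] // b[t] == 4 for t in range(5))
# Same family, different rate: 4^t = (2^t)^2 = 2^(2t)
assert all(b[t] == (2 ** t) ** 2 for t in range(6))
```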
bronxzv is offline   Reply With Quote
Old 06-20-2012, 09:58 AM   #68
PaGe42
Junior Member
 
Join Date: Jun 2012
Posts: 13
Default

Quote:
Originally Posted by bronxzv View Post
Both 2^t and 4^t are exponentials, just with different growth rates.
Yes, I know. I was thinking that 2^2^t was different from 4^t. My mistake.
PaGe42 is offline   Reply With Quote
Old 06-20-2012, 10:00 AM   #69
PaGe42
Junior Member
 
Join Date: Jun 2012
Posts: 13
Default

Quote:
Originally Posted by itsmydamnation View Post
where is my exponential perf increase from Core2 quad to now?
If you include the GPU cores from an Ivy Bridge processor, you're pretty much there.
PaGe42 is offline   Reply With Quote
Old 06-20-2012, 10:17 AM   #70
alyarb
Platinum Member
 
Join Date: Jan 2009
Posts: 2,409
Default

What's the most compute-limited consumer application?

Video encoding? Maybe, but think of something that we don't have hardware acceleration for.
alyarb is offline   Reply With Quote
Old 06-20-2012, 10:36 AM   #71
itsmydamnation
Senior Member
 
Join Date: Feb 2011
Posts: 617
Default

Quote:
Originally Posted by PaGe42 View Post
If you include the GPU cores from an Ivy Bridge processor, you're pretty much there.
Now you really are clutching at straws. What about 64-bit binaries?
itsmydamnation is offline   Reply With Quote
Old 06-20-2012, 11:20 AM   #72
happysmiles
Senior Member
 
happysmiles's Avatar
 
Join Date: May 2012
Posts: 344
Default

More efficient coding is good for everyone! The days of poorly optimized software are hopefully coming to an end.
happysmiles is offline   Reply With Quote
Old 06-20-2012, 11:44 AM   #73
BenchPress
Senior Member
 
Join Date: Nov 2011
Posts: 392
Default

Quote:
Originally Posted by PaGe42 View Post
My point is that by nature software advances at a slower rate than hardware. And that, for most cases, software is no longer restrained by hardware.
Software is very much restrained by hardware. Not by its theoretical performance, but by the difficulty in extracting that performance.

Just a few years back there was a dramatic paradigm shift. Before it, developers didn't have to do a single thing to make their software run faster on newer hardware. The Pentium 4 scaled all the way from 1.3 GHz to 3.8 GHz! Then it hit a power consumption wall, but fortunately Intel was able to switch to the Core 2 architecture, which achieved higher IPC and still scaled from around 2 GHz to over 3 GHz. Developers still didn't have to do anything to benefit from this newer hardware. But then it all stagnated...

Multi-core dramatically increases the available computing power, but it's notoriously difficult to multi-thread software in a scalable way. It becomes quadratically harder to ensure that threads are interacting both correctly and efficiently. We need a breakthrough in technology to make it straightforward again for developers to take advantage of newer hardware. And Intel is stepping up to the plate by offering TSX in Haswell.

CPUs have also increased their theoretical performance through SIMD vector instructions. But again, up till now it has been notoriously difficult to take advantage of them, often requiring assembly code or at least equivalent knowledge, so the average developer hasn't benefited much. The breakthrough here is AVX2, again to be introduced in Haswell. It enables developers to write regular scalar code and have it automatically vectorized by the compiler. Previous SIMD instruction sets were not very suitable for auto-vectorization because they lacked gather support (parallel memory access) and certain vector equivalents of scalar instructions. Basically, AVX2 makes it possible to achieve high performance with low difficulty, the same way a GPU does, only fully integrated into the CPU, thus allowing the use of legacy programming languages.
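(A scalar model of what gather does, in hypothetical Python rather than real intrinsics: the point is that a table lookup from computed indices, which older SIMD instruction sets couldn't vectorize, becomes one "vector" load.)

```python
def gather(table, indices):
    # Scalar model of a SIMD gather: one logical load from several
    # non-contiguous, computed addresses (AVX2 does 8 lanes at once).
    return [table[i] for i in indices]

table = [x * x for x in range(256)]   # lookup table
idx = [3, 250, 17, 42, 8, 99, 0, 7]   # 8 "lanes", arbitrary order
lanes = gather(table, idx)            # one gather instead of 8 scalar loads
```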

So next year we'll witness a revolution in hardware technology, and the software will soon follow.
BenchPress is offline   Reply With Quote
Old 06-20-2012, 01:30 PM   #74
PaGe42
Junior Member
 
Join Date: Jun 2012
Posts: 13
Default

Hardware has run away from software in recent years. I don't need a quad core processor to run my web browser. I don't need 16 GB of RAM to run my word processor.

Sure, there are problems that require more processing power. But that is mostly a data problem. I don't need AVX2 or TSX to process 1000 elements. But all the hardware in the world is not enough to simulate the universe.
PaGe42 is offline   Reply With Quote
Old 06-20-2012, 02:27 PM   #75
pantsaregood
Senior Member
 
Join Date: Feb 2011
Posts: 900
Default

Quote:
Originally Posted by happysmiles View Post
more efficient coding is good for everyone! the days of poorly optimized software are hopefully coming to an end.
As long as Flash Player exists, this is wrong.
pantsaregood is offline   Reply With Quote
Powered by vBulletin® Version 3.8.7
Copyright ©2000 - 2014, vBulletin Solutions, Inc.