Are CPUs advancing faster than software requires?

Discussion in 'CPUs and Overclocking' started by pantsaregood, Jun 17, 2012.

  1. bronxzv

    bronxzv Senior member

    Joined:
    Jun 13, 2011
    Messages:
    494
    Likes Received:
    0
    The number of lines of code isn't involved here. You can change a single constant in your code (for example, the number of iterations of a critical loop) and make your application trillions of times more CPU-demanding.
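    A minimal sketch of that point, assuming nothing beyond standard C++ (the constant and the arithmetic are placeholders): the CPU cost of this program is set by one number, not by how many lines of source it has.

    ```cpp
    #include <cstdio>

    // Hypothetical knob: raise this single constant and the same few lines of
    // code demand arbitrarily more CPU time.
    const long long ITERATIONS = 1000000000LL; // one billion loop passes

    int main() {
        double sum = 0.0;
        for (long long i = 0; i < ITERATIONS; ++i) {
            sum += 1.0 / (1.0 + static_cast<double>(i)); // some work per pass
        }
        std::printf("%f\n", sum);
        return 0;
    }
    ```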
     
  2. beginner99

    beginner99 Diamond Member

    Joined:
    Jun 2, 2009
    Messages:
    3,443
    Likes Received:
    207
    Agreed. I had a cheap off-the-shelf Vista PC for about a year. The difference between Vista and 7 is marginal, especially in terms of UI. That PC also had low specs, but it worked fine for daily usage. I never understood why Vista = ultra bad and 7 = super good when they are so similar.
     
  3. PaGe42

    PaGe42 Junior Member

    Joined:
    Jun 20, 2012
    Messages:
    13
    Likes Received:
    0
    And you won't sell a single copy. Of course you can bring down any CPU with special software, but that is not my point. My point is that, by nature, software advances at a slower rate than hardware. And that, in most cases, software is no longer constrained by hardware.
     
  4. itsmydamnation

    itsmydamnation Golden Member

    Joined:
    Feb 6, 2011
    Messages:
    1,565
    Likes Received:
    504
    If CPUs are advancing so fast, why do we need to go to wider and wider vectors to increase throughput (adding complexity)? Per-core CPU performance has plateaued while software complexity has increased. That's why your hardware isn't being pushed: it's hard to scale code to 256 bits' worth of 32-bit data, and it's hard to utilize high core counts.
     
    #54 itsmydamnation, Jun 20, 2012
    Last edited: Jun 20, 2012
  5. PaGe42

    PaGe42 Junior Member

    Joined:
    Jun 20, 2012
    Messages:
    13
    Likes Received:
    0
    So essentially you agree: software is not keeping up with advancing hardware.

    Yes, it is hard to scale the software. If it has taken you ten years to fill current hardware and the hardware capacity then doubles two years later, you have only those two years to add the same amount of software to fill it up again. And it gets worse after that...
     
  6. itsmydamnation

    itsmydamnation Golden Member

    Joined:
    Feb 6, 2011
    Messages:
    1,565
    Likes Received:
    504
    It's only advancement if you consider more of the same an advancement, which I don't, so no, I don't agree with you. Processors hit the wall first (where is my 10 GHz NetBurst, damnit? The P4 was the writing on the wall, really), so now they just bolt more of the same on, which isn't advancement.
     
  7. bronxzv

    bronxzv Senior member

    Joined:
    Jun 13, 2011
    Messages:
    494
    Likes Received:
    0
    AFAIK the number of new lines of source code isn't a well-respected metric of "software advances"; you are basically comparing two completely unrelated things.
     
  8. PaGe42

    PaGe42 Junior Member

    Joined:
    Jun 20, 2012
    Messages:
    13
    Likes Received:
    0
    It may not be the advancement you are after, but it is an advancement nonetheless. Better yet, this is exponential advancement, whereas higher frequencies are just linear. And it's software's task to take advantage of that hardware progress. And that takes (linear) time.
     
  9. PaGe42

    PaGe42 Junior Member

    Joined:
    Jun 20, 2012
    Messages:
    13
    Likes Received:
    0
    If you see "software advances" as better algorithms, you are right. But when was the last time a new version of a software product was actually smaller than the previous one (in terms of lines of code)? Their advancement is mainly through implementing more algorithms. And, again, this is linear progress, which in the end will be outrun by exponential hardware advances.
     
  10. itsmydamnation

    itsmydamnation Golden Member

    Joined:
    Feb 6, 2011
    Messages:
    1,565
    Likes Received:
    504

    The limitation is the hardware's inability to provide schemes to extract that performance. If peak performance is all you care about, then CPUs are nothing compared to GPUs.


    Depending on how well gather works in AVX2, that might be the first really big performance improvement we've had in a while, as performance should increase with minimal added software complexity (gather is meant to take away a lot of the hard stuff in vectorizing code).
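    For what it's worth, here is a minimal sketch of what an AVX2 gather does, using the documented _mm256_i32gather_ps intrinsic (the table and indices are made up for illustration). It needs an AVX2 compiler flag (e.g. -mavx2) and a Haswell-class CPU to actually run:

    ```cpp
    #include <immintrin.h>
    #include <cstdio>

    int main() {
        float table[16];
        for (int i = 0; i < 16; ++i) table[i] = static_cast<float>(i) * 10.0f;

        // Eight arbitrary (non-contiguous) indices into the table.
        __m256i idx = _mm256_setr_epi32(0, 3, 5, 7, 2, 11, 13, 15);

        // One instruction loads all eight scattered elements into a 256-bit
        // register, the access pattern that previously forced scalar code or
        // manual shuffling. Scale is in bytes (sizeof(float) == 4).
        __m256 gathered = _mm256_i32gather_ps(table, idx, 4);

        float out[8];
        _mm256_storeu_ps(out, gathered);
        for (int i = 0; i < 8; ++i) std::printf("%.1f ", out[i]);
        std::printf("\n");
        return 0;
    }
    ```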
     
    #60 itsmydamnation, Jun 20, 2012
    Last edited: Jun 20, 2012
  11. bronxzv

    bronxzv Senior member

    Joined:
    Jun 13, 2011
    Messages:
    494
    Likes Received:
    0
    As already explained, the number of lines of source code is *completely unrelated to the required CPU performance*. You can have a 10-million-line office application that is happy with a low-end CPU, and still happy when it grows to 20 million lines, while a 100-line compute kernel may require a 1000-node compute cluster to be useful.

    The key reason is the behavior of loops: the number of retired instructions is 100% orthogonal to the number of instructions in the program.
     
  12. Magic Carpet

    Magic Carpet Diamond Member

    Joined:
    Oct 2, 2011
    Messages:
    3,107
    Likes Received:
    5
    Today? Absolutely.

    I remember when Windows XP came out, the "affordable" hardware was so far behind. I was even constantly running out of disk space (the 200 MB balloon warning was my best friend), not to mention the lack of RAM and CPU power. Things have improved drastically. Now you can build a ~$300 computer that runs pretty much anything "general" with ease :p
     
    #62 Magic Carpet, Jun 20, 2012
    Last edited: Jun 20, 2012
  13. ninaholic37

    ninaholic37 Golden Member

    Joined:
    Apr 13, 2012
    Messages:
    1,842
    Likes Received:
    19
    I think calling it "exponential hardware advancement" is somewhat misleading, because it doesn't always work out to be possible to gain much, depending on the application/case (so it would be more accurate to say "variable advancement" or "0 to exponential advancement, minus overhead/extra hardware required"). If it were truly "exponential", it would work out to be exponentially faster for every instruction (IPC + frequency to the power of x faster). It can get pretty ugly when you try to "measure" it.
     
  14. PaGe42

    PaGe42 Junior Member

    Joined:
    Jun 20, 2012
    Messages:
    13
    Likes Received:
    0
    Going back to the original post:
    This is what I'm trying to answer. Windows has advanced since XP in a linear fashion, meaning the amount of code has at most doubled. Hardware has advanced exponentially since then, so it is 10 times or so more capable.

    I'm not disagreeing with your arguments, just trying to make a different point.
     
  15. PaGe42

    PaGe42 Junior Member

    Joined:
    Jun 20, 2012
    Messages:
    13
    Likes Received:
    0
    Exponential means repeated doubling: going from 1 to 2, 4, 8, and so on. Exponentially faster for every instruction and then also doubling the core count would make it more than exponential; I'm not claiming that.

    Advancement from the 8088 to the P4 has basically been exponential, both in terms of the number of transistors and of performance. And even now, going from 1 core to 2, 4, 8, the progress is exponential. Trading additional cores for GPUs or vector instructions still continues the exponential trend. Whether software can keep up is a different matter; in fact, I'm claiming it has difficulties...
     
  16. itsmydamnation

    itsmydamnation Golden Member

    Joined:
    Feb 6, 2011
    Messages:
    1,565
    Likes Received:
    504
    Where is my exponential performance increase from the Core 2 Quad to now?
     
  17. bronxzv

    bronxzv Senior member

    Joined:
    Jun 13, 2011
    Messages:
    494
    Likes Received:
    0
    Both 2^t and 4^t are exponentials, just with different growth rates.
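    Spelled out as a worked step (my restatement, not a quote from anyone above):

    ```latex
    4^t = (2^2)^t = 2^{2t}
    ```

    so 4^t is still a single exponential in t, just with twice the growth rate of 2^t; something like 2^{2^t} (a double exponential) would be a genuinely different class of growth.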
     
  18. PaGe42

    PaGe42 Junior Member

    Joined:
    Jun 20, 2012
    Messages:
    13
    Likes Received:
    0
    Yes, I know. I was thinking that 2^2^t was different from 4^t. My mistake.
     
  19. PaGe42

    PaGe42 Junior Member

    Joined:
    Jun 20, 2012
    Messages:
    13
    Likes Received:
    0
    If you include the GPU cores from an Ivy Bridge processor, you're pretty much there.
     
  20. alyarb

    alyarb Platinum Member

    Joined:
    Jan 25, 2009
    Messages:
    2,445
    Likes Received:
    0
    What's the most compute-limited consumer application?

    Video encoding? Maybe, but think of something that we don't have hardware acceleration for.
     
  21. itsmydamnation

    itsmydamnation Golden Member

    Joined:
    Feb 6, 2011
    Messages:
    1,565
    Likes Received:
    504
    Now you really are clutching at straws. What about 64-bit binaries?
     
  22. happysmiles

    happysmiles Senior member

    Joined:
    May 1, 2012
    Messages:
    344
    Likes Received:
    0
    More efficient coding is good for everyone! The days of poorly optimized software are hopefully coming to an end.
     
  23. BenchPress

    BenchPress Senior member

    Joined:
    Nov 8, 2011
    Messages:
    392
    Likes Received:
    0
    Software is very much restrained by hardware. Not by its theoretical performance, but by the difficulty in extracting that performance.

    Just a few years back there was a dramatic paradigm shift. Before it, developers didn't have to do a single thing to make their software run faster on newer hardware. The Pentium 4 scaled all the way from 1.3 GHz to 3.8 GHz! Then it hit a power consumption wall, but fortunately Intel was able to switch to the Core 2 architecture, which achieved higher IPC and still scaled from around 2 GHz to over 3 GHz. Developers still didn't have to do anything to benefit from this newer hardware. But then it all stagnated...

    Multi-core dramatically increases the available computing power, but it's notoriously difficult to multi-thread software in a scalable way. It becomes quadratically harder to ensure that threads are interacting both correctly and efficiently. We need a breakthrough in technology to make it straightforward again for developers to take advantage of newer hardware. And Intel is stepping up to the plate by offering TSX in Haswell.
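    As an illustration only (the shared counter and the function are hypothetical, not anything from this thread), the Haswell RTM intrinsics are expected to let code attempt a hardware transaction and fall back to an ordinary lock, roughly like this:

    ```cpp
    #include <immintrin.h>   // _xbegin/_xend RTM intrinsics (build with -mrtm)
    #include <mutex>

    long counter = 0;          // hypothetical shared data
    std::mutex fallback_lock;  // conventional lock for the fallback path

    void increment() {
        unsigned status = _xbegin();        // try to open a hardware transaction
        if (status == _XBEGIN_STARTED) {
            ++counter;                      // runs speculatively; conflicts abort
            _xend();                        // commit the transaction
        } else {
            // Aborted or unsupported: do it the old-fashioned way under a lock.
            std::lock_guard<std::mutex> guard(fallback_lock);
            ++counter;
        }
    }
    ```

    (A production-quality lock-elision scheme would also have the transaction observe the fallback lock; this is only meant to show the shape of the API.)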

    CPUs have also increased their theoretical performance by using SIMD vector instructions. But again, up until now it has been notoriously difficult to take advantage of that, often requiring you to write assembly code or at least have equivalent knowledge. So the average developer hasn't benefited much from it. The breakthrough here is AVX2, again to be introduced in Haswell. It enables developers to write regular scalar code and have it vectorized automatically by the compiler. Previous SIMD instruction sets were not very suitable for auto-vectorization because they lacked gather support (parallel memory access) and certain vector equivalents of scalar instructions. Basically, AVX2 makes it possible to achieve high performance with low difficulty the same way a GPU does, only fully integrated into the CPU, thus allowing the use of legacy programming languages.
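    To make that concrete, here is a rough example of the kind of plain scalar loop an AVX2-aware compiler could vectorize automatically; the function and array names are mine, and the indexed load data[index[i]] is exactly the access pattern that maps to a gather:

    ```cpp
    #include <vector>
    #include <cstddef>

    // Ordinary scalar code, no intrinsics or assembly. With a compiler targeting
    // AVX2 (e.g. -O3 -mavx2), a loop of this shape is a candidate for automatic
    // vectorization: the data[index[i]] load can become a vector gather.
    void scale_indexed(const std::vector<float>& data,
                       const std::vector<int>& index,
                       std::vector<float>& out,
                       float factor) {
        for (std::size_t i = 0; i < index.size(); ++i) {
            out[i] = data[index[i]] * factor;
        }
    }
    ```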

    So next year we'll witness a revolution in hardware technology, and the software will soon follow.
     
  24. PaGe42

    PaGe42 Junior Member

    Joined:
    Jun 20, 2012
    Messages:
    13
    Likes Received:
    0
    Hardware has run away from software in recent years. I don't need a quad core processor to run my web browser. I don't need 16 GB of RAM to run my word processor.

    Sure, there are problems that require more processing power. But that is mostly a data problem. I don't need AVX2 or TSX to process a thousand elements. But all the hardware in the world is not enough to simulate the universe.
     
  25. pantsaregood

    pantsaregood Senior member

    Joined:
    Feb 13, 2011
    Messages:
    984
    Likes Received:
    34
    As long as Flash Player exists, this is wrong.