Why are desktop CPUs so slow at improving?

Discussion in 'CPUs and Overclocking' started by generalako, Dec 5, 2017.

  1. shortylickens

    shortylickens No Lifer

    Joined:
    Jul 15, 2003
    Messages:
    66,747
    Likes Received:
    2,773
    Generally speaking, any time you want a major change in computers you usually need a minor breakthrough in physics or chemistry.

    I know.
    I used to work at Hynix.
     
  2. epsilon84

    epsilon84 Senior member

    Joined:
    Aug 29, 2010
    Messages:
    298
    Likes Received:
    148
    I don't want my PC to sound like a jet. Try cooling a 150W CPU in silence - it's not easy.

    Not to mention the other benefits of lower power consumption as stated by many other posters.
     
    William Gaatjes and whm1974 like this.
  3. whm1974

    whm1974 Platinum Member

    Joined:
    Jul 24, 2016
    Messages:
    2,206
    Likes Received:
    288
    This is why I'd rather stick with 65 W CPUs if they can offer enough performance and cores/threads. I mean, kudos to AMD for releasing the Ryzen 7. 8 cores/16 threads at 3.0 GHz with a 65 W TDP? Hell, last year if you wanted that, you had to pay an arm and a leg and settle for 140 W.
     
    William Gaatjes likes this.
  4. frozentundra123456

    frozentundra123456 Diamond Member

    Joined:
    Aug 11, 2008
    Messages:
    9,785
    Likes Received:
    420
    You have gotten plenty of intelligent and reasonable answers. Seems you only want "answers" that dovetail with your preconceived opinion.
     
    Burpo, SMU_Pony, Rifter and 1 other person like this.
  5. whm1974

    whm1974 Platinum Member

    Joined:
    Jul 24, 2016
    Messages:
    2,206
    Likes Received:
    288
    I agree. Seems like the OP is wedded to the idea of ARM desktop CPUs replacing x86. I find that to be highly unlikely as the x86 ISA is well entrenched.
     
  6. Thunder 57

    Thunder 57 Senior member

    Joined:
    Aug 19, 2007
    Messages:
    205
    Likes Received:
    65
    I don't think that's the only reason, though it is a very important one.

    The OP seems to focus on how Sandy Bridge at 32nm to Coffee Lake at 14nm hasn't shown as much improvement as GPUs that went from 28nm to 14/16nm. As has already been pointed out, graphics workloads are highly parallel, so with a die shrink you can just add more compute units and call it a day. CPUs are entirely different, being general purpose and also having to deal with a ton of dependencies.

    Also, remember that 28nm was a fairly mediocre process. OK, maybe it wasn't outright bad, but it was not ideal for high-power, high-performance parts. You cannot compare TSMC 28nm to Intel 32nm or GF 32nm SOI; both performed better at the cost of density. Therefore, a GPU going from 28nm to 14/16nm gained a lot more than an AMD/Intel CPU going from 32nm to 14nm.

    ARM is not some magical solution, or else it would have taken over already, since power usage is critical. Let's say you do have an A11 at 4W matching an Intel chip at 15W (which is dubious at best). Expand on that: power obviously isn't going to scale at a 1:1 ratio, but if you believe those numbers, an A11 at 40W should be in the ballpark of an Intel at 150W. That is HUGE. There is no way companies would sit there and not take advantage of that.

    There is no magic ISA. If it were possible to make a RISC CPU at 35W that performed the same as an 8700k at 91W, it would have been done. To date ARM has focused on low power, low (relative) performance areas. I would love to see a company try to design a "balls to the wall" RISC CPU, but none exist that I know of.
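
    The "power isn't going to scale 1:1" point above follows from how dynamic CMOS power works: it grows with capacitance, voltage squared, and frequency, and sustaining a higher frequency typically requires a higher voltage, so power climbs super-linearly with clock speed. A minimal sketch, with made-up round numbers for the capacitance and voltage curve (not measurements of any real chip):

    ```python
    # Why power doesn't scale 1:1 with clock speed.
    # Dynamic power is roughly P = C * V^2 * f, and higher f usually
    # needs higher V. The constants below are illustrative only.

    def dynamic_power(freq_ghz, cap_nf=1.0):
        # Crude assumption: voltage rises roughly linearly with
        # frequency in the upper range of the clock curve.
        volts = 0.7 + 0.15 * freq_ghz
        return cap_nf * volts ** 2 * freq_ghz  # arbitrary power units

    for f in (2.0, 3.0, 4.0, 5.0):
        print(f"{f:.1f} GHz -> {dynamic_power(f):.2f} units")
    ```

    Under these toy assumptions, doubling the clock from 2 GHz to 4 GHz more than triples power, which is the same shape of curve that keeps a 4 W mobile chip from simply being scaled up to a 150 W desktop part at the same efficiency.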
     
    whm1974 likes this.
  7. whm1974

    whm1974 Platinum Member

    Joined:
    Jul 24, 2016
    Messages:
    2,206
    Likes Received:
    288
    Well, I'm glad that we can now get 6 and 8 core CPUs with decent performance at a 65 W TDP without it costing an arm and a leg. However, it will be a while before I upgrade to one of those.
     
  8. trparky

    trparky Junior Member

    Joined:
    Mar 2, 2008
    Messages:
    13
    Likes Received:
    0
    But low TDPs for the sake of low TDPs isn't worth it to me, especially not if you start eating into performance while you're doing so. Yes, it's great to have a cool running processor but lately it seems that the industry has been doing this as part of some kind of arms race to make thinner and thinner notebooks which I couldn't give a crap about.
     
  9. whm1974

    whm1974 Platinum Member

    Joined:
    Jul 24, 2016
    Messages:
    2,206
    Likes Received:
    288
    65 W is a desktop CPU TDP, not a notebook one. And yes, I don't like super-thin notebooks either.
     
  10. trparky

    trparky Junior Member

    Joined:
    Mar 2, 2008
    Messages:
    13
    Likes Received:
    0
    I wonder what we could do if we weren't constrained to a 65 W TDP. Perhaps a 100 W TDP?
     
  11. whm1974

    whm1974 Platinum Member

    Joined:
    Jul 24, 2016
    Messages:
    2,206
    Likes Received:
    288
    What is wrong with a 65 W TDP? Why not try to squeeze out more performance and clock speed per watt?
     
  12. dullard

    dullard Elite Member

    Joined:
    May 21, 2001
    Messages:
    21,872
    Likes Received:
    474
    Lots of processors go above 65 W. But, processors have a very strong speed/power optimum. Go past that optimum and you don't gain very much speed for a much larger amount of power.

    Take Kaby Lake i7 chips for an example.
    • Going from 35 W (7700T) to 65 W (7700) TDP gains you 700 MHz base speed and 400 MHz quad core turbo speed.
    • Going from 65 W (7700) to 91 W (7700K) TDP gains you 600 MHz base speed and 400 MHz quad core turbo speed.
    • Going from 91 W (7700K) to 112 W (7740X) TDP gains you a measly 100 MHz base speed and 100 MHz quad core turbo speed.
    • Give the 7700K infinite power and you probably won't get much above ~600 MHz gain.
    There were big speed gains up to about the 91 W point at which more power starts to get you almost nothing. Yes, more power is faster, but you are well past the point of diminishing returns.


    Note: I blurred the lines between TDP and actual power used here, but the point is still correct.
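
    The diminishing returns in the list above can be made concrete by dividing each step's clock gain by its extra wattage (figures taken from the post; as noted, TDP is only a proxy for actual power draw):

    ```python
    # MHz of base clock gained per extra watt of TDP, using the
    # Kaby Lake figures quoted above (7700T -> 7700 -> 7700K -> 7740X).
    # TDP stands in for real power draw, so this is approximate.
    steps = [
        ("7700T -> 7700",  35,  65, 700),
        ("7700  -> 7700K", 65,  91, 600),
        ("7700K -> 7740X", 91, 112, 100),
    ]
    for name, watts_lo, watts_hi, mhz_gain in steps:
        rate = mhz_gain / (watts_hi - watts_lo)
        print(f"{name}: {rate:.1f} MHz per extra watt")
    ```

    The first two steps both return roughly 23 MHz per extra watt, while the 7700K-to-7740X step collapses to under 5 MHz per watt, which is exactly the "almost nothing past 91 W" cliff described above.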
     
  13. trparky

    trparky Junior Member

    Joined:
    Mar 2, 2008
    Messages:
    13
    Likes Received:
    0
    Because I'd like to have more GHz. There's still a lot of older programs (and games) out there that benefit from high clock speed.

    Two such games that I play (Starcraft 2 and Diablo 3) both require high clock speed to play decently because both game engines are un-optimized piles of crap. That's the chief reason why I didn't go with AMD Ryzen because the Ryzen chips are stuck in the sub-4 GHz basement and both games need clock speeds in excess of 4 GHz to play decently.

    Edit
    My system has a Core i5 3570k overclocked to 4.4 GHz.
     
  14. whm1974

    whm1974 Platinum Member

    Joined:
    Jul 24, 2016
    Messages:
    2,206
    Likes Received:
    288
    I haven't played those games, but aren't they really old games? You should be able to play both of them quite well on anything modern.
     
  15. trparky

    trparky Junior Member

    Joined:
    Mar 2, 2008
    Messages:
    13
    Likes Received:
    0
    Yeah, they're really old games, but they were written like crap. They're not at all multi-core aware, meaning they need high single-threaded performance to play decently. Unfortunately Ryzen suffers in this department, and not only that, but it seems that modern processors are sacrificing clock speed to get lower temps and power requirements.
     
  16. whm1974

    whm1974 Platinum Member

    Joined:
    Jul 24, 2016
    Messages:
    2,206
    Likes Received:
    288
    Well you should keep in mind that not everyone wants an extra space heater.
     
  17. trparky

    trparky Junior Member

    Joined:
    Mar 2, 2008
    Messages:
    13
    Likes Received:
    0
    I live in Ohio; I need to keep warm in the winter.
     
  18. whm1974

    whm1974 Platinum Member

    Joined:
    Jul 24, 2016
    Messages:
    2,206
    Likes Received:
    288
    What about summer?
     
  19. LTC8K6

    LTC8K6 Lifer

    Joined:
    Mar 10, 2004
    Messages:
    25,756
    Likes Received:
    869
    The 8350K is just a relabeled 7700K with the HT disabled.
     
  20. ninaholic37

    ninaholic37 Golden Member

    Joined:
    Apr 13, 2012
    Messages:
    1,858
    Likes Received:
    20
    Sounds to me like this is the answer: not much gain above a certain TDP. I think this is why "10GHz by 2005" failed. When they hit the frequency wall with the Pentium 4, they went back to more of a Pentium 3-style architecture, expanded it with more cores, and started to focus more on mobile and performance per watt instead of more GHz. It seems the early 2000s were the death of big gains, and that might not be possible again until they switch from silicon to something else (maybe).
     
    whm1974 likes this.
  21. Rifter

    Rifter Lifer

    Joined:
    Oct 9, 1999
    Messages:
    10,494
    Likes Received:
    515
    I agree, until we get a silicon replacement we are pretty much f'ed as far as large gains go.
     
  22. jpiniero

    jpiniero Diamond Member

    Joined:
    Oct 1, 2010
    Messages:
    4,559
    Likes Received:
    225
    Actually, Intel made some comments several years ago now that made it sound like, when Intel does make the real materials change, clock speeds (and power consumption) are going to go down significantly. IOW, slower single-thread than something like the 8700K, but at a fraction of the power consumption. That doesn't mean it will end up happening, of course, but given Intel's server-first mentality now, you have to think they will be prioritizing power draw more than ever.
     
  23. whm1974

    whm1974 Platinum Member

    Joined:
    Jul 24, 2016
    Messages:
    2,206
    Likes Received:
    288
    Yeah, even a small room with a bunch of servers in it consumes a lot of power and needs a lot of cooling.
     
  24. TheELF

    TheELF Platinum Member

    Joined:
    Dec 22, 2012
    Messages:
    2,263
    Likes Received:
    125
    Why do you assume that? Just because that's the way the current material works? Better behavior is the whole point of a new material; they could reach the same single-thread speed with lower clocks/power.
     
  25. AMDisTheBEST

    AMDisTheBEST Senior member

    Joined:
    Dec 17, 2015
    Messages:
    583
    Likes Received:
    67
    Remember the jump from the A8 to the A9? Yeah... a 100% increase in single-core performance. It totally blew Qualcomm and the others out of the water. Much of it is due to the clock speed bump from the new manufacturing node, however; I don't know how much came from improvements in IPC.