
Intel CPU Rant

Gaming and simulation workloads see very little benefit from Intel's quest to just keep adding more cores and lowering the frequency. Desktop consumers want to see more than a 7% gain after 3+ years of CPU "progress" ... adding cores solves NOTHING for games/simulations.

The best stable OC for the 5960X is around 4.3 GHz; for the 3960X it's 4.8 GHz (under a common cooling solution, not an extreme one) ... the performance difference is only 7% ... so 3+ years of CPU progress is only producing a 7% gain? Most of that gain is probably related to the X99 chipset and not the CPU.

Consumers want higher frequency and fewer cores; that's what works best for desktop computing, games, and simulations. The fact that this doesn't fit Intel's marketing strategy isn't a justification to NOT provide what consumers really want ... more die space, higher frequency.

I would much rather pay $1000 for a 4-core CPU operating at 6 GHz than for 8 cores operating at 4.3 GHz.

What do you guys think?

CG

The laws of physics put an end to the GHz race about 10-12 years ago. Now the only way to improve performance is by improving efficiency and occasionally adding more cores (though we've been at 4 cores in the mid range far too long now). More competition might help, but we've pretty much reached the end of the road as far as clock-speed goes.
 
Unofficial Intel timeline: 7nm chips in 2020, 5nm in 2022? http://liliputing.com/2016/01/unofficial-intel-timeline-7nm-chips-in-2020-5nm-in-2022.html


In the meantime, it looks like here's the unofficial roadmap for new Intel chips for the next few years (if the rumors are correct):

2016 – 14nm “Kaby Lake” (Tock)
2017 – 10nm “Cannonlake” (Tick)
2018 – 10nm “Icelake” (Tock)
2019 – 10nm “Tigerlake” (Tock)
2020 – 7nm TBD (Tick)
2021 – 7nm TBD (Tock)
2022 – 5nm TBD (Tick)
Does anyone have a clue about the power of these upcoming CPUs? Which one will be the one to upgrade to?

Thank you,

CG
 
I love Intel's strategy of providing few substantial gains from each generation for high-end consumers, it keeps me from having to give Intel as much money.
 
There were also significant differences at a lower than microarchitectural level: the P4 used more power-consuming logic (do "domino logic" and "low voltage swing" ring any bells?) vs. Nehalem's cooler but slower static CMOS logic.

It's difficult to separate the microarchitectural concessions from the implementation ones when so much of Netburst's per-clock performance was sacrificed for clock speed, and it was such a poorly balanced design.

Domino logic was also used in the first Cortex-A8 Exynos, the one with the implementation code-named Hummingbird. But it didn't result in any massive performance difference vs the competition. I'm sure this and other design changes targeting performance at the expense of power consumption would boost the clock speed of a Skylake-like design, but probably not to 6GHz on air.
 
I think that instead of wanting higher-speed processors we need to be looking at the software developers and expecting them to utilize more cores. There's a lot to be said for optimizing code for improvements. I know it's not a direct comparison, but Apple manages to squeeze more performance from its older iDevices with new iOS updates, and they certainly aren't making the processors run faster to get those gains. It's all well and good to keep looking at single-threaded workloads, but dual cores+ have been the norm for how long now? And now we have many baseline CPUs (i3) coming out with HT as well, which gives 4 logical cores that could be leveraged much better.
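As a minimal sketch of what "utilize more cores" asks of developers, here's independent CPU-bound work split across processes with Python's stdlib (the prime-counting job is just a hypothetical stand-in for real work; processes rather than threads are used because CPython's GIL serializes CPU-bound threads):

```python
# Split independent CPU-bound jobs across cores with a process pool.
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """CPU-bound work: count primes below limit by trial division."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    chunks = [5000] * 4          # four independent jobs, one per core
    with ProcessPoolExecutor() as pool:
        totals = list(pool.map(count_primes, chunks))
    print(sum(totals))
```

The catch, as the replies below note, is that the work has to actually be divisible into independent chunks like this.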
 
Problem is, some things simply are not, and never will be, parallelizable. ST performance is always going to matter somewhere.

Personally, I would focus on getting as much of the latency out of the cache system (and, on the OEM side, getting the fastest I/O I could) as possible. Perceived speed often differs from on-paper specs. Lots of low-latency, high-associativity cache would help.
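The "ST always matters" point is just Amdahl's law: the serial fraction of a workload caps the speedup no matter how many cores you add. A quick illustration (the 90%-parallel figure is an arbitrary example, not a measurement):

```python
# Amdahl's law: speedup on n cores when only a fraction p of the
# work is parallelizable. The serial part (1 - p) caps total gain.
def amdahl_speedup(p, n):
    """Speedup with parallel fraction p running on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 90% of the work parallelized, 8 cores give well under 8x:
print(round(amdahl_speedup(0.9, 8), 2))       # ~4.71x
# And with effectively infinite cores the ceiling is 1/(1-p) = 10x:
print(round(amdahl_speedup(0.9, 10**9), 2))   # ~10x
```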
 
Problem is, some things simply are not, and never will be, parallelizable. ST performance is always going to matter somewhere.

Personally, I would focus on getting as much of the latency out of the cache system (and, on the OEM side, getting the fastest I/O I could) as possible. Perceived speed often differs from on-paper specs. Lots of low-latency, high-associativity cache would help.

I'd agree with this, but there's also a human factor in efficient coding.

A good example of how latency can REALLY affect computation times is with matrix multiplication. Multiplying giant matrices together can potentially lead to billions of cache misses if done naively, which can lead to computation times of many seconds. An efficient algorithm might only generate a couple million cache misses and cut computation time down to a second or less.
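The naive-vs-efficient difference described above can be sketched with loop tiling (cache blocking), a standard trick for matrix multiply. This is illustrative only: the tile size here is arbitrary, and actual cache-miss counts depend on the machine.

```python
# Naive vs cache-blocked (tiled) matrix multiply on n x n lists.
# The tiled version works on small blocks that fit in cache, so on
# large matrices it generates far fewer cache misses.
def matmul_naive(a, b, n):
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += a[i][k] * b[k][j]   # walks b column-wise: cache-hostile
            c[i][j] = s
    return c

def matmul_tiled(a, b, n, t=32):
    c = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, t):
        for kk in range(0, n, t):
            for jj in range(0, n, t):
                for i in range(ii, min(ii + t, n)):
                    for k in range(kk, min(kk + t, n)):
                        aik, row_b, row_c = a[i][k], b[k], c[i]
                        for j in range(jj, min(jj + t, n)):
                            row_c[j] += aik * row_b[j]  # streams rows: cache-friendly
    return c
```

Both produce the same result; only the memory access pattern changes, which is exactly the latency effect being described.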
 
I'm sure there is more than a 7% gain after every year, let alone every 3-5 years. We just need to give the software side of things a chance to catch up with the hardware side to make those gains more noticeable. Besides, GHz isn't everything; there are lots of other things in a CPU that affect PC performance as well: IPC, bus, cache, compression, and instruction sets, just to name a few. Right now vendors are still playing catch-up to take advantage of 5-year-old instruction sets that are in CPUs but unused for whatever reason.
 
Intel's quest to just keep adding more cores and lowering the frequency.

Wait... Intel is on a quest to keep adding cores? It's funny how all I see at the store for under $300 is a quad core. Same now as it was 9 years ago. Also, the stock frequency has slowly gone up, which is again exactly the opposite of what you said.
 
Wait... Intel is on a quest to keep adding cores? It's funny how all I see at the store for under $300 is a quad core.

Indeed, it's absolutely pathetic that they have 22-core server CPUs that turbo up to 3.6 GHz while the mainstream sits moldering at 4 cores...
 
Indeed, it's absolutely pathetic that they have 22-core server CPUs that turbo up to 3.6 GHz while the mainstream sits moldering at 4 cores...

Why? You can buy more cores if you wish. But you forget what the 99.x% crowd wants. It's no secret that this forum and reality are light-years apart.
 
Indeed, it's absolutely pathetic that they have 22-core server CPUs that turbo up to 3.6 GHz while the mainstream sits moldering at 4 cores...
Server CPUs are commercial parts, which means they are completely different in just about everything from consumer CPUs. You just can't compare the two; it's like asking why a Ford Mondeo doesn't come with the towing capacity of a Ford F-450.
 
Yeah, you can buy more cores if you want, but on an architecture that is currently three generations old, and on a separate platform. I adamantly believe, though, that Intel should introduce a hex-core on the mainstream platform. In addition to offering more cores on the best process and architecture, it would also make a viable upgrade path without having to buy a new, expensive motherboard. I also believe that a company that basically gives the finger to its most affluent and knowledgeable customers by catering obsessively to the "99% crowd" will ultimately pay a price for it.
 
They've done the math. It would cost too much to sell any quantity to consumers. That means they'd lose money.

No business will purposely set out to lose money.
 