Is mainstream desktop CPU development "completed"?


SOFTengCOMPelec

Platinum Member
May 9, 2013
2,417
75
91
There is a reason why I wrote "high-latency vector processors" and not "iGPU".
What I described would not work on any existing CPU. And it seems that no one is even trying to go this way - hiding WHAT executes an instruction from a "common supported pool". Instead we see explicit signalling for the co-processor exposed up to higher software layers (e.g. HSA).

Wide vector instructions are mostly suited to operating on streams or matrices, where "intervention" from the scalar ALU is sparse.
128-bit vectors? There are notoriously well-known algorithms built on 4-float vectors for anything that has to do with space. That is why AMD's flex-FPU looked quite interesting.
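
To make the 4-float point concrete, here is a minimal sketch in C using SSE intrinsics (my own illustration, not anything from AMD's flex-FPU): a 128-bit register holds exactly four floats, which maps naturally onto x/y/z/w coordinates.

Code:
#include <xmmintrin.h>  /* SSE: 128-bit registers, i.e. 4 packed floats */

/* Translate a 3D point stored as x/y/z/w by an offset vector.
   The whole point fits in one register, so this is one instruction. */
static __m128 translate(__m128 point, __m128 offset)
{
    return _mm_add_ps(point, offset);  /* four component-wise adds at once */
}

int main(void)
{
    /* _mm_set_ps takes components in w, z, y, x order */
    __m128 p = _mm_set_ps(1.0f, 3.0f, 2.0f, 1.0f);
    __m128 d = _mm_set_ps(0.0f, 1.0f, 1.0f, 1.0f);
    float out[4];
    _mm_storeu_ps(out, translate(p, d));  /* out = {x, y, z, w} */
    return (int)out[0];
}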

I'm secretly hoping that they can one day come up with a processing block which contains a huge number of relatively simple processor cores, a bit like Intel's Knights Landing, but for mainstream computers.

It could then act as a powerful aid to the CPU and GPU, and perform its own high-speed calculations.

Programming such a device would probably be a mini-nightmare, but I would hope that powerful libraries could be written to wrap up useful functionality, so that programmers can use it much more easily.
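
As a sketch of what such a library call might look like (hypothetical names, using OpenMP in C purely as a stand-in for whatever the real thing would be): the programmer hands over an array and a function, and the library worries about the cores.

Code:
#include <stddef.h>

/* Hypothetical library routine: apply f to every element, spreading
   the work across however many simple cores the block provides. */
static void parallel_map(float *data, size_t n, float (*f)(float))
{
    #pragma omp parallel for  /* compiled with -fopenmp */
    for (size_t i = 0; i < n; i++)
        data[i] = f(data[i]);
}

static float square(float x) { return x * x; }

int main(void)
{
    float buf[1024];
    for (size_t i = 0; i < 1024; i++)
        buf[i] = (float)i;
    parallel_map(buf, 1024, square);  /* caller never sees the core count */
    return 0;
}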

There is already such a chip on the market (not made by Intel): the Epiphany-IV 64-core 28nm microprocessor (E64G401). The family came with 16 or 64 small cores, in addition to a couple of ARM processors. The 16-core + 2-ARM version is about $99 for the complete board, and is already available.
Sadly the 64-core + 2-ARM version is no longer being made or sold. It was a Kickstarter project initially, but the board (the 18-core version only, counting the two ARM cores) can now be bought in the usual way.

The big hitch with it, really, is that developing software for it is probably very difficult.

When/if many-core ARM CPUs are widely available, they will probably make for more interesting products. The Kickstarter chips are too simple for serious work, in my opinion.
But if they had got to 256, 1024, 4096 cores etc. at a low cost, available to buy, it would have been very tempting, especially if useful software was starting to appear for it.

Can many-core CPUs be put to good use in the future?
I'm not sure, but I very much hope/think they can and will be successful.

Anyone who thinks that CPUs will be fine staying as quads should think about other upcoming inventions, such as robots.

I challenge anyone to make a robot which does things somewhat intelligently, including recognizing things it "sees", moving about/walking/riding a bicycle, etc., WITHOUT significant CPU abilities. I.e. WITHOUT using a human operator. Fully autonomous.

Anyone who thinks we are a very, very long way from doing this needs to view this video of one doing just that (not fully autonomous, as a human operator is doing some of it).



18-core Epiphany: [image of the Parallella board]
 

WhoBeDaPlaya

Diamond Member
Sep 15, 2000
7,415
404
126
My 4770K @ 4.6GHz ~= i7-920 @ 5.7GHz
Newer platform and features are nice, but it's a snorefest performance-wise.
 

Shehriazad

Senior member
Nov 3, 2014
555
2
46
Yeah... I kind of feel like the +5% or whatever you get from every revision Intel releases is actually the only improvement that is "feasible/sensible" right now.

They could probably do a lot better... if they were to ignore the cost. And currently there just isn't a huge need for standard desktop users to go much beyond the Intel 47xxK CPUs.

A lot of actual professionals in many fields look at metrics other than IPC/clock anyway... and for gamers? The current-gen CPUs have so much juice, and it will only get better used once DX12/Vulkan etc. are unleashed on the general public.



Intel COULD (and AMD for that matter, but they're behind, so I leave them out on purpose) probably push for a lot more performance and sell some magic 5GHz 8-core consumer desktop chips! Of course they could... it just doesn't seem sensible.



I'm sure at some point INTEL/AMD/WHOEVER will have some nice breakthrough in nodes or materials that will open up an easy path that makes sense on a cost/yield basis... and then we will see some fast improvements. For RAM the answer seems to be memory cubes and HBM, to finally phase out DDR... for CPUs? Meh, we've got to wait... the market really isn't all that much about desktops right now.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
It's really hard to justify any core count above 4 on a desktop PC unless you do something silly all day and belong to a niche.

Why not go a step further and get rid of those quad cores? Bring mainstream desktop down to two cores:

Pentium = 2C/2T
Core i3 = 2C/4T
Core i5 = 2C/6T
Core i7 = 2C/8T

The cores would be wider, of course, but I think we are at the point on the frequency/voltage curve where wider but lower-frequency cores would still be more efficient than pushing the current-width cores any faster (especially when the additional threads per core are factored in).
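
As a rough sketch of why (standard dynamic-power scaling, not anything Intel has published): dynamic power goes roughly as

\[ P_{\text{dyn}} \approx C V^2 f, \qquad V \propto f \;\Rightarrow\; P_{\text{dyn}} \propto f^3 \]

so near the top of the curve a core run at 0.8x frequency draws only about 0.8^3 ≈ 0.5x the power, and that saved power can buy more throughput as extra width (or an extra core) than the 20% of frequency it gives up.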

You are not getting anything at all. Hence why we got turbo modes. A quad core with 2 threads runs as fast as a dual core could.

I'm thinking the dual core with 4-way hyperthreading would have a smaller die size than 4C/8T quad core.

Also maximum single thread performance should be higher on a wider core than a narrower core (even with extra turbo on the narrower core factored in).
 

creativedotit

Junior Member
Mar 28, 2015
3
0
0
The new MacBook is using an Intel Core M ultra-low-voltage processor:

https://www.apple.com/macbook/specs/

It doesn't need fans to cool down. Like a smartphone.

For now this also means low frequency (2 cores x 1.2GHz, with turbo at 2.0GHz, at just a 4.5W TDP),
but as happens with DDR RAM, optimizing chips to support higher frequencies is what comes after a new architecture is launched.

So I think they are investing in bringing this architecture to the next performance level.

I imagine a future where 3-4GHz (and even 5GHz) will be brought to those ultra-low-voltage chips, and where they can easily be used in multi-core architectures and multi-socket motherboards.

Imagine this very small but powerful stuff plugged into a future motherboard as easily as RAM: maybe a 4-8 socket mobo with slots for mobile CPUs @ 4GHz with 16 cores each, and no need for mechanical cooling systems. Everything would be very small, and if you consider that DDR4 and SSD density and performance will also improve a lot, you would really be able to hold a current datacenter in one hand.

This is the future I dream of, and I think this is what they are researching, because pushing past the 5GHz limit or increasing the core count on current desktop CPUs would be too energy/TDP inefficient.
 

Shehriazad

Senior member
Nov 3, 2014
555
2
46
Well yeah, I guess I ignored that part of the desktop.


While performance is only creeping along... the size and voltage of CPUs are still VERY much being worked on.

The future is the SoC (or so I'd like to believe), and in due time (well... years... ok?) we might still have processors that are only somewhat faster than our current CPUs... but they will manage that speed at an absurdly low TDP while fitting the entire system on a chip (hah).


So yeah... in terms of performance, unless there is some major breakthrough... that's pretty much it for some years, apart from the regular 5-10% IPC improvements.


I would really not mind having current Intel high-end CPU and AMD/Nvidia GPU performance on a 10W passively cooled SoC that I can carry around, hook up to a monitor/input device, and do all my desktop stuff with. I guess that would actually be the death of the classic desktop... but before that REALLY starts to phase out, we are probably going to have another decade of desktop-master-race people with their huge PCs and 5-meter-long GPUs.




But then again... do we really NEED more single-core CPU power? I really do not think so. We need to get more efficient at using more than one core. We finally need to get to the point where 4C/8T CPUs can actually be utilized by any kind of high-performance app. Not even games have managed this so far on the desktop. (Sure, a few games have their neat multi-core support... but it seems really sub-par and not widespread.)
 

Dave2150

Senior member
Jan 20, 2015
639
178
116
Well yeah, I guess I ignored that part of the desktop.

While performance is only creeping along... the size and voltage of CPUs are still VERY much being worked on.

The future is the SoC (or so I'd like to believe), and in due time (well... years... ok?) we might still have processors that are only somewhat faster than our current CPUs... but they will manage that speed at an absurdly low TDP while fitting the entire system on a chip (hah).

So yeah... in terms of performance, unless there is some major breakthrough... that's pretty much it for some years, apart from the regular 5-10% IPC improvements.

I would really not mind having current Intel high-end CPU and AMD/Nvidia GPU performance on a 10W passively cooled SoC that I can carry around, hook up to a monitor/input device, and do all my desktop stuff with. I guess that would actually be the death of the classic desktop... but before that REALLY starts to phase out, we are probably going to have another decade of desktop-master-race people with their huge PCs and 5-meter-long GPUs.

But then again... do we really NEED more single-core CPU power? I really do not think so. We need to get more efficient at using more than one core. We finally need to get to the point where 4C/8T CPUs can actually be utilized by any kind of high-performance app. Not even games have managed this so far on the desktop. (Sure, a few games have their neat multi-core support... but it seems really sub-par and not widespread.)

All new PC games ported from the current consoles need more than one core already; they have for quite a while now.

Some new games do actually use 4, and show performance advantages when benchmarking a 2-core vs a 4-core CPU in said games.
 

TheELF

Diamond Member
Dec 22, 2012
4,029
753
126
I agree. But there are a number of processes that benefit from more cores. F@H for one can utilize almost an infinite number of cores. You have to have 24 cores to even get the big units. And I am sure there are hundreds more examples that can and do use lots of cores.

F@H is a perfect example of parallelism: lots of initial data that can be split into arbitrarily small parts, each of which can be operated on without needing any information about any of the other small parts of the data.
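
A minimal sketch of that pattern in C with OpenMP (just an illustration of the shape of the problem, not F@H's actual work-unit code): every element is independent, so the loop splits cleanly across any number of cores with no communication between them.

Code:
#define N 1000000

static double work[N];

int main(void)
{
    /* Embarrassingly parallel: each iteration touches only work[i],
       so it scales to as many cores as the runtime can find. */
    #pragma omp parallel for  /* compiled with -fopenmp */
    for (long i = 0; i < N; i++)
        work[i] = (double)i * 0.5;  /* stand-in for per-chunk simulation */
    return 0;
}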
 

TheELF

Diamond Member
Dec 22, 2012
4,029
753
126
But if they had got to 256, 1024, 4096 cores etc. at a low cost, available to buy, it would have been very tempting, especially if useful software was starting to appear for it.

I challenge anyone to make a robot which does things somewhat intelligently, including recognizing things it "sees", moving about/walking/riding a bicycle, etc., WITHOUT significant CPU abilities. I.e. WITHOUT using a human operator. Fully autonomous.

Yeah, we're going to have to wait for quantum computers for stuff like that (neural-network emulation). That's going to be awesome and all, but you will still want a CPU (or at least a couple of cores) with high single-thread speed. It's just like the human brain: it has a lot of neurons and can do a lot of stuff at lightning speed without even thinking about it, but if you tell someone to calculate some numbers, you are going to wait a while.
 

SOFTengCOMPelec

Platinum Member
May 9, 2013
2,417
75
91
Yeah, we're going to have to wait for quantum computers for stuff like that (neural-network emulation). That's going to be awesome and all, but you will still want a CPU (or at least a couple of cores) with high single-thread speed. It's just like the human brain: it has a lot of neurons and can do a lot of stuff at lightning speed without even thinking about it, but if you tell someone to calculate some numbers, you are going to wait a while.

I presume that if/when significantly (artificially) intelligent brains become a reality, they will have some kind of built-in standard services,
such as the current time/date, history books, Google search or similar information, powerful calculators (as you mentioned), etc.

A bit like my human brain, nowadays.
Example:
Let's say I am reading a complicated technical document, online.
Every now and then I come across a word or two that I have either never heard of, or only partially understand, or that simply reminds me that I am interested in it and makes me want to get updated information about it, now.
So I immediately google it and/or use a calculator, etc.

tl;dr
My brain is "almost" plugged into Google. Or is it the other way round?
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
I'm thinking the dual core with 4-way hyperthreading would have a smaller die size than 4C/8T quad core.

Also maximum single thread performance should be higher on a wider core than a narrower core (even with extra turbo on the narrower core factored in).

A wider core won't give you much. Maybe 5% if you are lucky. Again... software.
 

SPBHM

Diamond Member
Sep 12, 2012
5,076
440
126
http://anandtech.com/bench/product/287?vs=1260



Are you sure NOTHING has happened performance-wise since Sandy Bridge?

Both of these CPUs are unlocked; the 2600K can be overclocked by 1GHz+, the 4790K a lot less, so final performance is closer than the stock numbers suggest. And apart from the 4790K, all the Haswell CPUs are clocked a lot lower.

The 2600K is from Q1 2011; if you compare Q1 2011 with 2007 you will understand why people noticed a difference back then (65nm Core 2 Quad vs Sandy Bridge).

But it's clear that the focus now is on low power and the IGP.
 

SOFTengCOMPelec

Platinum Member
May 9, 2013
2,417
75
91
I'm guessing, but perhaps 99.8% of Intel's sales are non-K CPUs,
and 0.2% are K CPUs.

Intel gets approximately the same money/profit from the overclockable K-series CPUs as from the non-K ones.

Therefore it makes huge business sense for Intel to optimize their chips for the server market, the low-power mobile/laptop markets, and general consumer desktop CPUs,

given that they are also trying to optimize other things, such as yield, power consumption, profit, life expectancy, reliability, etc.

The 0.2% overclockable K CPUs, with little or no extra profit in them, are way, way down their priority list, I would think.

We really should be judging their CPUs' performance by the advertised (spec sheet) frequency/performance figures.

Overclocking is a bonus which we have not really paid anything (much) for. If we get it, great. If not, we can't really complain.

I.e. they say it works great at 3.5GHz, and it does. That is all it really has to do. Anything above that is a bonus, if you are lucky.

Analogy:
If you buy a TV which is claimed to last at least 10 years, and has a 10-year guarantee, then you can only really complain if it breaks before it is ten years old.

It may last 11 years and fail. It may last 21 years and still be working just great.
Part of it is luck, and part of it varies as the TV manufacturer changes the product over the years.

tl;dr
Intel have been gently speeding up their CPUs over the last 5 years.
 

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
This is probably a bit more revealing, 2600K vs 4770k.

http://anandtech.com/bench/product/287?vs=836



The thing is, even at a ~2-40% increase, with maybe an average of 25 or 30%, this is on CPU-intensive synthetics. Most usage is not CPU-intensive, like me typing this post. We're much more limited by disk I/O, USB speeds, GPU performance, and internet connection speeds.

The same thing is happening with phones. Maybe it doesn't show up in benchmarks yet, but I know a lot of people perfectly happy with iPhone 5 performance (A6), LG G2 performance (Snapdragon 800), etc. Even those who upgraded, say to a G3 from a G2, to a Samsung S5 from an S4, or to an iPhone 6 from a 5 or 5s, don't really talk about speed bumps like they did in the iPhone 4-4s-5 and S3-S4 transitions. It's mostly about software features, displays, battery life, etc.

It's really the age of "good enough CPU speed" and "I want my battery to last longer, my display to look better, and my applications to load faster".
 

Dave2150

Senior member
Jan 20, 2015
639
178
116
This is probably a bit more revealing, 2600K vs 4770k.

http://anandtech.com/bench/product/287?vs=836



The thing is, even at a ~2-40% increase, with maybe an average of 25 or 30%, this is on CPU-intensive synthetics. Most usage is not CPU-intensive, like me typing this post. We're much more limited by disk I/O, USB speeds, GPU performance, and internet connection speeds.

The same thing is happening with phones. Maybe it doesn't show up in benchmarks yet, but I know a lot of people perfectly happy with iPhone 5 performance (A6), LG G2 performance (Snapdragon 800), etc. Even those who upgraded, say to a G3 from a G2, to a Samsung S5 from an S4, or to an iPhone 6 from a 5 or 5s, don't really talk about speed bumps like they did in the iPhone 4-4s-5 and S3-S4 transitions. It's mostly about software features, displays, battery life, etc.

It's really the age of "good enough CPU speed" and "I want my battery to last longer, my display to look better, and my applications to load faster".

Why compare the 2600K to the 4770K, which was released in June 2013?

Compare it to the 4790K. At stock it destroys the 2600K; not everyone overclocks, you know.

Also, there is X99 to consider; the 5820K is quite comparable to the 4790K in price, and completely annihilates the 2600K in performance.

Give it another 6 months and the DDR4 price will probably drop below DDR3's, making an X99/5820K setup only $100 more than a 4790K setup.

People are just fooling themselves if they think the 2600K is comparable to the X99 range; X99 is leaps and bounds ahead of it.
 

theeedude

Lifer
Feb 5, 2006
35,787
6,198
126
It's pretty much done for single-threaded software. The low-hanging fruit is gone, and extracting more ILP is very expensive power-wise for very little gain.
Sandy Bridge to Haswell is a very incremental improvement clock-for-clock, considering Intel's R&D budget.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I'm thinking the dual core with 4-way hyperthreading would have a smaller die size than 4C/8T quad core.

Also maximum single thread performance should be higher on a wider core than a narrower core (even with extra turbo on the narrower core factored in).

A wider core won't give you much. Maybe 5% if you are lucky. Again... software.

5% gain in IPC is something Intel has already been doing with the small generational bumps in performance we have been getting.

I'm hoping higher gains than that are possible, but the impression I have been getting from folks around here is that performance per watt would drop too much with a substantial increase in IPC.



I'm hoping that adding additional hyperthreads is one way Intel can regain that performance per watt.

So a 2C/8T chip with the same multi-threaded performance as a 4C/8T, but with stronger single-thread performance and a smaller die, sounds interesting to me.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
A bigger die is not equal to a wider core.

One of the easiest things is simply to add 15-20MB more L3 cache. Maybe double the L2 as well.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
A bigger die is not equal to a wider core.

The concept behind Pollack's rule is architecture enhancement: it leads to increased performance, but at reduced performance per watt. An increase in die size comes with the architecture enhancement simply because a wider core is going to need more transistors.
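
For reference, Pollack's rule is usually quoted as performance scaling with roughly the square root of the transistor budget:

\[ \text{performance} \propto \sqrt{\text{transistors}} \]

As a worked example: doubling the transistors spent on a core buys about \( \sqrt{2} \approx 1.4\times \) the single-thread performance for roughly \( 2\times \) the power, so performance per watt falls to about 0.7x of the original.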

With that mentioned, I have no idea how much further Intel's Core series can go with single-thread increases. I do know the focus lately has been on mobile, though (so there might still be a lot of extra room for single-thread improvements, but the market doesn't demand it as much as it demands power efficiency).

The question I have: "Is multi-way hyperthreading something Intel can implement in a future wider-core design to help satisfy both the push for mobile/high performance per watt and the demand for high single-thread performance?"

If we take a look at those Haswell Core i3s, hyperthreading does an awesome job of making use of the wider core. So it seems Intel is already onto something good with that tech.
 

Shehriazad

Senior member
Nov 3, 2014
555
2
46
As it stands... I'm not even sure CPUs aren't going to disappear from desktops in a few odd years (thinking maybe a decade).

Intel is putting iGPUs on its CPUs, AMD is already walking toward APU/SoC-only, and Nvidia is dipping its toes into that in other markets. Semi-custom, consoles, and mobile are all walking toward it as well.

So while the CPU as an architectural part of a chip will still exist for many years to come... I think the time of the desktop CPU is somewhat ending.

Like I stated before... I wouldn't even mind being able to just buy an SoC that plays all the high-end games and can do some semi-professional stuff as well, with low power draw and whatnot. So while CPU development is not "completed"... CPUs themselves might be in the "near" future... if you catch my drift.



Sometimes I really just want to be able to fast-forward 10 years to see where technology has arrived. But realistically speaking... once processes get down to about 5nm and that market has stabilized... there would be a ton of room left on those chips, unless you make the chips themselves so tiny that selling them as non-BGAs ends up being sub-par. GPUs will also at some point finally shrink again at the high end... but currently they're still stuck on 28nm with GDDR5... so that's going to take a bit longer.

But a 5nm chip with the same die size as current chips? It could easily fit 8+ cores, a strong GPU, the chipset/IO, and some HBM.
 

ninaholic37

Golden Member
Apr 13, 2012
1,883
31
91
People are just fooling themselves if they think the 2600K is comparable to the X99 range; X99 is leaps and bounds ahead of it.
You do know that "leaps and bounds ahead" means that it's growing rapidly and making fast progress, right? I think you mean "moving at a snail's pace". Maybe Intel thinks that if they can walk like a turtle then they will live to be 200. :biggrin:
 

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
Why compare the 2600K to the 4770K, which was released in June 2013?

Compare it to the 4790K. At stock it destroys the 2600K; not everyone overclocks, you know.

Also, there is X99 to consider; the 5820K is quite comparable to the 4790K in price, and completely annihilates the 2600K in performance.

Give it another 6 months and the DDR4 price will probably drop below DDR3's, making an X99/5820K setup only $100 more than a 4790K setup.

People are just fooling themselves if they think the 2600K is comparable to the X99 range; X99 is leaps and bounds ahead of it.

For one, the 4790 doesn't 'destroy' the 2600K in anything except Flash. For another, the 4770 and 4790 are within spitting distance of each other. And finally, there are about 10x more comparison benchmarks if you choose the i7-2600K vs the i7-4770 than vs the 4790.
 

Dave2150

Senior member
Jan 20, 2015
639
178
116
You do know that "leaps and bounds ahead" means that it's growing rapidly and making fast progress, right? I think you mean "moving at a snail's pace". Maybe Intel thinks that if they can walk like a turtle then they will live to be 200. :biggrin:

"Leaps and bounds" can mean a few things, basically it's a huge improvement, a jump forward

I assume you're not a native English speaker, so I'll forgive you :)