Speculation: AMD's response to Intel's 8-core i9-9900K


How will AMD respond to the release of Intel's 8-core processor?

  • Ride it out with the current line-up until 7nm in 2019

    Votes: 129 72.1%
  • Release Ryzen 7 2800X, using harvested chips based on the current version of the die

    Votes: 30 16.8%
  • Release Ryzen 7 2800X, based on a revision of the die, taking full advantage of the 12LP process

    Votes: 17 9.5%
  • Something else (specify below)

    Votes: 3 1.7%

  • Total voters
    179

ub4ty

Senior member
Jun 21, 2017
749
898
96
Again, it's the fastest single-threaded performance available. AutoCAD is still largely single-threaded. Stop projecting your own expertise, user needs, and above ALL morals onto others. You say you have no bias, yet you bring Intel's practices into the discussion?? Get a grip, guy, you need a time-out.

For MY needs, the 8086K will bring the best possible performance I can find, at reduced prices once the 9900K hits the shelves.
You made a highly personal post accusing people of being negative, without support. People are clearly Intel/AMD builders per their use cases and workflows, so your assertion is unfounded. You then padded in your personal processor choice, I asked why you'd go with such a processor, and you went on with more unfounded personal assertions?

Some AutoCAD operations are still largely single-threaded, which speaks to dated legacy software. Other AutoCAD operations are heavily multi-threaded and thus perform great with more cores.

http://www.cadalyst.com/hardware/wo...yers-can-benefit-threadripper-38756?page_id=3

Get a grip... calm down.
[Attached image: 1217-HoH-1.png]
 
Last edited:

Spartak

Senior member
Jul 4, 2015
353
266
136
Finally? You were the one dumping on the 8086K as a dated processor. Even if I wasn't using CAD, it's one of the fastest chips for general workstation usage and probably the fastest for gaming (which I don't do). Keep your attitude in check, man.
 

ozzy702

Golden Member
Nov 1, 2011
1,151
530
136
I am thinking of swapping out my 8700k for a 9900k. Why? Just because I can.

I plan on doing exactly that. Do I need a 9900k? Nope, but I like having the newest, fastest hardware for my use cases.

I'll sell my 8700k and recoup most of the cost, and it's a nice, easy drop-in that should have the same temps as my current 8700k. Winning!
 

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
Again, it's the fastest single-threaded performance available. AutoCAD is still largely single-threaded. Stop projecting your own expertise, user needs, and above ALL morals onto others. You say you have no bias, yet you bring Intel's practices into the discussion?? Get a grip, guy, you need a time-out.

For MY needs, the 8086K will bring the best possible performance I can find, at reduced prices once the 9900K hits the shelves.

So why do you need a Core i7 8086K if it is single-threaded? Surely you should have got a Core i3 7350K, since it was reduced quite a bit in price, or even a Core i3 8350K this generation?

Both have the same core design, so overclocked to 5GHz they should have broadly similar single-threaded performance.
 

TheELF

Diamond Member
Dec 22, 2012
3,967
720
126
ARM owns mobile and embedded but is crossing over into the desktop/server cpu market. Their licensing model and overall structure ensured their dominance.
ARM has a very limited range of software that it can run efficiently, even more limited than the range for Ryzen. Even if they match an Intel i5 in some obscure benches, they will only ever be useful in a very small niche.
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
ARM has a very limited range of software that it can run efficiently, even more limited than the range for Ryzen. Even if they match an Intel i5 in some obscure benches, they will only ever be useful in a very small niche.
In general, yes. Getting down to specifics, this has to do with software and the OS. Native ARM code runs just fine; the question is where that is big. It is big on mobile/embedded, and on Linux/macOS, which are hardly niche.

There's still a lot of room for competition and development.
I look forward to it. I hear Intel is getting into discrete GPUs by 2020 too. Hopefully that lights a fire under Nvidia's and AMD's feet; GPUs definitely need a shakeup.

Everyone's crossing over into everyone else's markets, and this is great for the consumer.
Proprietary barriers are slowly fading away.
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,000
3,357
136
If we compare two 14nm chips at around 90-95W TDP, we see Intel went from 4c/8t at 4.0 GHz base / 4.2 GHz turbo (6700K) to 8c/16t at 3.6 GHz base / 5.0 GHz turbo (9900K). That's an incredible performance leap on one node.

Three different 14nm nodes,

14nm for the 6700K (2015), 14nm+ on the 7700K (2016) and 14nm++ for the 8700K (2017).

[Attached image: Intel 14nm / 14nm+ / 14nm++ process characteristics]
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
Three different 14nm nodes,

14nm for the 6700K (2015), 14nm+ on the 7700K (2016) and 14nm++ for the 8700K (2017).

[Attached image: Intel 14nm / 14nm+ / 14nm++ process characteristics]

So, 2015 to the present day: +26% performance for the same power envelope.
Essentially nothing worth upgrading for. These aren't the 28nm-to-14nm days, so there's no reason to pretend the performance jumps are comparable. Once you stop doing so, you realize the next battle has always been core-count increases in line with software that can take advantage of them. That's why AMD set the pace for the next stage of computing by introducing affordable 8 cores to the desktop market, and more. People want game changers, not an incremental bloodletting.
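To put rough numbers on the cores-versus-clocks argument, here is a minimal back-of-envelope sketch using only the spec figures quoted upthread (4c/8t at 4.0/4.2 GHz vs 8c/16t at 3.6/5.0 GHz, plus Intel's ~26% 14nm-to-14nm++ claim). It deliberately ignores IPC changes, memory, and real turbo/power behaviour, so treat it as an illustration of the arithmetic, not a performance prediction.

```python
# Back-of-envelope comparison of two ~95W 14nm-era parts using the spec-sheet
# clocks quoted upthread. Cores x clock is a crude throughput proxy only; it
# ignores IPC changes, memory, and real turbo/power behaviour.

def throughput_proxy(cores: int, clock_ghz: float) -> float:
    """Crude multithreaded throughput proxy: cores x sustained clock."""
    return cores * clock_ghz

i7_6700k = throughput_proxy(cores=4, clock_ghz=4.0)   # 2015, base clock
i9_9900k = throughput_proxy(cores=8, clock_ghz=3.6)   # 2018, base clock

print(f"All-core (base clock) proxy: {i9_9900k / i7_6700k:.2f}x")  # ~1.8x
print(f"Peak single-thread proxy:    {5.0 / 4.2:.2f}x")            # 5.0 vs 4.2 GHz turbo, ~1.19x
print("Intel's quoted node gain:    1.26x (14nm -> 14nm++ slide)")
```

Under these assumptions, most of the multithreaded gain comes from the doubled core count rather than from the node itself.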
 

TheGiant

Senior member
Jun 12, 2017
748
353
106
Well, no one except Apple has delivered game changers (no, I am not counting Ryzen because it is still behind on IPC and frequency).
Looking forward to what Ice Lake delivers; I hope it won't be another P4-style change.
 

TheELF

Diamond Member
Dec 22, 2012
3,967
720
126
ARM owns mobile and embedded but is crossing over into the desktop/server cpu market. Their licensing model and overall structure ensured their dominance.

It is big on mobile/embedded, and on Linux/macOS, which are hardly niche.
We were talking about desktop/server, where it will be niche: for servers, because of all the alternatives available (Xeon Phi, GPUs, etc.), and for desktop, because the single-threaded performance will be pretty much non-existent.
 

french toast

Senior member
Feb 22, 2017
988
825
136
- What is the emulation penalty of 32-bit Windows software on ARM? 50%?

- Is it possible we see 64-bit emulation to enable AAA games on Windows on ARM?

- How high would a Cortex-A76 (with 4 MB L3) need to clock to make up the emulation penalty compared to Pinnacle Ridge? Is it feasible that the architecture could ever clock high enough to do this at an acceptable TDP? (See the rough sketch below.)

- We have not seen advanced turbo with ARM CPUs; this seems to be a weakness compared to x86 designs IMO, which have lots of headroom to push into and advanced features for this. Does ARM have this kind of capability with its Cortex architecture on ice, with only a higher TDP needed to enable it?

- Apple replacing x86 with ARM is a near certainty IMO. Would Apple changing over encourage enough devs to compile apps for ARM on Windows? (Universal apps, including AAA games.)

- With Intel encroaching into desktop GPUs, will Nvidia look to push a new variant of its Denver ARM architecture into Windows, with a bespoke SoC?

It seems to me ARM will have the architecture to compete in a few years, but it will depend on native software being compiled; if that happens, x86 will take a bruising.

These are the questions I am most interested in moving forward with ARM on Windows. I'm not expecting them all to be answered here; these are just my own thoughts.
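To make the emulation-penalty question concrete, here is a rough sketch that solves perf = IPC x clock x (1 - penalty) for the clock an A76-class core would need to match a Pinnacle Ridge core. The 4.3 GHz reference clock, the 0.9x relative IPC, and the 50% penalty are all hypothetical placeholders, not measured values.

```python
# Rough sketch: how fast would an ARM core need to clock so that x86 code run
# under emulation matches a native x86 core? Model: perf = IPC * clock * (1 - penalty).
# Every number below is a placeholder for illustration, not a measured value.

def required_clock_ghz(x86_ipc: float, x86_clock_ghz: float,
                       arm_ipc: float, emulation_penalty: float) -> float:
    """Clock (GHz) at which the ARM core's emulated throughput matches the x86 core."""
    native_perf = x86_ipc * x86_clock_ghz
    return native_perf / (arm_ipc * (1.0 - emulation_penalty))

# Hypothetical inputs: Pinnacle Ridge core at 4.3 GHz (IPC normalised to 1.0),
# Cortex-A76-class core assumed at ~0.9x that IPC, 50% emulation penalty.
clk = required_clock_ghz(x86_ipc=1.0, x86_clock_ghz=4.3,
                         arm_ipc=0.9, emulation_penalty=0.5)
print(f"Required ARM clock under these assumptions: {clk:.1f} GHz")  # ~9.6 GHz, i.e. not feasible
```

If the penalty really is anywhere near 50%, no plausible clock bump closes the gap; native compilation matters far more than frequency.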

Edit: apologies, posted in the wrong thread.
 
Last edited:

eek2121

Platinum Member
Aug 2, 2005
2,904
3,906
136
I agree, but if you have an 1800X or a 7700K, the 9900K is a solid, worthwhile upgrade; it will last you 3 years comfortably for general desktop use and gaming.
I will be surprised if even Zen 2 offers more than 5-10% more FPS than a 9900K; ditto Ice Lake.
Like I said, it depends on price; nearly every CPU is a great CPU if priced right ;)

If you bought an 1800X, you won't need to upgrade for quite a while. The 7700K is quad-core, so while it will hold out for a bit, I expect it won't hold out as long as any 8-core CPU, especially when the next generation of consoles rolls out.
 

Spartak

Senior member
Jul 4, 2015
353
266
136
Three different 14nm nodes,

14nm for the 6700K (2015), 14nm+ on the 7700K (2016) and 14nm++ for the 8700K (2017).

[Attached image: Intel 14nm / 14nm+ / 14nm++ process characteristics]

How dense can someone be? Of course I mean three/four iterations of the same node. Look up what the definition of a node is. They're all the same node, but the node itself has improved. That's exactly my postulation from the start. Of course you cannot improve your node substantially without iterations... I can't believe I need to spell this out.

26% is already pretty impressive, and the 9900K will add a lot of performance compared to the 2017 version of 14nm++.

Again, people seem to misunderstand what my original point was. It's not a perfect situation, that's bloody obvious. Intel is sleeping on improving its architecture and the 10nm rollout is a disaster. But the team that's improving the 14nm process is doing a hell of a job.
 
Last edited:
  • Like
Reactions: french toast

french toast

Senior member
Feb 22, 2017
988
825
136
If you bought an 1800X, you won't need to upgrade for quite a while. The 7700K is quad-core, so while it will hold out for a bit, I expect it won't hold out as long as any 8-core CPU, especially when the next generation of consoles rolls out.
Well, you can say that about most things; some people are rocking a Sandy Bridge i7 and are more than happy.

For enthusiasts looking for worthwhile upgrades, the 9900K is going to be a massive upgrade over an 1800X, especially for gaming.
Similar for the 7700K, mostly for all-round use.
 

coercitiv

Diamond Member
Jan 24, 2014
6,151
11,677
136
How dense can someone be?
Again, people seem to misunderstand what my original point was.
Don't know how dense AtenRa can be, but I can tell you that you conveniently forgot a few key facts: it has taken them 4 years to get here (Q4 2014 - Q4 2018), starting from an immature 14nm node that made Broadwell look underwhelming. This node was a problem child just like 10nm is; they were probably close to ditching Broadwell, and I can imagine there was immense pressure to deliver something they could work with. All this considered, don't you think 14nm still had a lot of untapped potential even at the Skylake launch 3 years ago?
 
  • Like
Reactions: moinmoin

scannall

Golden Member
Jan 1, 2012
1,944
1,638
136
Don't know how dense AtenRa can be, but I can tell you that you conveniently forgot a few key facts: it has taken them 4 years to get here (Q4 2014 - Q4 2018), starting from an immature 14nm node that made Broadwell look underwhelming. This node was a problem child just like 10nm is; they were probably close to ditching Broadwell, and I can imagine there was immense pressure to deliver something they could work with. All this considered, don't you think 14nm still had a lot of untapped potential even at the Skylake launch 3 years ago?
As you point out, 14nm wasn't all that good on release, and to their credit, they have done a great job polishing it over the last several years. That, however, also causes a problem. If 10nm actually starts to yield well but doesn't clock nearly as well, can they actually release it? Enthusiasts, anyway, will raise all sorts of furor over that if, say (not real numbers, just for discussion), the best clocks are 4.2 GHz.
 
  • Like
Reactions: coercitiv

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
I still think Intel is essentially skipping 10nm and the first volume releases will be 10nm+, or possibly even ++.

I recall hearing long ago that 14nm++ was actually better than the first 10nm.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
I still think Intel is essentially skipping 10nm and the first volume releases will be 10nm+, or possibly even ++.

I recall hearing long ago that 14nm++ was actually better than the first 10nm.
Exactly that, on both counts. As it stands, 10nm was a lose-lose scenario between yields and clocks; they would lose money (in comparison to 14nm++), not even counting implementation costs. Whatever comes out won't be that 10nm process. They might not refer to the first major 10nm launch as 10nm+ or ++, but they are so far into the development of that process that they have to be at least one or two iterations deep. At least with 14nm they were able to correct course before a major launch (even though they held back a little bit), which gave them ample time to continue working out the process. I do wonder what's in store with 10nm. Can they continue to improve it that much after the actual launch? How many years before they basically have to go to another node, and did the time they spent trying to fix 10nm make it harder to hit the next one?
 

coercitiv

Diamond Member
Jan 24, 2014
6,151
11,677
136
If 10nm actually starts to yield well but doesn't clock nearly as well, can they actually release it? Enthusiasts, anyway, will raise all sorts of furor over that if, say (not real numbers, just for discussion), the best clocks are 4.2 GHz.
With the performance monster they seem to have built in CFL 8c/16t, they will certainly have quite a muscle car to beat. Considering the core wars have scored their first casualty in the form of TDP/power consumption, I expect desktop enthusiasts to have very high expectations for performance while simply ignoring power-usage (and efficiency) improvements. Add the rumored solder on the 9900K to the equation, and we'll have the perfect storm.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,478
14,434
136
With the performance monster they seem to have built in CFL 8c/16t, they will certainly have quite a muscle car to beat. Considering the core wars have scored their first casualty in the form of TDP/power consumption, I expect desktop enthusiasts to have very high expectations for performance while simply ignoring power-usage (and efficiency) improvements. Add the rumored solder on the 9900K to the equation, and we'll have the perfect storm.
What casualty are you referring to? My 2990WX, OC'ed (a little) to 3.5 GHz all-core, takes 238 watts from the wall with a 1080 Ti at idle but all 64 threads at 100% load (Rosetta@home), so that sure isn't it.
 
  • Like
Reactions: Drazick

maddie

Diamond Member
Jul 18, 2010
4,722
4,625
136
Exactly that, on both counts. As it stands, 10nm was a lose-lose scenario between yields and clocks; they would lose money (in comparison to 14nm++), not even counting implementation costs. Whatever comes out won't be that 10nm process. They might not refer to the first major 10nm launch as 10nm+ or ++, but they are so far into the development of that process that they have to be at least one or two iterations deep. At least with 14nm they were able to correct course before a major launch (even though they held back a little bit), which gave them ample time to continue working out the process. I do wonder what's in store with 10nm. Can they continue to improve it that much after the actual launch? How many years before they basically have to go to another node, and did the time they spent trying to fix 10nm make it harder to hit the next one?
I keep thinking that Intel needs the increase in density to deliver some relevant IPC increases. No matter how well 14nm++++++++++ can clock, they are stuck on an old design. If my intuition is correct, then they're in a jam. Get 10nm or 10nm+ out and it still might not be an advance, as clocks could be lowered, removing all the IPC improvements.
 
  • Like
Reactions: french toast

coercitiv

Diamond Member
Jan 24, 2014
6,151
11,677
136
What casualty are you referring to?
(Some) Mainstream Intel boards using Z series chipsets and (some) mainstream AMD boards using X series chipsets no longer enforce CPU TDP by default. It has been widely discussed in several threads now.

The chips themselves, whether we're talking CFL, SKL-X, Ryzen, TR2, etc., have extremely good power-management support that would allow them to stick very close to rated TDP, but both Intel and AMD left the power configuration up to mainboard makers. Mobo makers went completely overboard, cuz marketing.
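For readers wondering what "enforcing TDP" means in practice, below is a minimal, simplified sketch of Intel-style package power limits: PL2 (a higher boost budget) is allowed for roughly tau seconds, after which the chip is held to PL1, nominally the TDP. Real firmware uses a running-average (RAPL) window rather than a simple timer, and the 95 W / 120 W / 28 s defaults here are illustrative assumptions; the point is only to show what changes when a board effectively removes the limits.

```python
# Simplified model of Intel-style package power limits: PL2 (boost budget) is
# allowed for roughly tau seconds of heavy load, after which the package is
# held to PL1 (nominally the TDP). Real hardware uses a running-average (RAPL)
# window; this timer-based version and all numbers are illustrative only.

def allowed_package_power(t_seconds: float, pl1_w: float = 95.0,
                          pl2_w: float = 120.0, tau_s: float = 28.0) -> float:
    """Power budget (watts) the CPU may draw t seconds into a sustained load."""
    return pl2_w if t_seconds < tau_s else pl1_w

# Stock-like settings vs. a board that effectively removes the limits.
for t in (5, 30, 300):
    stock = allowed_package_power(t)                                  # TDP enforced after ~28 s
    unlocked = allowed_package_power(t, pl1_w=4096.0, pl2_w=4096.0)   # "unlimited" board defaults
    print(f"t={t:>3}s  stock budget={stock:>4.0f} W   unlocked board={unlocked:>4.0f} W")
```

With stock limits the chip falls back to its rated TDP after the boost window; with the limits raised it simply never does, which is where the "ignored TDP" numbers in reviews come from.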
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
I keep thinking that Intel needs the increase in density to deliver some relevant IPC increases. No matter how well 14nm++++++++++ can clock, they are stuck on an old design. If my intuition is correct, then they're in a jam. Get 10nm or 10nm+ out and it still might not be an advance, as clocks could be lowered, removing all the IPC improvements.
A slower clock plus improved IPC would theoretically be a good benefit, as you'd get the same or better performance at lower power levels. It would be great for mobile, anyway.
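As a toy illustration of that trade-off, the sketch below uses perf proportional to IPC x clock and dynamic power proportional to C x V^2 x f. The +10% IPC, -10% clock, and -8% voltage deltas are invented purely to show the shape of the argument; whether 10nm actually delivers the voltage/efficiency gain is exactly what is debated in the replies below.

```python
# Toy model of "lower clock + higher IPC": performance ~ IPC * f, and dynamic
# power ~ C * V^2 * f. All deltas below are invented for illustration only.

def perf(ipc: float, freq_ghz: float) -> float:
    return ipc * freq_ghz

def dyn_power(freq_ghz: float, volts: float, cap: float = 1.0) -> float:
    return cap * volts ** 2 * freq_ghz

# Hypothetical 14nm part vs. a 10nm part with +10% IPC, -10% clock, -8% voltage.
old_perf, old_power = perf(1.00, 5.0), dyn_power(5.0, 1.30)
new_perf, new_power = perf(1.10, 4.5), dyn_power(4.5, 1.20)

print(f"Performance ratio:   {new_perf / old_perf:.2f}x")   # ~0.99x -> roughly flat
print(f"Dynamic power ratio: {new_power / old_power:.2f}x") # ~0.77x -> meaningfully lower
```

If the voltage reduction does not materialise, the power term barely moves and the result is the stagnation scenario described in the next post.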
 

maddie

Diamond Member
Jul 18, 2010
4,722
4,625
136
A slower clock plus improved IPC would theoretically be a good benefit, as you'd get the same or better performance at lower power levels. It would be great for mobile, anyway.
What are you on about? Slower clocks on 14nm can't be compared to slower clocks on 10nm.

The slower clocks I'm talking about are when they migrate to 10nm. You're assuming that it's more efficient, or at least equivalent. It might not be, as Intel has used every trick to increase the efficiency of 14nm+++++++++++++.

You might get the same power, slower clocks, and increased IPC.
Result?
Stagnation.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
At this point, I think it's safe to say that Intel's 14nm++ is more efficient than their current 10nm efforts.