Speculation: i9-9900K is Intel's last hurrah in gaming


Will Intel lose its gaming CPU lead in 2019?



Mopetar

Diamond Member
Jan 31, 2011
If TSMC barely has the capacity for AMD right now (along with Apple and Huawei), how on Earth would you suggest that they'd also be able to supply Intel? Bear in mind that AMD's own needs are only going to increase once the next-gen consoles from Sony and Microsoft arrive on 7nm as well.

If Intel needed to fab with someone else, they'd probably just buy the technology and install it at their own facilities. Process maturity will allow for greater capacity, but unless TSMC builds additional facilities, or purchases a fab from someone else and retrofits it with their own technology, they wouldn't be able to handle more than a tiny fraction of Intel's business.
 

amrnuke

Golden Member
Apr 24, 2019
If Intel needed to fab with someone else, they'd probably just buy the technology and install it at their own facilities. Process maturity will allow for greater capacity, but unless TSMC builds additional facilities, or purchases a fab from someone else and retrofits it with their own technology, they wouldn't be able to handle more than a tiny fraction of Intel's business.

They have a lot invested in their 10nm process, which it seems is more complicated than the 7nm process TSMC and Samsung are using, and is the first to use cobalt interconnects, etc. IMO they'd be stupid to just drop what they felt was a better manufacturing route. The delays suck, and as much as I love AMD, I think that when Intel bumps over to 10nm, if they maintain an edge on IPC and bus speed while also cramming more transistors or more compute cores onto each die, they're going to continue to do well against AMD.
 

Mopetar

Diamond Member
Jan 31, 2011
Presumably they've got 10nm working in some capacity now that we're starting to see it appear on roadmaps, but we're not quite sure of how it will perform. The only 10nm chip they've released so far was a disappointment to say the least. Perhaps things have changed with the delays, but previously Intel itself didn't expect performance on 10nm to be as good as the latest refinement of 14nm. It's likely that 10nm will just be a bit of a lame duck for them, but there's no reason to think that they won't be in a much better position by the time their 7nm node comes online.
 

Asterox

Golden Member
May 15, 2012
Who knows, maybe Intel will make a comeback with a kickass 7nm CPU.

(TSMC 7nm, that is.)

No, and for several reasons; one tiny reason is that AMD is no longer asleep. This is a wider topic, and this is not the place to go further into it.

In short, the chance of Intel pulling that off is about the same as the chance of Lance Armstrong doing everything he did before his very famous "special event" all over again.

 

Mopetar

Diamond Member
Jan 31, 2011
Honestly everything about Intel's 10nm so far looks like the sunk cost fallacy at work.

They need something to replace their 14nm node, and 7nm still wouldn't be ready even if 10nm had gone more smoothly. We could argue up and down over whether this is really Intel's 10nm, or whether so much has been replaced that the only thing it has in common is the name, but it doesn't really matter. It's hardly the first time in the company's history that they've had a process node with a rocky start; it's just that 10nm has been especially bad, probably the worst.

The only alternative to fixing it is to cancel it and try to ride things out until they have 7nm ready, and considering they already have production shortages, I'm not particularly sure that's a better solution. Sure, 10nm has had some costly mistakes, but as long as Intel learns from them it's hardly a waste.
 
  • Like
Reactions: CakeMonster

maddie

Diamond Member
Jul 18, 2010
Honestly everything about Intel's 10nm so far looks like the sunk cost fallacy at work.
It might be a fallacy, but damn, it's a powerful force.

Just a little further, fellas, we're almost at the top.
 

moinmoin

Diamond Member
Jun 1, 2017
They need something to replace their 14nm node, and 7nm still wouldn't be ready even if 10nm had gone more smoothly. We could argue up and down over whether this is really Intel's 10nm, or whether so much has been replaced that the only thing it has in common is the name, but it doesn't really matter. It's hardly the first time in the company's history that they've had a process node with a rocky start; it's just that 10nm has been especially bad, probably the worst.

The only alternative to fixing it is to cancel it and try to ride things out until they have 7nm ready, and considering they already have production shortages, I'm not particularly sure that's a better solution. Sure, 10nm has had some costly mistakes, but as long as Intel learns from them it's hardly a waste.
All true, but the biggest mistake has to be still proceeding with the planned allocation of fabs (which is now being partially reverted), which resulted in the lack of 14nm capacity and the current chip shortage. Intel had been overly optimistic, without any fail-safe at all. They are finally, slowly, adapting (like making their designs independent of their nodes), but it's all coming plenty late.
 
  • Like
Reactions: Tlh97

DrMrLordX

Lifer
Apr 27, 2000
If Intel needed to fab with someone else, they'd probably just buy the technology and install it at their own facilities. Process maturity will allow for greater capacity, but unless TSMC builds additional facilities, or purchases a fab from someone else and retrofits it with their own technology, they wouldn't be able to handle more than a tiny fraction of Intel's business.

I've been thinking about that for a while. Intel has a LOT of fabs. If GF can license processes from Samsung, then certainly Intel could license one from TSMC. Especially a mature process that TSMC is planning to render obsolete in a few months or so.

What do Intel silicon and Lance Armstrong have in common? Doping!

Heyooooooooooo

(apologies to Ed McMahon)
 

Asterox

Golden Member
May 15, 2012
I am looking at the title of this thread, and it is very hard not to post this short gaming comparison: a 3.5GHz 8-core/16-thread Sandy Bridge vs. a 5GHz Skylake i9-9900K.

"Somehow I understand" why he did not lower the Skylake CPU's frequency to the same modest 3.5GHz. With such a move, that video should be labeled "viewer discretion is advised." ;)

 

TheELF

Diamond Member
Dec 22, 2012
All true, but the biggest mistake has to be still proceeding with the planned allocation of fabs (which is now being partially reverted), which resulted in the lack of 14nm capacity and the current chip shortage. Intel had been overly optimistic, without any fail-safe at all. They are finally, slowly, adapting (like making their designs independent of their nodes), but it's all coming plenty late.
About 10nm?
Because it seems more like they were overly pessimistic about 14nm; they thought they would stop being able to sell it after Ryzen came out, but alas: a 4% increase (YoY) in desktop sales for Q1 2019, even with Ryzen 2 close to release.
So yeah, of course they will ride it out as long as possible and continue to sell 14nm; you only bring out a new product if your old product doesn't sell (enough) anymore.
 

B-Riz

Golden Member
Feb 15, 2011
Honestly everything about Intel's 10nm so far looks like the sunk cost fallacy at work.

The funniest / saddest thing I read about Intel's 10nm failures was that they refused help from the tool makers to fix the errors in their process.

The same errors the tool makers helped the other fabs fix on their way to 7nm.
 

TheGiant

Senior member
Jun 12, 2017
I am looking at the title of this thread, and it is very hard not to post this short gaming comparison: a 3.5GHz 8-core/16-thread Sandy Bridge vs. a 5GHz Skylake i9-9900K.

"Somehow I understand" why he did not lower the Skylake CPU's frequency to the same modest 3.5GHz. With such a move, that video should be labeled "viewer discretion is advised." ;)

He didn't test anything.
Calling it CPU testing when 95% of the benchmarking time is spent in running-around scenes... pure incompetence.
 

TheGiant

Senior member
Jun 12, 2017
The funniest / saddest thing I read about Intel's 10nm failures was that they refused help from the tool makers to fix the errors in their process.

The same errors the tool makers helped the other fabs fix on their way to 7nm.
Any links for that? Interesting stuff.
 

RaV666

Member
Jan 26, 2004
First of all, I am an AMD guy. I have a Vega card, and I would already have a Ryzen CPU if AMD had not screwed me over with the FX series; there were supposed to be 10-core and Steamroller parts, but all of it was cancelled and I wanted more performance. But as I have a 5960X now, it would be pointless to get a 1st or 2nd gen Ryzen.
And I am getting 3rd gen Ryzen, be it TR3 or Ryzen itself.
I read the first post and the poll, and I think they are somewhat misleading (especially that Forza screenshot).
As it stands, AMD already has good gaming CPUs for something like 90% of people. Intel gets ahead, however, once you eliminate the GPU bottleneck, so from roughly a 1080 Ti upward and with 144Hz screens it has a real advantage, and a pretty substantial one, up to 25% or so.
So if you are in that high-performance GPU territory (which most people are not, even if they have a 2080 Ti but, for example, play at 4K), Intel is the way to go.
But the topic paints it as if they're doomed or something. Even if we assume a pretty optimistic situation where AMD gets its 8-core/12-core parts (games don't really care about more than 8 cores, and even that's stretching it) to higher clocks and gains 10% "IPC" in gaming situations, they would at best match the 9900K. And there's still the memory: Intel goes far beyond 4GHz, AMD can't do that, and even if they get to 4000MT/s RAM, it's still slower.
On top of that, Intel is already releasing something faster. Yeah, it's going to be hot, but Comet Lake 10C is coming; never mind the cores, they will also get a bigger L3 cache. And games are all about latency. Intel's best get around 35ns, Ryzens do 60ns at best, and there's very little chance Ryzen 3000 with its decoupled IO will get much better.
The reality is they should still have higher clocks, lower latencies and faster RAM support, and that's a recipe for a fast gaming CPU.
I don't really care; I want more cores, because gaming performance for me is going to be the same with either one (I prefer higher resolutions/details to 144Hz+ gaming, and I still have a 60Hz screen). But I really doubt AMD is going to "dominate" Intel in gaming.
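To put a number on the "at best match the 9900K" reasoning above, here is a minimal back-of-the-envelope sketch that treats single-thread gaming performance as roughly IPC × clock. The +10% IPC figure and the 4.5GHz boost clock for the hypothetical Zen 2 part are placeholders, not real specs, and the model deliberately ignores the memory-latency gap discussed above:

```python
def perf(ipc: float, clock_ghz: float) -> float:
    """Crude single-thread score: performance ~ IPC x clock (ignores memory latency, boost behaviour, etc.)."""
    return ipc * clock_ghz

# Placeholder numbers purely for illustration -- not measured IPC or confirmed clocks.
i9_9900k   = perf(ipc=1.00, clock_ghz=5.0)   # normalise the 9900K's gaming IPC to 1.0
zen2_guess = perf(ipc=1.10, clock_ghz=4.5)   # assume +10% IPC and a hypothetical 4.5 GHz boost

print(round(zen2_guess / i9_9900k, 2))  # 0.99 -> roughly a tie, i.e. "at best match the 9900K"
```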
 
  • Like
Reactions: Tlh97

sxr7171

Diamond Member
Jun 21, 2002
I posted this in the gaming forum, but seriously, Shadow of the Tomb Raider even in the benchmark is CPU limited for me.

This is on a 7700K at 5GHz with a 2080 Ti at 4K. The CPU render time is showing frame times in the 20ms range in the third scene, so I am CPU limited in parts to under 60fps.

The game gives you graphs so you can see it.

How can this be? The game runs on consoles.

Also, let's not even begin to mention AC: Origins or Odyssey. It seems no CPU on earth can handle those titles. I'm talking about gaming at 4K.

Apparently all this gets worse if using DX11. How much CPU do we really need? Can we even get enough?
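If it helps to sanity-check those frame-time figures, the conversion from per-frame time to frame rate is just 1000 divided by the milliseconds; a quick sketch:

```python
def fps(frame_time_ms: float) -> float:
    """Convert a per-frame time in milliseconds to frames per second."""
    return 1000.0 / frame_time_ms

print(round(fps(16.7), 1))  # ~59.9 -> roughly the budget you must stay under for 60 fps
print(round(fps(20.0), 1))  # 50.0  -> a 20 ms CPU render time caps the game below 60 fps
```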
 

Guru

Senior member
May 5, 2017
Most newer games are actually performing better on Ryzen 2000 CPUs than on Intel's, as devs take advantage of more cores and implement AMD-based optimizations.
 

TheELF

Diamond Member
Dec 22, 2012
I posted this in the gaming forum, but seriously, Shadow of the Tomb Raider even in the benchmark is CPU limited for me.

This is on a 7700K at 5GHz with a 2080 Ti at 4K. The CPU render time is showing frame times in the 20ms range in the third scene, so I am CPU limited in parts to under 60fps.

The game gives you graphs so you can see it.

How can this be? The game runs on consoles.

Also, let's not even begin to mention AC: Origins or Odyssey. It seems no CPU on earth can handle those titles. I'm talking about gaming at 4K.

Apparently all this gets worse if using DX11. How much CPU do we really need? Can we even get enough?
http://advances.realtimerendering.com/destiny/gdc_2015/Tatarchuk_GDC_2015__Destiny_Renderer_web.pdf
This is how games used to run multithreaded.
Each core or thread did only as much work as it needed to, so the better your CPU, the more idle resources would be left over for multitasking.




This is what they do now. Deferred means they run the same things on all cores just so they can get the freshest frame from the GPU; it doesn't matter how much or how little CPU power your CPU actually needs to reach the framerate you want, all your cores will be running at full load all the time.
Look at the cinematics of Origins, since you mentioned it: they are locked to 30FPS, and even with just two characters in a closed space you still get 100% CPU usage.
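A minimal, engine-agnostic sketch of the contrast being described here (not how any specific engine is actually implemented): a paced loop that sleeps away its unused frame budget versus a loop that keeps redoing work so the freshest possible state is always on hand. A single worker is shown for brevity; a real engine would spread this across a thread pool.

```python
import time

FRAME_BUDGET = 1.0 / 30.0  # a 30 fps target, like the locked cinematics mentioned above

def frame_work():
    # Stand-in for one frame's worth of game work: a short burst of busy computation.
    total = 0
    for i in range(50_000):
        total += i * i
    return total

def paced_loop(duration=2.0):
    """Older job-style pacing: do the frame's work, then hand the leftover budget back to the OS.
    CPU load scales with how much work the frame actually needs."""
    end = time.monotonic() + duration
    while time.monotonic() < end:
        start = time.monotonic()
        frame_work()
        leftover = FRAME_BUDGET - (time.monotonic() - start)
        if leftover > 0:
            time.sleep(leftover)  # the core idles until the next frame is due

def busy_loop(duration=2.0):
    """'Freshest frame' behaviour described in the post: never wait, keep redoing the work so the
    newest possible state is ready whenever a frame is presented. The core sits near 100% load
    even though the presented rate is still capped at 30 fps."""
    end = time.monotonic() + duration
    last_present = time.monotonic()
    while time.monotonic() < end:
        frame_work()  # redo work instead of sleeping
        now = time.monotonic()
        if now - last_present >= FRAME_BUDGET:
            last_present = now  # "present" a frame; the rate the player sees is unchanged

if __name__ == "__main__":
    paced_loop()  # CPU usage stays low
    busy_loop()   # one core pinned near 100% for the same on-screen frame rate
```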

 
  • Like
Reactions: Tlh97 and sxr7171

sxr7171

Diamond Member
Jun 21, 2002
http://advances.realtimerendering.com/destiny/gdc_2015/Tatarchuk_GDC_2015__Destiny_Renderer_web.pdf
This is how games used to run multithreaded.
Each core or thread did only as much work as it needed to, so the better your CPU, the more idle resources would be left over for multitasking.




This is what they do now. Deferred means they run the same things on all cores just so they can get the freshest frame from the GPU; it doesn't matter how much or how little CPU power your CPU actually needs to reach the framerate you want, all your cores will be running at full load all the time.
Look at the cinematics of Origins, since you mentioned it: they are locked to 30FPS, and even with just two characters in a closed space you still get 100% CPU usage.



I see now. Finally it all makes sense. I couldn’t understand that kind of crazy CPU usage.

With Shadow of the Tomb Raider I am in fact CPU limited on the CPU render side, and it gets even worse if I enable ray-traced shadows. The benchmark shows frame times of 20ms without ray tracing and 26-28ms with ray-traced shadows; these are CPU render times. CPU game stays under 16ms. The GPU also stays under 16ms without ray tracing, but with ray tracing it can also hit 28ms.


These CPU render limitations occur in cities with lots of people to render.

In AC: Origins, performance issues also occur in cities. In that case I'll accept that the GPU is the limiting factor until I test it again.


I just tested it. The CPU frame time very rarely exceeds 16ms, maybe once per benchmark run, and no single core ever goes beyond 82%. I can safely say I am not CPU limited in that game.


Apart from possibly the Tomb Raider issue, I am convinced I'm not CPU limited. This also means that if AMD cannot demonstrate low latency then I'll stick with Intel. It also means that the next-gen NUC with an x16 slot would make a perfectly good gaming machine for me, especially with a third-party cooling case.
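One rough way to read those per-stage numbers: with the stages pipelined, the slowest of CPU game, CPU render, and GPU sets the frame time you actually get. A small sketch of that idea, using the approximate figures above (real frame pacing is of course more complicated):

```python
def effective_fps(stage_times_ms):
    """Rough model: with the stages pipelined, the slowest stage dictates the frame rate."""
    return 1000.0 / max(stage_times_ms.values())

# Approximate figures from the post above (Shadow of the Tomb Raider benchmark).
no_rt   = {"cpu_game": 16, "cpu_render": 20, "gpu": 16}
with_rt = {"cpu_game": 16, "cpu_render": 28, "gpu": 28}

print(round(effective_fps(no_rt)))    # 50 -> the 20 ms CPU render time is the bottleneck
print(round(effective_fps(with_rt)))  # 36 -> CPU render and GPU both stretch to ~28 ms
```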
 

Guru

Senior member
May 5, 2017
Forza Horizon 4, Forza Motorsport 7, BF1, BF5, Far Cry 5, etc... Again, it varies from game to game, and Intel's CPUs do have a clock speed advantage, but more and more games are starting to utilize more cores and it's starting to have an effect.

If you look back at older games like Fallout 4, Batman: Arkham Asylum, Metro Redux, Project Cars, Far Cry 4, etc., there is a clear trend of games utilizing more cores over time and performing better and better on the Ryzen platform.

Again, Intel still has the lead, no one is denying that, and they will keep it as long as AMD's memory latency is higher, so hopefully AMD is able to address that with the new chiplet design.
 
  • Like
Reactions: Tlh97