
Speculation: i9-9900K is Intel's last hurrah in gaming


Will Intel lose its gaming CPU lead in 2019?

  • Total voters: 184
  • Poll closed.

TheELF

Diamond Member
Dec 22, 2012
3,117
322
126
And the comparison is flawed. If you run a DOS program from 2000 it will run incredibly fast. It is the same for your program, Fighter Maker.
Isn't that what people want?! I know I want incredibly fast games.
I wonder, if a thread stalls, whether the measuring program you use is capable of detecting that.
It measures IPC, that is, the number of instructions that get retired. A stall is nothing: it executes nothing and nothing is being done, so nothing is being retired.
A stall will still show up in CPU utilization because, although nothing has been done, nothing could be done in its place either; it is a wasted cycle if that was the only thing scheduled.

And even if the lower IPC were due to stalls and context switches, how does that make it better? That is even worse, because it points to very badly written code.
 
May 11, 2008
18,310
829
126
Isn't that what people want?! I know I want incredibly fast games.

It measures IPC, that is, the number of instructions that get retired. A stall is nothing: it executes nothing and nothing is being done, so nothing is being retired.
A stall will still show up in CPU utilization because, although nothing has been done, nothing could be done in its place either; it is a wasted cycle if that was the only thing scheduled.

And even if the lower IPC were due to stalls and context switches, how does that make it better? That is even worse, because it points to very badly written code.

Aside from the question of whether PCM is capable of recognizing thread stalls...

That may be, but the 3D game with low IPC in your example uses a lot of floating-point math.
And (I am speculating) that game from 2002 probably uses a lot of integer math, because the CPUs in 2002 were less powerful and ran small routines from cache to avoid the slow front-side bus used at the time.
So the question then becomes how many cycles on average are needed for a given set of integer instructions and for a given set of floating-point instructions.
I am willing to bet that simple instructions like integer instructions are ideal for reaching a high IPC, and that it is more difficult for floating-point instructions because dependencies arise more often than not.

It then also depends on how the measurement is done.
Of course performance counters are used and instructions are counted, but how do they measure the exact stream of instructions?
I am sure it is explained in one of the sources here:
https://github.com/opcm/pcm

I am very curious.
How does PCM work, exactly?
How can a user attribute a given set of instructions to a particular thread?

As a side note, John Carmack was a master of optimization. If I am not mistaken, he was very good at developing algorithms that mainly used bitwise operations such as AND, OR, XOR and shifts to build integer 3D routines that did the same work as pure floating-point math.
https://en.wikipedia.org/wiki/Fast_inverse_square_root
It is not unlikely that many games in the 2002 timeframe made use of these tricks.
Now that there is a lot more brute force available and image quality has gone up enormously, the better accuracy of pure floating point is needed.
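The trick behind that Wikipedia link is short enough to quote. This is the widely circulated version from the Quake III Arena source (comments paraphrased, type-punning done with `memcpy` to stay within defined behavior): it approximates 1/sqrt(x) with an integer bit trick plus one Newton-Raphson step, avoiding a floating-point divide and square root entirely.

```c
#include <stdint.h>
#include <string.h>

// Fast inverse square root, as popularized by Quake III Arena.
// Accurate to roughly 0.2% after one Newton-Raphson iteration.
float Q_rsqrt(float number) {
    float x2 = number * 0.5f;
    float y = number;
    uint32_t i;
    memcpy(&i, &y, sizeof(i));        // reinterpret the float's bits
    i = 0x5f3759df - (i >> 1);        // the famous magic constant
    memcpy(&y, &i, sizeof(y));
    y = y * (1.5f - (x2 * y * y));    // one Newton-Raphson refinement
    return y;
}
```

On 2002-era hardware this was far cheaper than `1.0f / sqrtf(x)`; on modern CPUs the dedicated `rsqrtss` instruction has made the trick mostly obsolete.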

And even then, it is all about dependencies and what can run in parallel independently.
You really have to look at which load, store, integer and floating-point instructions can be executed at the same time, and this differs a bit for every CPU.
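That dependency point can be made concrete. The two illustrative functions below (names invented for this sketch, not from any real engine) perform the same additions, but the first forms one long dependency chain where each add must wait for the previous result, while the second keeps four independent accumulators that an out-of-order core can execute side by side, typically retiring more instructions per cycle.

```c
#include <stdint.h>

// One long dependency chain: each add waits on the previous value of s,
// so the core cannot overlap the additions no matter how wide it is.
uint64_t sum_serial(const uint64_t *a, int n) {
    uint64_t s = 0;
    for (int i = 0; i < n; i++) s += a[i];
    return s;
}

// Four independent chains: s0..s3 have no dependencies on each other,
// so a superscalar core can keep several adds in flight per cycle.
uint64_t sum_parallel(const uint64_t *a, int n) {
    uint64_t s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++) s0 += a[i];   // leftover elements
    return s0 + s1 + s2 + s3;
}
```

Both return identical sums; only the achievable IPC differs, which is exactly why the same instruction mix can behave differently on CPUs with different execution widths.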
 

maddie

Diamond Member
Jul 18, 2010
3,286
2,062
136
No matter why or what, the bottom line is still that games only use a very small fraction of the available IPC of a desktop CPU.
If there is any recompiling, it doesn't change a thing: when a game runs 4 or more threads that each use only a fraction of the IPC one core is capable of, something is very wrong.
How do you explain the crappy game using so much more IPC? It's made with 2D Fighter Maker 2002, a game engine from 2002, so we can be pretty sure it runs the most generic, most crappy code in existence, yet it's capable of using all the IPC of a core because it was made for PC and PC only.
There is no other explanation I can think of than that modern game engines produce code that is meant to run on very weak (low-IPC) cores, be it Jaguar or ARM or whatever.
Strangely, I always thought the opposite was true. The older engines needed to extract as much as possible from the CPU, seeing how much weaker those CPUs were. I'm still amazed by the early gaming code. The newer engines introduced a lot of easier-to-design changes that were not necessarily the best performing but allowed quicker overall development. It relied on processors improving, just as code-size bloat relied on ever-larger RAM modules.

Sure you're not constructing a fictional past to prove your argument?

Edit: I saw after posting William made the relevant comments.
 

Carfax83

Diamond Member
Nov 1, 2010
5,885
577
126
One of the reasons they went with x86 cores was to not have to port the games anymore (at least the CPU part), and that is what is happening: we are running Jaguar-constrained-IPC code on hundreds of dollars' worth of CPU and wondering why we get far fewer FPS than we used to get years ago.
This is news to me. I'm getting way higher framerates than the console versions on my machine. Just one example is Doom. On the consoles it doesn't hold a steady 60 FPS even with adaptive resolution, but on my machine, I can easily hit the framerate cap at 200 FPS on max details at 1440p.

On Wolfenstein TNC, the 200 FPS cap was removed with the updated IdTech 6 engine and the framerates can easily hit over 200 FPS at max settings whereas none of the consoles can hit a steady 60 FPS.
 

mopardude87

Diamond Member
Oct 22, 2018
3,348
1,518
96
The lack of faith in AMD is disturbing but fairly warranted at this point. For the most part they are officially the budget CPU brand. Hoping they pull some magic in both the CPU and GPU departments and offer some solid competition. At least Ryzen offers playable frame rates with their 1600-and-up chips, and at a good price too. I almost bought one, but sitting with a 144 Hz monitor I really need the IPC advantage of the i5 8400 to get me closer to that goal.

I did little to no reading on the i9-9900K review-wise; does it offer anything for games, or would the i7 8700K be just as good?
 

epsilon84

Senior member
Aug 29, 2010
995
704
136
The lack of faith in AMD is disturbing but fairly warranted at this point. For the most part they are officially the budget CPU brand. Hoping they pull some magic in both the CPU and GPU departments and offer some solid competition. At least Ryzen offers playable frame rates with their 1600-and-up chips, and at a good price too. I almost bought one, but sitting with a 144 Hz monitor I really need the IPC advantage of the i5 8400 to get me closer to that goal.

I did little to no reading on the i9-9900K review-wise; does it offer anything for games, or would the i7 8700K be just as good?
It's kinda like the challenger trying to usurp the reigning champion - the odds are always against the challenger. AMD has to prove it can provide the necessary gains in IPC, frequency and latency reduction to match, let alone leapfrog the 9900K. By my estimates, that would require a +15% to 20% uplift in all 3 metrics to draw level, though you can argue IPC is tied to latency anyway, particularly in a gaming sense - lower latency will invariably improve 'gaming IPC'.

The 9900K is only marginally better than the 8700K in games, mostly due to higher stock clocks and a larger L3 cache - I don't think the extra cores do much for the majority of games.
 

naukkis

Senior member
Jun 5, 2002
353
183
116
And even if the lower IPC were due to stalls and context switches, how does that make it better? That is even worse, because it points to very badly written code.
You want more performance from the whole CPU, not more IPC from one core. If you split a routine from one thread across four, it's inevitable that IPC per core goes down, as your routine has to be synced between four threads. What matters is that this routine can still be executed faster than with one thread. Also, when optimizing for SIMD, IPC goes down but performance goes up.

Optimizing code to take advantage of many cores is difficult, but your solution of going back to single-threaded code isn't a solution either.
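A minimal sketch of that trade-off (illustrative names, plain pthreads): splitting one summation across four threads adds coordination work, such as thread creation, joining, and the final merge, that counts against per-core efficiency even when the total wall-clock time improves on a multi-core CPU.

```c
#include <pthread.h>
#include <stdint.h>
#include <stddef.h>

#define NTHREADS 4

struct slice {
    const uint64_t *a;
    int lo, hi;            // half-open range [lo, hi) for this thread
    uint64_t partial;      // this thread's partial sum
};

static void *worker(void *p) {
    struct slice *s = p;
    uint64_t acc = 0;
    for (int i = s->lo; i < s->hi; i++) acc += s->a[i];
    s->partial = acc;
    return NULL;
}

// Split the sum across NTHREADS threads. The creation, join and merge
// steps are pure overhead from one core's point of view, which is why
// per-core IPC drops even as total throughput rises.
uint64_t sum_threaded(const uint64_t *a, int n) {
    pthread_t tid[NTHREADS];
    struct slice sl[NTHREADS];
    int chunk = n / NTHREADS;
    for (int t = 0; t < NTHREADS; t++) {
        sl[t].a = a;
        sl[t].lo = t * chunk;
        sl[t].hi = (t == NTHREADS - 1) ? n : (t + 1) * chunk;
        sl[t].partial = 0;
        pthread_create(&tid[t], NULL, worker, &sl[t]);
    }
    uint64_t total = 0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);   // the synchronization cost
        total += sl[t].partial;
    }
    return total;
}
```

The result is identical to a single-threaded sum; what changes is where the cycles go, and a per-core IPC counter would charge the join and merge work against each core.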
 

DrMrLordX

Lifer
Apr 27, 2000
16,322
5,251
136
The lack of faith in AMD is disturbing but fairly warranted at this point. For the most part they are officially the budget CPU brand.
Sorry, but that's the exact opposite of what they are right now. AMD's low-end CPU business may as well not exist. Good luck getting AMD in a commodity OEM box. AMD is all about the server and workstation market, with mid-to-high-end desktop as an afterthought.
 

PotatoWithEarsOnSide

Senior member
Feb 23, 2017
664
700
106
I agree.
At basement prices you cannot find an AMD product that would be recommended. Admittedly, they don't have the most expensive parts aimed at performance-oriented users, but not charging exorbitant prices does not equate to being the budget brand; otherwise everyone bar 9900K or HEDT owners constitutes a budget user.
AMD's product stack is definitely at the top end of mainstream. There's a whole bunch of folk hoping that they would cater to the lower end too, especially with a competitive APU; any Ryzen/Vega APU would wipe the floor with Intel if only they could a) produce it in quantity at a competitive price, and b) not be pushed out of OEMs by anti-competitive practices.
 

TheELF

Diamond Member
Dec 22, 2012
3,117
322
126
Strangely, I always thought the opposite was true. The older engines needed to extract as much as possible from the CPU, seeing how much weaker those CPUs were. I'm still amazed by the early gaming code. The newer engines introduced a lot of easier-to-design changes that were not necessarily the best performing but allowed quicker overall development. It relied on processors improving, just as code-size bloat relied on ever-larger RAM modules.

Sure you're not constructing a fictional past to prove your argument?

Edit: I saw after posting that William made the relevant comments.
I was going with the idea that hand-crafted code would be better than code from a game engine; I was also betting on the idea that coding in the last 16 years would have improved at least somewhat, hence the code from the game engine would be generic and unoptimized.
Also, what's with the "needed"? "The older engines needed to extract as much as possible from the CPU." What is this? So today we don't care if games run terribad because we can afford to overspend on very expensive hardware? Either you have optimized code or you don't, and today's console games don't have optimized code, be it due to low IPC or otherwise bad coding.

This is news to me. I'm getting way higher framerates than the console versions on my machine. Just one example is Doom. On the consoles it doesn't hold a steady 60 FPS even with adaptive resolution, but on my machine, I can easily hit the framerate cap at 200 FPS on max details at 1440p.

On Wolfenstein TNC, the 200 FPS cap was removed with the updated IdTech 6 engine and the framerates can easily hit over 200 FPS at max settings whereas none of the consoles can hit a steady 60 FPS.
Is that on your CPU in the signature?
i7 6900K @ 4.3GHz
Are you proud that 4 times the clocks give you 4 times the FPS? That right there means there is zero difference between the cores (the IPC the threads are using) other than clocks.
At least no one can tell just from those examples. Hey, why don't you use affinity to let the game only use 6 cores, and power settings to limit execution to ~1GHz, and tell us what kind of FPS you get then? Will you still outperform the Jaguar?
You want more performance from whole cpu not more ipc from one core. If you split routine from one thread to four it's inevitable that IPC per core is going down as your routine has to be synced between four threads. What matter that this routine can still be executed faster than with one thread. Also when optimizing for SIMD IPC goes down but performance goes up.

Optimizing code to take advantage from many cores is difficult but your solution to go back to one threaded code isn't solution either.
No, I don't want to go back to single-threaded code, but if each thread runs so little code, you can join code to run on fewer cores; that way, if you have more cores you will get even more performance, or at least more idle CPU to run your background tasks. What's wrong with that?
 

mopardude87

Diamond Member
Oct 22, 2018
3,348
1,518
96
Sorry, but that's the exact opposite of what they are right now. AMD's low-end CPU business may as well not exist. Good luck getting AMD in a commodity OEM box. AMD is all about the server and workstation market, with mid-to-high-end desktop as an afterthought.
You're right, for sure; what I meant was in raw performance for gaming. The 1600 is priced very well and competes, sure, but at the end of the day gamers usually recommend a CPU over twice the cost because it simply offers many more frames. I am sure that at the moment, for 4K gaming, a 1600 and a 9900K will offer the same experience due to the GPU being the bottleneck, unless you SLI, in which case I figured you're running the Intel anyway.
 

mopardude87

Diamond Member
Oct 22, 2018
3,348
1,518
96
It's kinda like the challenger trying to usurp the reigning champion - the odds are always against the challenger. AMD has to prove it can provide the necessary gains in IPC, frequency and latency reduction to match, let alone leapfrog the 9900K. By my estimates, that would require a +15% to 20% uplift in all 3 metrics to draw level, though you can argue IPC is tied to latency anyway, particularly in a gaming sense - lower latency will invariably improve 'gaming IPC'.

The 9900K is only marginally better than the 8700K in games, mostly due to higher stock clocks and a larger L3 cache - I don't think the extra cores do much for the majority of games.
So pretty much we are at a point where we can all settle down with an i5 8600K and just wait for Intel to offer a platform with a big enough IPC boost to warrant an upgrade? So it would be like me, after 7 years, moving from an i5 with 4 cores and DDR3 to the 6-core and DDR4. I saw incremental upgrades year after year, right up until all the praise for the i5 8400 and i5 8600K chips. You could thank games like BF1 for finally pushing the envelope after all these years.
 

Markfw

CPU Moderator, VC&G Moderator, Elite Member
Super Moderator
May 16, 2002
20,510
8,379
136
Sorry, but that's the exact opposite of what they are right now. AMD's low-end CPU business may as well not exist. Good luck getting AMD is a commodity OEM box. AMD is all about the server and workstation market, with mid-to-high-end desktop as an afterthought.
I don't see that. The 2200U and 2400 are quite capable; not sure how you can overlook those. Not to mention the rest of the low-end Ryzen chips (the 2200G and 2600, just to name a couple).
 

Carfax83

Diamond Member
Nov 1, 2010
5,885
577
126
Is that on your CPU in the signature?
i7 6900K @ 4.3GHz
Are you proud that 4 times the clocks give you 4 times the FPS? That right there means there is zero difference between the cores (the IPC the threads are using) other than clocks.
At least no one can tell just from those examples. Hey, why don't you use affinity to let the game only use 6 cores, and power settings to limit execution to ~1GHz, and tell us what kind of FPS you get then? Will you still outperform the Jaguar?
Now you're shifting the goalposts. You implied earlier that console ports perform worse than they did years ago, which is the exact opposite of my experience. Since the PS4 and Xbox One hit the scene, game performance has increased significantly compared to how things were back in the Xbox 360 and PS3 days.

I gave you two examples of how my PC (yes it has a 6900K @ 4.2ghz) is able to hit very high FPS in games like Doom and Wolfenstein TNC, where the console versions cannot even maintain a stable 60 FPS. In my opinion, the biggest factors for game performance on PC are:

1) The type of 3D engine the game uses. Some engines are just much better than others.

2) The graphics API

3) The skill at which the renderer is implemented

Those three factors count the most. Even if I downclocked my CPU to 1.2ghz in Doom or Wolfenstein TNC, I'd still be able to maintain at least 60 FPS because those games use Vulkan and the implementation is amongst the very best in the industry. Doom and Wolfenstein 2 would simply scale the renderer across all (or nearly all) 16 threads and with the low overhead, shouldn't have a problem hitting and maintaining the 60 FPS threshold.

Now compare that with say Crysis, which used DX10 and only a single thread for all the rendering. I'd need a Skylake class CPU or better running at around 5ghz to guarantee 60 FPS at all times most likely.
 

mopardude87

Diamond Member
Oct 22, 2018
3,348
1,518
96
I don't see that. The 2200U and 2400 are quite capable; not sure how you can overlook those. Not to mention the rest of the low-end Ryzen chips (the 2200G and 2600, just to name a couple).
Yeah, before I bought my budget desktop at the time, which had a G1820, 8GB of RAM and a 120GB SSD with a CX430 PSU, I really gave thought to a 2200G build. It made sense for what I was into playing at the time. But $100 for this tower was good and it had upgrade potential. :p

The build has changed dramatically in the last few months :) It is still changing, and the H81 platform is going buh-bye; it's on Craigslist soon :) Will sell for peanuts, prob, idk.
 

DrMrLordX

Lifer
Apr 27, 2000
16,322
5,251
136
I don't see that. The 2200u and 2400 are quite capable, not sure how you can overlook those. Not the mention the rest of the Ryzen low end chips.(2200g and 2600 just to name a couple)
They have almost no presence in OEM desktop boxes. Try ordering a bunch of cheap AMD office PCs/AiOs for an organization and getting those from Dell or Lenovo. Hard to do. AMD shows up a little more in the laptop space.

Those OEM machines may not interest us, but they are the true "low end," where companies like Intel make bank through volume more so than margin. AMD hasn't really competed in that space seriously since 2016. It shows. Interestingly enough, it's getting hard(er) to get those chips from Intel now thanks to their wafer shortage problems.

You're right, for sure; what I meant was in raw performance for gaming.
Somewhat more true. I still think it's funny that people see an R5 1600 as a cheap, low-end solution to gaming. Oh how the market has changed. Regardless, good luck finding anything from AMD sub-$100 for the DIY gamer market. I think the 2200G and R3 1200 are both right at $100. The 2200G falls in line with the midrange pricing of their older Kaveri lineup (you used to be able to get a Kaveri for less than $100, even one you could OC). Only the 200GE comes in under $100. AMD used to have a whole slate of DIY CPUs in the $40-$100 range. That's mostly gone. AMD's ASP on their DIY lineup is higher, and their ASP on their OEM product offerings is definitely higher (compared to two years ago).

 

mopardude87

Diamond Member
Oct 22, 2018
3,348
1,518
96
Somewhat more true. I still think it's funny that people see an R5 1600 as a cheap, low-end solution to gaming.

I wouldn't say cheap or low-end; I mean, it certainly can work for 4K, where games are usually just GPU-bound no matter what GPU is powering that resolution. It still has a market, and coming in nearly $60 cheaper, that $60 can cover a 250GB Samsung EVO SSD. Kind of a big deal if I was on a budget building a rig. Cheap would be the 4-core AMD chips that, like the 4-core i5, are becoming useless for modern gaming.

It's just max FPS, and someone like me who plays older games finds the i5 8400 more appealing because of the IPC. If I was building a pure BF1-and-onwards gaming box, the R5 1600 would certainly be the choice over the i5 8400 in a second.

Remember years ago when the forums were flooded with E8400 vs Q6600 threads? We are at it again in a sense. Took long enough too.
 

TheELF

Diamond Member
Dec 22, 2012
3,117
322
126
Those three factors count the most. Even if I downclocked my CPU to 1.2ghz in Doom or Wolfenstein TNC, I'd still be able to maintain at least 60 FPS because those games use Vulkan and the implementation is amongst the very best in the industry. Doom and Wolfenstein 2 would simply scale the renderer across all (or nearly all) 16 threads and with the low overhead, shouldn't have a problem hitting and maintaining the 60 FPS threshold.
That's why I said to use affinity to lock the game to 6 cores (or 8 max): my whole point is that we run straight-up Jaguar code at a Jaguar level of IPC, which means that at the same clock speed and same number of threads you should be getting the same FPS as the consoles.
I know that games can scale much better now, and I'm glad that they do, but using tons more CPU for something that should use much less CPU is not something I would call a good thing.
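For anyone who wants to try the affinity experiment, here is a Linux-only sketch of pinning the current process to its first N CPUs (the function name is illustrative; on Windows the same thing can be done from Task Manager's "Set affinity" dialog):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <unistd.h>

// Pin the calling process to CPUs 0..n-1 (Linux-only sketch).
// Returns 0 on success, -1 on failure, mirroring sched_setaffinity.
int pin_to_first_n_cpus(int n) {
    cpu_set_t mask;
    CPU_ZERO(&mask);
    long avail = sysconf(_SC_NPROCESSORS_ONLN);
    if (n > avail) n = (int)avail;       // don't request CPUs we lack
    if (n < 1) n = 1;
    for (int cpu = 0; cpu < n; cpu++) CPU_SET(cpu, &mask);
    return sched_setaffinity(0, sizeof(mask), &mask);  // pid 0 = self
}
```

Launching a game, calling something like this (or using `taskset -c 0-5`) and capping the clocks would be the closest a desktop CPU gets to emulating the console's 6-to-8 Jaguar threads.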
 

sxr7171

Diamond Member
Jun 21, 2002
5,079
39
91
Sorry I know this thread was dead.

Recently I have been thinking about this. I wanted to upgrade my machine since I found out that Shadow of the Tomb Raider is actually CPU-limited in parts.

I looked into the 9900K; it doesn't look like a good proposition. In fact the 8700K is a better fit.

Then I looked at the roadmap. I learned that 10nm isn't coming to the desktop until 2022. This company is going the way of GE. Process lead was all they really had, and that's gone. Any attempt to get into GPUs or wireless modems has been a total failure.

So all these years I never considered AMD. I'd look at a benchmark and see they always lagged. Then I learned about Zen 2. It looks promising. But like every AMD processor it comes with a lot of hype.

This one looks like a winner. For once they have matched and exceeded Intel. Of course we don't know for sure. I also suspect a huge fly in the ointment: while it rules Cinebench, it might not do so well in gaming. There has to be some latency associated with its architecture.

I guess we’ll wait and see. But it’s not looking good for Intel right now.

I use an eGPU in one of my rigs. So I could actually work off a future 10nm laptop CPU in something like an NUC.

Then again the whole world is moving towards laptops and SFF PCs. If Zen 2 and 3 don’t pan out for gaming I suspect we might end up using those H processors for gaming since desktop is going to be on 14nm for 3 more years. How much can they beat a dead horse?

They are moving towards just being a commodity product. Nothing good for gamers or enthusiasts. Just put all the innovations into business laptops and business SFFs. I think gamers will likely not be with Intel if this is how they want to play it.
 
Feb 4, 2009
26,693
7,225
136
I may be *completely* wrong, but my opinion is that Intel "struggles to keep up with the development stride of AMD" not because of incompetence but because, like EVERY large company, they have bureaucracy problems; they are too big and too dumb to do things right.

Until AMD starts whoppin' they' ass, at which point they will pull another C2D miracle out of their ass. By which I mean "out of the R&D department".

TLDR: AMD can put out a better CPU than Intel; they can't "beat" Intel.
Came here to post my thoughts but @DigDog expressed them perfectly
 

HisEvilness

Member
Mar 23, 2019
34
2
16
www.hisevilness.com
It will depend on whether the Windows scheduler gets a proper multicore update so it can use more cores, and whether die stacking ever becomes a thing where they can actually keep things cool in as small a package as possible. It also depends on how efficient each manufacturer's node is; a smaller node does not per se mean it is better.
 

Shmee

Memory and Storage, Graphics Cards
Super Moderator
Sep 13, 2008
4,510
578
126
I would say that the 9700k should be mentioned as the chip to get, not the 9900k. I do suspect that AMD will take the lead though, either with Zen 2 or soon after.
 

NTMBK

Diamond Member
Nov 14, 2011
8,794
1,828
136
Who knows, maybe Intel will make a comeback with a kickass 7nm CPU.

(TSMC 7nm, that is.)
 

PotatoWithEarsOnSide

Senior member
Feb 23, 2017
664
700
106
If TSMC barely has the capacity for AMD right now (along with Apple and Huawei), how on Earth would you suggest that they'd also be able to supply Intel? Bear in mind that even AMD's needs are going to increase, with next-gen consoles from Sony and Microsoft coming on 7nm too.
 
