
Only one of your cores was talking to the GPU

ehume

CEO of Stardock, Brad Wardell, recently said this:
for the past several years, only one of your cores was talking to the GPU, and no one wants to go 'You know, by the way, that multi-core CPU? It was useless for your games.' ... People wonder, saying 'Gosh, doesn't it seem like PC games have stalled? I wonder why that is?' Well, the speed of a single core on a computer has not changed in years. It's been at 3GHz, or 2-something GHz, for years. I mean, that's not the only thing that affects the speed, but you get the idea.
He was touting DX12. Reference.
 
Yup, it's been a big problem for a while now. It's not so apparent on 60Hz monitors, but when you try to push 120+Hz, it isn't long before you realize just how much raw single-threaded CPU power you need.
 
To paraphrase Inigo Montoya: I do not think multithreading means what he thinks it means. Framerates are not limited by the CPU, last I checked, particularly at the resolutions we demand these days. Geometry is not calculated by the CPU as much as it was in the 90s. Correct me if I'm wrong, but I'm thinking we will see a larger performance jump from HBM than from multithreaded "CPU-GPU communication."
 

Framerates can be limited by the CPU, because it's the CPU that tells the GPU what to draw. And since GPUs are so fast these days, communicating with them via a single core definitely hurts performance.

Of course, increasing the GPU burden (i.e. resolution, AA, eye candy, etc.) will offset this, as the GPU takes longer to process that data. But the more powerful the GPU is, the faster it consumes the data, and a weak CPU can struggle to keep up.

That's why there's such a massive gap in performance between Intel's Core i7 series and AMD CPUs in games that are single-threaded. In games that are more multithreaded there's still a large gap, but it's not as prominent.

Case in point:

http://www.gamegpu.ru/images/stories/Test_GPU/strategy/Total_War_ATTILA/test/attila_proz.jpg
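The "fast GPU starved by one CPU core" argument above can be made concrete with a toy model. This is a sketch with invented numbers, not a measurement: frame time is taken as whichever of CPU submission or GPU rendering finishes last.

```python
# Toy model: per-frame time is gated by whichever side finishes last.
# All millisecond figures are made up for illustration.

def fps(cpu_ms, gpu_ms):
    """Frame rate when the slower of CPU submission and GPU rendering
    sets the pace for each frame."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# A fixed 8 ms of single-threaded submission/game logic per frame:
cpu_ms = 8.0

for gpu_ms in (16.0, 8.0, 4.0, 2.0):  # GPU getting progressively faster
    print(f"gpu={gpu_ms:4.1f} ms -> {fps(cpu_ms, gpu_ms):6.1f} fps")

# The output caps at 125 fps: once the GPU drops below 8 ms per frame,
# the single CPU thread becomes the wall, and a faster GPU buys nothing.
```

Note how this also matches the 60Hz vs 120+Hz observation earlier in the thread: at 60 fps the 8 ms CPU cost is hidden, but it makes 120+ fps unreachable.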
 
And since GPUs are so fast these days, communicating with them via a single core definitely impacts performance in a negative way.
It's not a core, it never was a core; it's communicating with them via a single thread that usually runs at around 10%, if even that, of a single core.

Mantle/DX12 will allow more threads to talk to the GPU at once, so you will be able to show more stuff at once. Sure, these threads might run on separate cores, but this will not be mandatory.
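The "more threads talking to the GPU" idea can be sketched in plain Python. This is an illustrative analogy only, with invented names; real DX12/Mantle command lists look nothing like this, but the shape is the same: several threads record work independently, then everything is handed over in one submission.

```python
# Toy sketch of the DX12/Mantle model: several threads record command
# lists in parallel, then a single submission hands them all to the "GPU".
# All names and structures here are invented for illustration.
from concurrent.futures import ThreadPoolExecutor

def record_commands(object_batch):
    # Each worker builds its own command list independently --
    # no shared driver lock, unlike the old single-threaded model.
    return [f"draw({obj})" for obj in object_batch]

scene = [f"mesh_{i}" for i in range(8)]
batches = [scene[i::4] for i in range(4)]  # split the scene across 4 threads

with ThreadPoolExecutor(max_workers=4) as pool:
    command_lists = list(pool.map(record_commands, batches))

# One cheap submit of everything the threads recorded:
submitted = [cmd for cl in command_lists for cmd in cl]
print(len(submitted), "draw calls submitted")
```

As the post says, the threads *may* land on separate cores, but nothing in the model requires it; the win is that recording is no longer serialized behind one thread.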

This CEO definitely needs to hire a tech guy to explain some things to him.

"Well, the speed of a single core on a computer has not changed in years. It's been at 3GHz, or 2-something GHz for years. "
Dude seriously???
 

To be sure, he is referring to the hardware that the middle 80% of his potential TAM is currently sitting on. It is a comment likely drawn from market research, such as Steam surveys, and it speaks to the market he needs to target games to sell into.

If you sell a game today, what percentage of your customers do you think are running <3GHz processors, what percentage are running <4GHz processors, what percentage are running >4GHz processors?
 
This CEO definitely needs to hire a tech guy to explain some things to him.

Funnily enough, I'm pretty sure he writes a significant chunk of his studio's main games' AI code himself.
 
If you sell a game today, what percentage of your customers do you think are running <3GHz processors, what percentage are running <4GHz processors, what percentage are running >4GHz processors?
I think (maybe?) TheELF was questioning why the guy was measuring how good a CPU is by GHz (which might have worked before the Pentium 4 era).
 
He is simplifying things tremendously on purpose to make a point to a non-technical audience. Now, it can be dangerous to simplify, but his point still stands: we've only had roughly a doubling of single-thread CPU performance since 2007, yet our GPUs have far outpaced that.

Now, PassMark is a limited benchmark, but I am using it because it's useful enough to get a general idea and it's easy to search.




PassMark single-thread scores:

0887 - AMD Phenom X4 9950 Quad @ 2.6 GHz
0863 - AMD A8-3800 APU @ 2.4 to 2.7 GHz
0808 - AMD 5350 Quad @ 2.1 GHz
0922 - AMD Phenom II X4 810 @ 2.6 GHz
1188 - AMD FX-6100 Six Core @ 3.6 GHz, turbo to 3.9
1545 - AMD FX-8370 Eight Core @ 4.0 GHz, turbo to 4.3 (pretty much AMD's best CPU in a sane TDP)

0922 - Core 2 Quad @ 2.40 GHz
1132 - Core i5 750 @ 2.66 GHz, turbo to 3.2 GHz
1833 - Core i5 4430 @ 3.0 GHz, turbo to 3.2 GHz


So as you can see, most recent desktop CPUs are in the 800 to 1200 range. Some new Intel quads are faster than the 1200 range, but remember that we are also shifting toward laptop CPUs, so while you may get higher IPC, you see a reduction in performance. For comparison, an i7 3770's single-thread score is 2069 and it turbos to 3.9 GHz, yet the Haswell i7 4510U turbos to 3.1 GHz and gets 1686.

And remember that most people who play games are not running i7s but much lower-end CPUs.
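The "roughly a doubling since 2007" claim can be checked against the PassMark numbers quoted above. A quick ratio calculation, using only the scores already listed:

```python
# Ratios from the PassMark single-thread numbers quoted above,
# with the Core 2 Quad @ 2.40 GHz as the 2007-era baseline.
scores = {
    "Core 2 Quad @ 2.40 GHz": 922,
    "Core i5 750": 1132,
    "Core i5 4430": 1833,
    "AMD FX-8370": 1545,
    "AMD Phenom X4 9950": 887,
}

base = scores["Core 2 Quad @ 2.40 GHz"]
for name, score in scores.items():
    print(f"{name:24s} {score:5d}  ({score / base:.2f}x the Core 2 Quad)")

# The 2013-era i5 4430 lands at ~2x the 2007-era Core 2 Quad --
# roughly the doubling of single-thread performance claimed above.
```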
 
but remember we are also going to laptop cpus so while you may get higher ipc you see a reduction in performance. For comparison an i7 3770 single thread performance is 2069 and it turbos to 3.9 ghz yet the haswell i7 4510u turbos to 3.1 ghz but gets 1686.
Not sure what you are saying here. Generally, laptops with higher-end cards such as the 870M, 880M, 970M and 980M come with quad-core processors. Even my lowly laptop with an old 765M has a quad-core processor. Mobile dual-core i5s such as BDW generally come with lower-specced cards in the 820M to 850M range.
 
Bottlenecks change from frame to frame. Although a game might be 90% GPU bound (arbitrary number for example purposes), it'll still be CPU bound for 9% of those frames and memory bound for the other 1%. Thus, increasing CPU performance would benefit 9% of the frames in the scenario I presented.

Closer to reality, most AAA titles are going to be bound by the CPU >10% of the time. Additionally, I'd posit that most of the latency spikes/frame rate drops we notice are when the CPU is being hammered.

Regardless, bolstering the CPU's capability in frame rendering is of huge benefit. A more powerful CPU lets you run more intensive calculations that were previously too slow to run. Things like DX12 also boost power efficiency a considerable amount.
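The per-frame bottleneck idea above can be simulated. This sketch uses invented frame-time distributions purely to illustrate classifying each frame by which resource took longest:

```python
# Sketch of "bottlenecks change from frame to frame": classify each frame
# by whichever side took longer. Frame-time distributions are invented.
import random

random.seed(1)

def frame_bottleneck(cpu_ms, gpu_ms):
    return "cpu" if cpu_ms > gpu_ms else "gpu"

# 1000 frames: CPU work averages 6 ms (noisy), GPU work averages 9 ms.
frames = [(random.gauss(6, 2), random.gauss(9, 1)) for _ in range(1000)]
cpu_bound = sum(1 for c, g in frames if frame_bottleneck(c, g) == "cpu")

print(f"CPU-bound frames: {cpu_bound / len(frames):.0%}")
# With these made-up distributions the GPU dominates most frames, but a
# noticeable minority are still gated by the CPU -- and a faster CPU only
# helps on exactly those frames, which is where the frame-time spikes live.
```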
 
I have noticed an overarching trend with regards to frame rate minimums, in which both SLI/Crossfire setups and CPUs with weak ST performance tend to fall short relative to their maximums.
 
I agree. In 2008, with the first i7, gamers had access to quad cores with hyperthreading for the first time. Only now, 7 years later, with DX12 and similar APIs, can they really be fully utilized in a gaming environment.
 
It's interesting to go back years and read articles about multithreading, the early dual cores, and then the first quad cores. Last night I read an AnandTech piece about the dual-CPU AMD Dual FX or whatever they called it, which had a lot of interesting things to say about multithreading. It was from 2006, I think, and it said a lot of the things we're still saying today. The attitude toward multi-core and multi-thread was very hopeful.
 
Isn't this pretty resolution dependent? 290X with Mantle on/off only managed about 4% more performance at 4K in recent Anandtech reviews.

DX12 should ease CPU bottlenecks a lot for scenes with many units, but a lot of games these days are essentially dolled-up corridor shooters with lots of GPU-pushing effects; I can't see much gain from DX12 in those kinds of games.
 
It's not only about gains for current systems, but also about how new gaming systems can be built.

A simple way to put it is that DX12 will significantly cut CPU requirements and/or CPU power usage. For desktops that means a lower CPU budget and increased GPU spending; for notebooks it means more TDP available for the GPU (and potentially a bigger GPU).

In both situations, building a new system with DX12 in mind will bring a significant jump in performance due to changes in power/budget balance.
 
It's quite certain we're way over in hyperbole. DX12 affects draw calls, not regular game-related CPU load. The more simplified the game, the bigger the benefit, and Star Swarm isn't exactly a game. On the other front: the more complex the game, the less the benefit.
 
Could you give a few examples of complex games that would see less benefit?

Any game where more of the calculations going on aren't graphics related, for example AI, world simulation and the like: the Total War series, Cities: Skylines, StarCraft, Supreme Commander, Civilization and so on.

I don't really see DX12 giving us better games in terms of gameplay. Just prettier games. But then again, that's also the sole purpose of the API.
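The "complex games benefit less" point is essentially Amdahl's law: if DX12 only accelerates the draw-call/submission share of a frame, the overall speedup is capped by how large that share is. A sketch with illustrative fractions (the 4x submission speedup is an assumption, not a measured figure):

```python
# Amdahl-style sketch: only the submission portion of a frame gets faster,
# so the overall frame-time speedup is bounded by that portion's share.

def frame_speedup(submission_fraction, submission_speedup):
    """Overall speedup when only the submission part of the frame
    is accelerated; the rest (AI, simulation, etc.) is unchanged."""
    rest = 1.0 - submission_fraction
    return 1.0 / (rest + submission_fraction / submission_speedup)

# Assume DX12 makes command submission 4x cheaper:
for frac in (0.5, 0.2, 0.05):  # share of frame spent issuing draw calls
    print(f"submission = {frac:.0%} of frame -> "
          f"{frame_speedup(frac, 4):.2f}x overall")

# A simple scene spending half its frame on draw calls gains ~1.6x overall;
# an AI/simulation-heavy game spending only 5% gains almost nothing.
```

This matches the thread's conclusion: a Star Swarm-style stress test (nearly all submission) shows huge gains, while a Total War or Civilization frame dominated by AI and world simulation barely moves.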
 