Nvidia vs AMD's Driver Approach in DX11


Carfax83
Diamond Member
tamz_msc said:
> Deep breaths... Who said that frequency and bandwidth are the same? What changes when you get faster memory? Bandwidth and latency. Latency improvements plateau after a certain point, but bandwidth continues to increase as long as the memory controller isn't saturated. Changing the memory frequency directly affects memory bandwidth. This is why it is said that you should get faster memory when your applications are memory bound: depending on the application, it may be sensitive to bandwidth, latency, or both.
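To put numbers on the two quantities being argued over, here is a minimal worked sketch; the JEDEC DDR4 bins below are assumed, illustrative examples rather than figures from the thread:

```python
# Peak bandwidth and CAS latency for typical JEDEC DDR4 bins.
# DDR moves 8 bytes per channel per transfer, and the memory clock
# runs at half the transfer rate (double data rate).

def peak_bandwidth_gbs(mts, channels=2):
    """Theoretical peak bandwidth: MT/s x 8 bytes x channel count."""
    return mts * 8 * channels / 1000

def cas_latency_ns(mts, cl):
    """CAS latency in nanoseconds: CL cycles / memory clock (MHz) x 1000."""
    return cl / (mts / 2) * 1000

for mts, cl in [(2133, 15), (2400, 17), (2666, 19), (3200, 22)]:
    print(f"DDR4-{mts} CL{cl}: {peak_bandwidth_gbs(mts):5.1f} GB/s peak, "
          f"{cas_latency_ns(mts, cl):5.2f} ns CAS")

# DDR4-2133 CL15: 34.1 GB/s peak, 14.07 ns CAS
# DDR4-3200 CL22: 51.2 GB/s peak, 13.75 ns CAS
# Bandwidth scales roughly linearly with frequency; latency in
# nanoseconds stays nearly flat, which is the plateau described above.
```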

You never explicitly stated it, but you kind of implied it when you said that:

tamz_msc said:
> It is laughable that you show Fallout 4 to claim that the bigger L3 on the 5960X is what puts it on top, when it is known that Fallout 4 loves memory bandwidth.

Show me one benchmark which shows Fallout 4 benefiting from increased bandwidth. All the benchmarks show Fallout 4 benefiting from memory speed!

tamz_msc said:
> X99 has an inherent advantage here because quad-channel support means that even with lower-frequency memory, you can get more bandwidth than from a dual-channel setup. This is what I mean when I say that Fallout 4 loves memory bandwidth: ultimately, this is what you are changing by getting faster RAM or by moving to X99. I suspect that it may be sensitive to latency as well.
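The channel arithmetic behind that claim, as a small sketch with assumed example kits (a fast dual-channel DDR4-3000 kit versus a slow quad-channel DDR4-2133 X99 configuration):

```python
# Peak bandwidth = MT/s x 8 bytes x channel count (same formula as above).

def peak_bandwidth_gbs(mts, channels):
    return mts * 8 * channels / 1000

dual_fast = peak_bandwidth_gbs(3000, 2)  # fast dual-channel DDR4-3000
quad_slow = peak_bandwidth_gbs(2133, 4)  # slow quad-channel DDR4-2133 on X99
print(f"Dual-channel DDR4-3000: {dual_fast:.1f} GB/s")  # 48.0 GB/s
print(f"Quad-channel DDR4-2133: {quad_slow:.1f} GB/s")  # 68.3 GB/s
# Despite the much lower memory frequency, the quad-channel setup has
# roughly 42% more theoretical bandwidth.
```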

You have no idea whether Fallout 4 loves memory bandwidth; that's just an assumption. No game that I know of, including Fallout 4, responds to the huge amounts of memory bandwidth afforded by the X99 platform. On the other hand, it responds to memory latency, which is measured in nanoseconds and rated in cycles. This has been proven over and over again by many benchmarks.

Also, the 5960X's 20MB L3 cache would reduce the game's dependence on faster RAM compared to CPUs with smaller amounts of L3 cache.

tamz_msc said:
> Faster memory implies more bandwidth, as long as you don't loosen the timings.

It does, but how do you know that the game is benefiting from the bandwidth and not from the latency?

tamz_msc said:
> Oh, you still haven't proved how more than 8MB of L3 affects performance, apart from repeating a textbook statement. I showed you graphs where 2, 4, 6, and 8MB of L3 had an effect, albeit with diminishing returns, proving that it matters more when you have a small amount of it to begin with. Do your part and show me something similar for larger L3 sizes.

I shouldn't have to prove anything. This should be common knowledge among computer enthusiasts. More L3 cache reduces a CPU's dependence on system memory performance, because the CPU can keep more data locally instead of having to go all the way out to system memory, which takes time. Also, the bandwidth and latency afforded by the L3 cache are superior to those of any system memory architecture, regardless of how fast it is.

And there are no benchmarks showing the effect of L3 cache size, because there is no way to disable the L3 cache, or parts of it.
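You can't disable the L3, but its effect can at least be sketched indirectly by timing random reads over working sets that fit inside, versus spill out of, a ~20MB L3 (the 5960X's capacity). A rough illustration in Python, not a rigorous benchmark; numpy's gather adds overhead and the exact numbers vary by machine:

```python
# Time random reads across working sets below and above a ~20MB L3.
import time
import numpy as np

rng = np.random.default_rng(0)

def ns_per_random_read(size_bytes, reads=2_000_000):
    n = size_bytes // 8                   # 8-byte elements
    data = np.zeros(n, dtype=np.int64)
    idx = rng.integers(0, n, size=reads)  # random access pattern
    start = time.perf_counter()
    _ = data[idx].sum()                   # gather dominated by cache/DRAM trips
    return (time.perf_counter() - start) / reads * 1e9

for mb in (2, 8, 16, 64, 256):
    print(f"{mb:4d} MB working set: {ns_per_random_read(mb * 2**20):6.1f} ns/read")
# Expect a visible jump in time per read once the working set no longer
# fits in the L3 and reads start going all the way out to system memory.
```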
 

tamz_msc
Diamond Member
Carfax83 said:
> You have no idea whether Fallout 4 loves memory bandwidth; that's just an assumption. No game that I know of, including Fallout 4, responds to the huge amounts of memory bandwidth afforded by the X99 platform. On the other hand, it responds to memory latency, which is measured in nanoseconds and rated in cycles. This has been proven over and over again by many benchmarks.
For the last time, I'm talking about the inherently large bandwidth of X99 for a reason: I'm saying that increasing memory speed increases its bandwidth, and this is ultimately the metric that determines performance (because the changes in latency from increasing memory speed taper off after a certain point, while peak theoretical bandwidth continues to increase).

An analogy would be fan RPM vs. temperature: the rotational speed of the fan isn't what affects the temperature of the component it is supposed to cool; it is the increased airflow (volume per unit time) caused by the higher rotational speed that lowers the temperature.

Carfax83 said:
> Show me one benchmark which shows Fallout 4 benefiting from increased bandwidth. All the benchmarks show Fallout 4 benefiting from memory speed!

Here is the 6700K with DDR4-2400. Focus on the result at 3.5GHz:

[Image: CPU_02.png]


Now, in this graph, have a look at how the i7-4960X does:

[Image: CPU_01.png]


So with a 0.1GHz deficit but much higher IPC and faster DDR4 (I'm guessing that the DDR3 the 4960X is running is of a lower frequency), why is it that the 6700K at 3.5GHz and the 4960X at 3.6GHz are almost identical? The Ivy Bridge-E part also has a bigger L3 cache, so what gives? The only factor left in the 4960X's favor is its quad-channel DDR3, which gives it a bandwidth advantage over dual-channel DDR4. How else do you explain this result? Fallout 4 cares mostly about single-threaded performance (it doesn't scale beyond 4C/8T), and as this shows, memory bandwidth is what ultimately decides the difference between the i7-6700K and the i7-4960X.
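Plugging assumed numbers into the same peak-bandwidth formula shows the gap this argument leans on; the 4960X's DDR3 speed isn't stated in the charts, so DDR3-1866 here is purely an assumption:

```python
# Theoretical peak bandwidth of the two platforms being compared.

def peak_bandwidth_gbs(mts, channels):
    return mts * 8 * channels / 1000

i7_6700k = peak_bandwidth_gbs(2400, 2)  # dual-channel DDR4-2400 (stated)
i7_4960x = peak_bandwidth_gbs(1866, 4)  # quad-channel DDR3-1866 (assumed)
print(f"6700K, dual-channel DDR4-2400: {i7_6700k:.1f} GB/s")  # 38.4 GB/s
print(f"4960X, quad-channel DDR3-1866: {i7_4960x:.1f} GB/s")  # 59.7 GB/s
# Roughly 55% more theoretical bandwidth for the X79 platform,
# despite its slower individual DIMMs.
```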

Carfax83 said:
> I shouldn't have to prove anything. This should be common knowledge among computer enthusiasts. More L3 cache reduces a CPU's dependence on system memory performance, because the CPU can keep more data locally instead of having to go all the way out to system memory, which takes time. Also, the bandwidth and latency afforded by the L3 cache are superior to those of any system memory architecture, regardless of how fast it is.
>
> And there are no benchmarks showing the effect of L3 cache size, because there is no way to disable the L3 cache, or parts of it.
In other words, a cop-out. You keep parroting that line, claiming it to be common knowledge, when it has never been put to the test. Knock on my door when someone tests this thing against a regular quad-core i7 to see how much of a difference a large L3 really makes in games.
 

Carfax83
Diamond Member
tamz_msc said:
> For the last time, I'm talking about the inherently large bandwidth of X99 for a reason: I'm saying that increasing memory speed increases its bandwidth, and this is ultimately the metric that determines performance (because the changes in latency from increasing memory speed taper off after a certain point, while peak theoretical bandwidth continues to increase).

At what point do the improvements to latency taper off? So basically you're telling me that a CPU will continue to increase in performance indefinitely, provided that the bandwidth also increases?

Why don't you actually buy a high-bandwidth platform before you spout such nonsense? The number of programs that can actually use that much bandwidth is nearly zero among consumer applications. Off the top of my head, only compression software like 7-Zip and WinRAR benefits from high levels of bandwidth. Most desktop applications, games included, benefit more from latency reductions, because modern CPUs have fairly large L3 caches, which reduce the need for high levels of system memory bandwidth.

tamz_msc said:
> Here is the 6700K with DDR4-2400. Focus on the result at 3.5GHz:
> [Image: CPU_02.png]

Hmm, I ask you to give me a benchmark showing Fallout 4 improving with bandwidth, and you give me a clock speed scaling chart. o_O

tamz_msc said:
> So with a 0.1GHz deficit but much higher IPC and faster DDR4 (I'm guessing that the DDR3 the 4960X is running is of a lower frequency), why is it that the 6700K at 3.5GHz and the 4960X at 3.6GHz are almost identical? The Ivy Bridge-E part also has a bigger L3 cache, so what gives? The only factor left in the 4960X's favor is its quad-channel DDR3, which gives it a bandwidth advantage over dual-channel DDR4. How else do you explain this result? Fallout 4 cares mostly about single-threaded performance (it doesn't scale beyond 4C/8T), and as this shows, memory bandwidth is what ultimately decides the difference between the i7-6700K and the i7-4960X.

I think you have it backwards. Clock speed, microarchitecture and cache size all clearly matter to some degree. The 4960X has nearly twice the L3 cache of the 6700K, but it is clocked 400MHz slower and is on an older architecture. The 5960X has the largest L3 cache of the lot and isn't far behind the 4960X, yet it is the slowest-clocked CPU of the three. The 5960X is only 7% slower than the 6700K, despite a 1GHz clock deficit (the 6700K is clocked 33% higher).

If that doesn't show what I'm saying, then nothing will.

tamz_msc said:
> In other words, a cop-out. You keep parroting that line, claiming it to be common knowledge, when it has never been put to the test. Knock on my door when someone tests this thing against a regular quad-core i7 to see how much of a difference a large L3 really makes in games.

Great logic, assuming L3 cache will continue to scale indefinitely :D
 

tamz_msc
Diamond Member
Carfax83 said:
> Hmm, I ask you to give me a benchmark showing Fallout 4 improving with bandwidth, and you give me a clock speed scaling chart. o_O
Somebody has reading comprehension issues. :rolleyes:
Carfax83 said:
> At what point do the improvements to latency taper off? So basically you're telling me that a CPU will continue to increase in performance indefinitely, provided that the bandwidth also increases?
I said that theoretical bandwidth is what increases with frequency. And no, what you 'think' I'm implying is wrong.
Carfax83 said:
> I think you have it backwards. Clock speed, microarchitecture and cache size all clearly matter to some degree. The 4960X has nearly twice the L3 cache of the 6700K, but it is clocked 400MHz slower and is on an older architecture. The 5960X has the largest L3 cache of the lot and isn't far behind the 4960X, yet it is the slowest-clocked CPU of the three. The 5960X is only 7% slower than the 6700K, despite a 1GHz clock deficit (the 6700K is clocked 33% higher).
>
> If that doesn't show what I'm saying, then nothing will.
Here, let me make it simpler for you:

Why does the 4960X, with an older architecture, a slower memory frequency, and almost the same clock speed (3.6 vs. 3.5GHz), perform at virtually the same level as the 6700K, which has the faster architecture and faster DDR4? What is the difference between these two platforms?
Carfax83 said:
> Great logic, assuming L3 cache will continue to scale indefinitely :D
Who was claiming that larger L3s affect gaming performance in the first place?