
Geforce3 (NV20) uses L1 cache!!!

Don't video cards already have some amount of texture cache? What will this "new" cache be caching?
 
If the rumors are true, this will be more like an L2 cache.

The cache is supposed to work as a general-purpose buffer between the chip (or its texture cache) and the "slow" onboard RAM of the gfx card. This is based only on rumors that have been floating around. Amount, speed, and type are all over the place: some say eDRAM, others SRAM; some say full clock speed, others full-clock DDR (double the transfer rate), and some full-clock QDR (quadruple the transfer rate). So much speculation out there and nothing solid from nVidia to indicate that any of it is right.
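Just to put numbers on those rumors, here's the basic arithmetic for what SDR vs. DDR vs. QDR would mean for bandwidth. The 200 MHz clock and 128-bit bus are made-up illustrative figures, not anything nVidia has announced:

```python
def bandwidth_gb_s(clock_mhz, bus_width_bits, transfers_per_clock):
    """Effective bandwidth in GB/s: ticks/s * bytes per transfer * transfers per tick."""
    return clock_mhz * 1e6 * (bus_width_bits / 8) * transfers_per_clock / 1e9

# Hypothetical 200 MHz clock on a 128-bit path:
sdr = bandwidth_gb_s(200, 128, 1)  # single data rate -> 3.2 GB/s
ddr = bandwidth_gb_s(200, 128, 2)  # double data rate -> 6.4 GB/s
qdr = bandwidth_gb_s(200, 128, 4)  # quadruple data rate -> 12.8 GB/s
```

So the gap between the rumored options is a straight 2x or 4x in transfer rate at the same clock, which is why the speed claims vary so wildly.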
 
Oh ok...does cache really work in a graphics chip application, though? I was under the impression that graphics rendering is an insanely parallel process with largely independent operations, so I'm wondering if there'll be enough spatial and temporal locality in memory accesses for caching to be worth the effort...
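There actually is a fair bit of locality in texture reads. Here's a toy sketch of my reasoning (not how any real chip works): bilinear filtering reads a 2x2 texel block per pixel, and neighboring screen pixels map to overlapping blocks, so a small cache would see lots of repeat hits:

```python
def texel_repeat_ratio(width, height):
    """Fraction of texel fetches that are repeats of an earlier fetch,
    assuming bilinear filtering at a 1:1 texel-to-pixel ratio."""
    seen = set()
    total = hits = 0
    for y in range(height):
        for x in range(width):
            for dy in (0, 1):          # 2x2 bilinear footprint
                for dx in (0, 1):
                    texel = (x + dx, y + dy)
                    total += 1
                    if texel in seen:  # already fetched once: a cache could hit
                        hits += 1
                    else:
                        seen.add(texel)
    return hits / total
```

For a 64x64 tile this comes out to roughly 3 of every 4 fetches being repeats, so even a small texture cache pays off; whether a big L2-style cache helps beyond that is the open question.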
 
oops, double post...the forum is really screwy now; it works in IE but gives Netscape an error...oh well
 
It's supposed to mean DDR.



<< Der normale Grafikspeicher wird wahrscheinlich aus 32 bis 64 MB DDR-RAM-Modulen bestehen. >>

(Translation: "The normal graphics memory will probably consist of 32 to 64 MB DDR-RAM modules.")

That's the original; they'll be using DDR memory.
 
With a good L1 cache we should see a huge improvement, like the K6-2 vs. the K6-III.
On the other hand, RAM speed will become less important!
 
Goi-

"Oh ok...does cache really work in a graphics chip application though? I was under the impression that graphics rendering is an insanely parallel process with largely independent operations, so I'm wondering if there'll be enough spatial and temporal locality in memory accesses for caching to be worth the effort..."

Perhaps this will be utilized for HSR tasks. With the NV20 being a completely new architecture, it is possible that nV is going to design the T&L unit from the ground up to calculate overdraw using Z-buffer based calculations, which would need to be handled repeatedly. This is of course pure speculation, but if that were the case then an L2-style buffer would make a lot of sense; the savings in bandwidth to onboard memory from eliminating overdraw should average over 50%, making even the current 6ns DDR a viable option for significantly increasing performance.

I have been trying to figure out how they were going to implement HSR without a large hit on the CPU or without taking a decent chunk of bandwidth for read/writes to local memory. The pseudo-L2 cache makes a lot of sense to me here (but memory technology isn't my thing, so please feel free, anyone, to shoot down or point out errors in my theory).
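To make the overdraw argument concrete, here's a minimal sketch of the Z-test idea (my own toy model, mirroring the speculation above, not anything confirmed about NV20): if a depth test in a fast on-chip buffer rejects hidden fragments before they touch board RAM, the rejected fragments never cost framebuffer bandwidth:

```python
def shade_with_ztest(fragments, width, height):
    """fragments: list of (x, y, depth) tuples, in draw order.
    Returns how many framebuffer writes actually happen after the Z-test."""
    zbuf = [[float("inf")] * width for _ in range(height)]
    writes = 0
    for x, y, z in fragments:
        if z < zbuf[y][x]:   # closer than what's stored: fragment survives
            zbuf[y][x] = z
            writes += 1      # only surviving fragments hit memory
    return writes
```

With two full-screen layers drawn back-to-front, every fragment passes and you pay for both layers; drawn front-to-back, the hidden layer is rejected entirely and the write traffic is halved, which lines up with the "over 50% on average" savings figure for overdraw elimination.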
 
Let's see... the ATI Radeon has more transistors than anything but a Thunderbird, and the GTS has around as many as a Coppermine. I'm going to guess that's far too many transistors to be all logic; it stands to reason there's already a large amount of cache in those chips.
 