
tech question for gurus

morgash

Golden Member
Been wondering about some things and need some answers if anyone has 'em 🙂

1. Is the L1/L2/L3 cache subject to a law of diminishing returns as its size goes up, as far as performance is concerned? And if not, why aren't cache sizes 512 MB - 1 GB, which would let us get rid of system RAM and the whole FSB bottleneck?

2. Single-channel RAM is 64 bits wide while dual channel is 128 bits wide. High-end video card RAM is 256 bits wide, so are they running quad channel? If so, two questions come up.
a. Since video cards with 128-bit memory interfaces stop gaining performance past a certain memory speed (the 128-bit bus becomes the bottleneck), will this eventually be the case with DDR, and will we see triple- and quad-channel RAM in the future?
b. And if the above is true, does that mean Sony/Rambus's new XDR memory for the PS3 runs at an even higher memory bitrate? It's clocked somewhere in the 3-4 GHz range, so I would think even the mighty 256-bit bus would have bottlenecked before then.
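For what it's worth, the raw bandwidth math behind those bus widths can be sketched like this. The transfer rates below are illustrative round numbers for the era, not exact part specs:

```python
# Peak theoretical memory bandwidth = bus width (bytes) x transfers per second.
# Numbers here are illustrative examples, not exact specs for any real part.

def peak_bandwidth_gb_s(bus_width_bits: int, transfers_per_sec: float) -> float:
    """Peak theoretical bandwidth in GB/s."""
    return bus_width_bits / 8 * transfers_per_sec / 1e9

# Single-channel DDR400: 64-bit bus, 400 million transfers/s
single = peak_bandwidth_gb_s(64, 400e6)    # ~3.2 GB/s

# Dual-channel DDR400: 128 bits effective
dual = peak_bandwidth_gb_s(128, 400e6)     # ~6.4 GB/s

# A 256-bit video card bus at an effective 1000 MT/s
gpu = peak_bandwidth_gb_s(256, 1000e6)     # ~32 GB/s

print(single, dual, gpu)
```

So a wider bus and a faster clock are interchangeable ways of buying bandwidth, which is why a very fast narrow interface (like XDR) can keep up with a slower wide one.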

Thanks in advance for all answers; I might think of more later because I feel like I've missed something. Hope this is informative to everyone, since I've been wondering about this for a while.

Morgash
 
The larger the transistor count, the larger each CPU die is, and thus the lower the yield per wafer. So the more L2/L3 cache you have, the larger and more expensive the chip becomes to produce (and the yield plummets). Plus the latency goes up (look at the increased latency on the 6xx Intel CPUs vs the 5xx).

Mainly it would be insanely expensive, if it was even possible...
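The yield argument above can be sketched with a simple Poisson yield model, yield = exp(-D * A), where D is defect density and A is die area. The defect density and areas below are made-up illustration numbers, not real process data:

```python
# Hedged sketch of why big caches get expensive: cost per *good* die
# rises faster than area, because yield falls exponentially with area.
# Poisson yield model: yield = exp(-D * A). All numbers are illustrative.
import math

def relative_cost_per_good_die(die_area_cm2: float, defect_density: float = 0.5) -> float:
    """Relative cost per good die: area (raw silicon cost) divided by yield."""
    yield_fraction = math.exp(-defect_density * die_area_cm2)
    return die_area_cm2 / yield_fraction

base = relative_cost_per_good_die(1.0)  # a ~1 cm^2 die
big = relative_cost_per_good_die(2.0)   # double the area (e.g. lots more cache)

# Doubling the area more than doubles the cost per good die.
print(big / base)
```

Under this model, doubling the die area for extra cache raises the cost per good die by more than 2x, which is the "yield plummets" effect described above.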

1)down 😉
 
Thanks! That's what I'm looking for. Still need number 2 though, anyone got the cojones to go for that one?
 
In regards to q1:

Caches operate on a law of diminishing returns from a sizing point of view. Moreover, since cache latencies (i.e., the time from address presented to data available) increase with cache size due to physical dimensions, there is a sweet spot for cache sizing, beyond which the total return (cache throughput) will actually decrease. Imagine main memory as a cache: it hits every time, but because its latency is so large, its total throughput is quite low compared to the CPU caches.
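That sweet spot can be illustrated with the standard average-memory-access-time (AMAT) formula, AMAT = hit_time + miss_rate x miss_penalty. The hit times and miss rates below are assumed illustrative curves (bigger cache = slower hit but fewer misses), not measurements of any real part:

```python
# Hedged sketch of the cache sizing sweet spot using AMAT.
# AMAT = hit_time + miss_rate * miss_penalty. All numbers are assumptions
# chosen to illustrate the trend, not real measurements.

def amat(hit_time_cycles: float, miss_rate: float, miss_penalty_cycles: float) -> float:
    """Average memory access time in cycles."""
    return hit_time_cycles + miss_rate * miss_penalty_cycles

MISS_PENALTY = 200.0  # assumed cycles to main memory

# (cache size in KB, assumed hit time in cycles, assumed miss rate)
configs = [
    (256, 3, 0.10),
    (1024, 6, 0.05),
    (4096, 12, 0.03),
    (16384, 25, 0.02),
]

for size_kb, hit, miss in configs:
    print(size_kb, "KB ->", amat(hit, miss, MISS_PENALTY), "cycles")
```

With these assumed numbers, AMAT improves going from 256 KB (23 cycles) to 1 MB (16 cycles), then gets worse again at 4 MB and 16 MB as the growing hit latency outweighs the shrinking miss rate: exactly the sweet-spot behavior described above.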

In addition, caches are quite expensive to add since they consume die real estate; it is therefore both economically unfeasible and detrimental to performance to have extremely large last-level caches on die.
 