SamurAchzar
Originally posted by: kobymu
Originally posted by: BFG10K
That's my point - for a GPU to work at today's performance levels it needs fast dedicated RAM and lots of it.
If you want to move the GPU onto the CPU's die you need to deal with this fact.
The 'dedicated' part is misleading/redundant, and I'm pretty sure, although not absolutely certain, that moving the GPU onto the CPU's die would only require additional bandwidth; due to the parallel nature of modern GPUs, latency could even take a slight to moderate hit without hurting performance, because GPUs need a steady feeding stream of high-bandwidth memory rather than low-latency access. It's also worth mentioning that modern CPUs (Core2Duo and especially Athlon64), in the desktop environment, have bandwidth to spare in most applications (although nowhere near the requirements of modern GPUs). But I generally agree with your point.
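To put rough, era-appropriate numbers on that gap (back-of-envelope, figures approximate): a GeForce 7900 GTX has a 256-bit GDDR3 bus at 1600 MT/s, i.e. 32 bytes x 1.6 GT/s ≈ 51.2 GB/s, while a Core2Duo on dual-channel DDR2-800 tops out at 2 x 8 bytes x 800 MT/s = 12.8 GB/s. An on-die GPU sharing the CPU's memory controller would start with roughly a quarter of the bandwidth a contemporary high-end card enjoys, and it still has to feed the CPU cores too.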
What I'm mostly curious about is how the CPU cache fits into this, because letting the level-2 cache hold 3D data could have an awful effect on its effectiveness (really bad!). I don't think the 3D data from even a relatively small scene in a modern game can fit into 2/4MB, so this is a second issue "you need to deal with". What I'm curious about is how, exactly, CPU/GPU designers will overcome it.
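One plausible answer (a minimal sketch, not how any shipping design actually works) is the non-temporal store path x86 already exposes: streaming data can be written around the cache so it never evicts the working set. copy_vertices_nt is a hypothetical helper name; the SSE intrinsics themselves are real.

#include <stddef.h>
#include <xmmintrin.h> /* SSE: _mm_load_ps, _mm_stream_ps, _mm_sfence */

/* Copy a vertex buffer without polluting the L2 cache.
   Assumes 16-byte-aligned pointers and a count that is a multiple of 4. */
void copy_vertices_nt(float *dst, const float *src, size_t count)
{
    for (size_t i = 0; i < count; i += 4) {
        __m128 v = _mm_load_ps(src + i); /* ordinary cached load */
        _mm_stream_ps(dst + i, v);       /* non-temporal store: bypasses the cache */
    }
    _mm_sfence(); /* make the streaming stores visible before the data is used */
}

The same idea in hardware terms: tag GPU traffic as non-temporal so it streams past the L2 instead of thrashing it.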
And we haven't even touched the integration issue...
IMHO, GPU work currently doesn't require caching, as the data is streamed and the access pattern is predictable. If you employ good pipelining, you can effectively hide memory latency during normal work, because you always know what to fetch in advance.
The problem starts when you don't know what your next data will be - then you have to fetch it on demand and pay the full latency.
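To illustrate the first case (a minimal sketch; sum_stream and the lookahead distance of 16 floats are assumptions for illustration, while _mm_prefetch itself is a real SSE intrinsic): when the access pattern is a known sequential stream, you can request data ahead of where you are working, so it arrives before you need it.

#include <stddef.h>
#include <xmmintrin.h> /* SSE: _mm_prefetch */

/* Sum a large sequential buffer, prefetching one 64-byte cache line
   (16 floats) ahead of the current position. */
float sum_stream(const float *data, size_t n)
{
    float sum = 0.0f;
    for (size_t i = 0; i < n; i++) {
        if (i + 16 < n) /* the lookahead distance is a tunable guess */
            _mm_prefetch((const char *)(data + i + 16), _MM_HINT_T0);
        sum += data[i];
    }
    return sum;
}

With an unpredictable pointer chase there is nothing to prefetch, and every miss costs the full trip to memory - which is exactly where a cache earns its keep.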