
GDDR-III

We'll have to see where this goes. Honestly, I'm smelling VRAM, SGRAM, and all the other "graphics" memory technologies that never took off.
 
Terry Lee, executive director of advanced technology and strategic marketing at Micron, said the Boise, Idaho, company will start producing GDDR-III in the second quarter of 2003.

Hehe, I live in Idaho.

The initial GDDR-III chip will be 256Mbits in density with a 500MHz clock and a 1Gbit/s data rate. The clock is scalable to 750MHz, yielding a 1.5Gbit/s data rate, Litt said.
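The quoted numbers follow from DDR signaling: two transfers per clock, so a 500MHz clock gives 1Gbit/s per pin and 750MHz gives 1.5Gbit/s. A quick sketch of that arithmetic, with a hypothetical 256-bit bus width chosen only for illustration (the article doesn't state one):

```python
def ddr_bandwidth(clock_mhz, bus_bits):
    """Return (per-pin Gbit/s, total GB/s) for a DDR-style interface."""
    per_pin_gbps = 2 * clock_mhz / 1000  # DDR transfers data on both clock edges
    total_gbytes = per_pin_gbps * bus_bits / 8  # aggregate across the bus, in bytes
    return per_pin_gbps, total_gbytes

print(ddr_bandwidth(500, 256))  # (1.0, 32.0) -> 1Gbit/s per pin, 32GB/s total
print(ddr_bandwidth(750, 256))  # (1.5, 48.0) -> 1.5Gbit/s per pin, 48GB/s total
```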

What are the DDR-II specs?
 
Originally posted by: ViRGE
We'll have to see where this goes. Honestly, I'm smelling VRAM, SGRAM, and all the other "graphics" memory technologies that never took off.
Were VRAM and SGRAM a completely new form of memory or just an improvement on existing memory? If I recall correctly, wasn't/isn't SGRAM very similar to SDRAM? It just made a few improvements that yielded a few percent in graphical applications. I could be wrong though.

You can't really argue with something like this though...it's hard to say that a new memory type with a bandwidth of 1.5Gbit/s is not going to be very beneficial to the bandwidth hungry GPUs of today and tomorrow. If it was simply DDRII or DDR but modified or something, I would probably agree with you.
 
We are probably going to see this memory at SOME point with NVidia and ATI pushing for it. It's a very logical move, given that ATI and NVidia have to jump to different memory every time they release a new card. With these new chips, ATI and NVidia would no longer have to decide what memory to use, how much they are going to need, who they are going to get it from, and whether there will be enough memory that is fast enough and available in large enough volume when needed. Those facts alone should serve well in cutting costs and will also give the two companies more time to focus on the optimization and efficiency of their GPUs. I would think that with memory out of the picture as a limiting factor in speed and bandwidth, ATI and NVidia could really start innovating.
 
Whoever has the capital to spend will hoard the supply at first. This spells trouble for one of them if it's that big of a difference over DDR-II.
 
actually, such an advance in memory might hinder innovation...why develop occlusion culling (hyper z) methods, memory crossbar architectures, tile-rendering, video data compression schemes, etc. when u have all the bandwidth u need?

ok, i'm sure this won't provide all the bandwidth they could ever want, but still...u catch my drift...i think, tho, that they'll simply hit another wall with this memory and need to innovate bandwidth-saving measures yet again until the next memory bottleneck is removed.
 
Originally posted by: ViRGE
We'll have to see where this goes. Honestly, I'm smelling VRAM, SGRAM, and all the other "graphics" memory technologies that never took off.


Actually, SGRAM was widely used in GeForce and GeForce 2 based cards; VRAM, on the other hand, never really took off.

 