ATI's next effort to be like the Xbox 360's GPU

tuteja1986

Diamond Member
Jun 1, 2005
3,676
0
0
NVIDIA wants to keep advancing its current architecture, while ATI is moving on to new tech. Spec-wise I think ATI will have the overall advantage, but performance-wise, who knows. I just hope the unified shader architecture is worth it, and the on-board eDRAM chip should make performance with AA really awesome in future games that take advantage of it. Currently I'm more excited about the R6xx than the G80, since NVIDIA isn't introducing anything fancy, if the X-bit article is correct.

AVIVO + full DX10 support + SM 4.0 + unified shader architecture + on-board eDRAM chip = AWESOMENESS

Edit:

AVIVO + full DX10 support + SM 4.0 + unified shader architecture = AWESOME

 

Drayvn

Golden Member
Jun 23, 2004
1,008
0
0
Cheers, every tidbit of info helps.

Me want it very badly.

Got a 7800GTX; I'll upgrade when that and the G80 come around. I always upgrade for every new GPU (not a refresh).

And if that is all correct, a lot of companies will have experience using the hardware, and if quite a bit of it makes its way into the R600, that's great for us, as we might get quality graphics straight from the get-go. And some devs would be on a second run, having already learnt to optimize their graphics engines and whatnot, so better overall performance.

So the Xenos was a guinea pig for us PC users :p
 
otispunkmeyer

Jun 14, 2003
10,442
0
0
Originally posted by: tuteja1986
NVIDIA wants to keep advancing its current architecture, while ATI is moving on to new tech. Spec-wise I think ATI will have the overall advantage, but performance-wise, who knows. I just hope the unified shader architecture is worth it, and the on-board eDRAM chip should make performance with AA really awesome in future games that take advantage of it. Currently I'm more excited about the R6xx than the G80, since NVIDIA isn't introducing anything fancy, if the X-bit article is correct.

AVIVO + full DX10 support + SM 4.0 + unified shader architecture + on-board eDRAM chip = AWESOMENESS


Apparently the reason the eDRAM was dropped has to do with the APIs we use on the PC; they would need to be recoded to let applications use it.
 

Drayvn

Golden Member
Jun 23, 2004
1,008
0
0
Originally posted by: otispunkmeyer
Originally posted by: tuteja1986
NVIDIA wants to keep advancing its current architecture, while ATI is moving on to new tech. Spec-wise I think ATI will have the overall advantage, but performance-wise, who knows. I just hope the unified shader architecture is worth it, and the on-board eDRAM chip should make performance with AA really awesome in future games that take advantage of it. Currently I'm more excited about the R6xx than the G80, since NVIDIA isn't introducing anything fancy, if the X-bit article is correct.

AVIVO + full DX10 support + SM 4.0 + unified shader architecture + on-board eDRAM chip = AWESOMENESS


Apparently the reason the eDRAM was dropped has to do with the APIs we use on the PC; they would need to be recoded to let applications use it.

I thought it was also down to hardware constraints and the fact that it costs so much. In hardware terms, PC users run at much higher resolutions, so the eDRAM required to support all of them with free AA would have to be quite big, and the cost would be very high, too high even for SLI and CrossFire users.
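Some back-of-the-envelope numbers make the size problem concrete. Here's a quick sketch (my own; it assumes 4 bytes of colour plus 4 bytes of depth/stencil per sample, as on Xenos, and the resolutions are purely illustrative):

```python
# Estimate the eDRAM needed to hold a full multisampled framebuffer.
# Assumes 4 bytes of colour + 4 bytes of depth/stencil per sample (as on Xenos).
BYTES_PER_SAMPLE = 4 + 4

def edram_mb(width, height, samples):
    """Megabytes needed for a width x height framebuffer with MSAA."""
    return width * height * samples * BYTES_PER_SAMPLE / (1024 ** 2)

print(f"1280x720,  2x AA: {edram_mb(1280, 720, 2):6.1f} MB")   # ~14.1 MB
print(f"1600x1200, 4x AA: {edram_mb(1600, 1200, 4):6.1f} MB")  # ~58.6 MB
print(f"2560x1600, 4x AA: {edram_mb(2560, 1600, 4):6.1f} MB")  # ~125.0 MB
```

Even 720p with 2x AA overflows Xenos's 10 MB (the 360 tiles the frame to cope), and common desktop resolutions would need several times more eDRAM, which is where the cost argument bites.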

 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
IMO, the reason eDRAM exists on the Xbox 360 is that the rest of the memory is much slower, shared system RAM, which would be a bottleneck in some cases. Since a PC video card has dedicated onboard memory, and fast memory at that, there would be less benefit from eDRAM on the PC than on the console, and it would cost too much for whatever performance gain it offers.
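The commonly quoted bandwidth figures bear this out; a quick comparison (the 7800 GTX is my own pick of a contemporary PC card, the 360 numbers are the usual published specs):

```python
# Rough bandwidth comparison, in GB/s, using commonly quoted figures.
shared_360_ram = 22.4   # Xbox 360 unified GDDR3, shared between CPU and GPU
xenos_edram    = 256.0  # bandwidth internal to Xenos's eDRAM/ROP daughter die
pc_vram        = 38.4   # e.g. GeForce 7800 GTX dedicated GDDR3

print(f"eDRAM vs 360 shared RAM:    {xenos_edram / shared_360_ram:4.1f}x")
print(f"eDRAM vs PC dedicated VRAM: {xenos_edram / pc_vram:4.1f}x")
print(f"PC VRAM vs 360 shared RAM:  {pc_vram / shared_360_ram:4.1f}x")
```

The console's framebuffer traffic would otherwise be fighting the CPU for 22.4 GB/s, while a PC card already has all of its VRAM bandwidth to itself.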
 

dunno99

Member
Jul 15, 2005
145
0
0
Actually, the performance that could be gained from the eDRAM would be quite substantial. An example of the difference the eDRAM would make is something like having a CPU with 2 MB of L2 cache (even though it can't fit a whole program) vs. no cache at all, even if both run off a 1.6 GHz FSB/dedicated memory lanes on DDR3 (hypothetical processors, and I understand the latency issue). You can immediately see how much a relatively small piece of very, very fast memory can affect overall system performance. Basically, the eDRAM would let an otherwise memory-bandwidth-limited game get virtually free bloom, AA, HDR, post-processing effects, et al., along with faster intermediate rendering stages (such as depth preprocessing).
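To put a rough number on "bandwidth-limited" (the overdraw and frame-rate figures below are my own illustrative assumptions):

```python
# Estimate the ROP traffic that on-die eDRAM would absorb.
# Assumes 4 bytes colour + 4 bytes depth per sample, each read and written once,
# with an overdraw of 3 at 60 fps; all of these are illustrative assumptions.
width, height, samples, overdraw, fps = 1280, 720, 4, 3, 60
bytes_per_sample_op = (4 + 4) * 2  # read + write of colour and depth

traffic_gbps = width * height * samples * overdraw * fps * bytes_per_sample_op / 1e9
print(f"Framebuffer traffic kept on-die: {traffic_gbps:.1f} GB/s")  # ~10.6 GB/s
```

That's roughly half of the 360's entire 22.4 GB/s shared bandwidth that never has to leave the daughter die, which is why the AA and blending come out looking "free".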

But as an earlier poster pointed out, the main problem with supporting the eDRAM is that none of the popular APIs on the PC (DX, OGL) can currently deal with a specialized piece of RAM separately, because the memory space is abstracted away. This is probably the main reason it was taken out, since no existing game would be able to deal with it. (Although, if they segregated the screen framebuffer directly onto the eDRAM, away from things like textures, off-screen buffers and FBOs, it wouldn't be a problem at all... but I suppose the benefits are too limited to be of value.)
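To illustrate what's missing: everything in the sketch below is invented for illustration; neither DX9 nor OpenGL exposes any such placement control, which is exactly the problem being described.

```python
# Hypothetical API sketch; real DX9/OpenGL offer no way to say where a
# surface lives, so a driver could never route legacy render targets to eDRAM.
from enum import Enum, auto

class MemoryPool(Enum):
    VRAM  = auto()  # ordinary dedicated card memory (the only option today)
    EDRAM = auto()  # small on-die framebuffer memory, a la Xenos

EDRAM_CAPACITY = 10 * 2**20  # 10 MB, Xenos-sized, purely for illustration

def create_render_target(width, height, samples, pool=MemoryPool.VRAM):
    # 8 bytes per sample: 32-bit colour + 32-bit depth/stencil, as assumed above.
    if pool is MemoryPool.EDRAM and width * height * samples * 8 > EDRAM_CAPACITY:
        raise ValueError("surface too big for eDRAM; tile it or fall back to VRAM")
    return {"w": width, "h": height, "samples": samples, "pool": pool}

legacy = create_render_target(1280, 1024, 1)                        # what today's games do
tuned  = create_render_target(1280, 720, 1, pool=MemoryPool.EDRAM)  # what eDRAM would need
```

Without an opt-in like that, every shipped title would land in the legacy path, and the eDRAM would just sit idle.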