nVidia's next gen cards...


Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: toslat
Originally posted by: keysplayr2003
Remember, ATI and Nvidia each have their features over the other. It is not one sided here. ATI has DX10.1 and Nvidia has onboard Physx. The 4870 has GDDR5, The GT200's have wider buses. Back and forth all day long. Exactly how valuable each of these features are will become known over the following year in the form of released titles that support one, the other, or both.

How is a wider bus a feature? AFAIK it's more of a cost vs. memory bandwidth issue.

When the transformation from G80 to G92 took place, there were a HUGE number of complaints from users saying they felt it was a step backwards. "Watered down" is what they called G92, when all G92 was intended to do was bring a similar amount of performance (maybe a little better) for less cost. A lot of people were thrilled with this, but many others were not.

All those who were not wanted to see an increase in everything over an 8800GTX, not a decrease. Enthusiasts wanted:

A 384-bit bus or wider, more than 512MB of RAM, more shaders, etc., on a single GPU. More of everything. Nvidia listened and brought out the GT200s; they gave users what they wanted. Those users didn't seem to care about the extra cost of purchasing a card like this. A wider bus offers more bandwidth without having the memory clocked to the moon, and in this case GDDR3 is most likely cheaper than GDDR5.

Some people think a wider bus is a plus, and some, like yourself, might believe that faster, more expensive memory on a narrower bus is the answer. Who is right? Debatable.
 

toslat

Senior member
Jul 26, 2007
216
0
76
Originally posted by: keysplayr2003
Originally posted by: toslat
How is a wider bus a feature? AFAIK it's more of a cost vs. memory bandwidth issue.
When the transformation from G80 to G92 took place, there were a HUGE number of complaints from users saying they felt it was a step backwards. "Watered down" is what they called G92, when all G92 was intended to do was bring a similar amount of performance (maybe a little better) for less cost. A lot of people were thrilled with this, but many others were not.

All those who were not wanted to see an increase in everything over an 8800GTX, not a decrease. Enthusiasts wanted:

A 384-bit bus or wider, more than 512MB of RAM, more shaders, etc., on a single GPU. More of everything. Nvidia listened and brought out the GT200s; they gave users what they wanted. Those users didn't seem to care about the extra cost of purchasing a card like this. A wider bus offers more bandwidth without having the memory clocked to the moon, and in this case GDDR3 is most likely cheaper than GDDR5.

Some people think a wider bus is a plus, and some, like yourself, might believe that faster, more expensive memory on a narrower bus is the answer. Who is right? Debatable.
My point was about terming bus width a feature, not about whether it's advantageous or not. I would classify bus width, along with things like clocks, as an implementation detail.
 

MarcVenice

Moderator Emeritus
Apr 2, 2007
5,664
0
0
How can you count a wider bus as a plus? Memory bandwidth is all that matters, and that's the product of bus width and memory speed. If you can reduce costs by using GDDR5, as opposed to having a more expensive card with a wider memory bus, then a wider bus is a negative, because you could get a cheaper solution with the same performance.
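
To put rough numbers on that, here's a minimal sketch of the bandwidth math, using the commonly quoted clocks for these two cards (the exact figures are assumptions for illustration, not official specs):

```python
# Peak memory bandwidth = (bus width in bytes) x (effective data rate).
def bandwidth_gb_s(bus_width_bits, effective_gtps):
    return (bus_width_bits / 8) * effective_gtps

# GTX 280: 512-bit bus, GDDR3 at ~2.2 GT/s effective -> ~141.7 GB/s
print(bandwidth_gb_s(512, 2.214))

# HD 4870: 256-bit bus, GDDR5 at ~3.6 GT/s effective -> ~115.2 GB/s
print(bandwidth_gb_s(256, 3.6))
```

Either factor can be traded against the other; only the product shows up as bandwidth.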

And I very much doubt Nvidia 'listened' to consumers. Consumers are nowhere in the same league as Nvidia's engineers. Those engineers do what they think is best; they only care about the fastest, cheapest-to-make product. This round, I suppose Nvidia's engineers either didn't have a whole lot of budget to work with and Nvidia's bosses thought a larger die, with more of everything, would prove a big enough improvement over G80/G92, or Nvidia's engineers just flat out failed and ran out of genius ideas. Nvidia did NOT listen to consumers who said "I want a wider memory bus, I want more shaders," etc. In fact, enthusiasts didn't want all those things. They wanted more memory bandwidth; they don't care how Nvidia pulls it off, as long as they do it. They don't care for more shaders, they care for more shader power, so either more shaders, more advanced shaders, or higher-clocked shaders, as long as they get the job done. Also, enthusiasts make up a small part of the market; IF Nvidia listened to enthusiasts then it's just more epic fail from them and the whole board of directors should be shot. It's just Nvidia's strategy to produce something blazingly fast and then tune it down for mainstream. Nothing to do with enthusiasts.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Cookie Monster
Originally posted by: BFG10K
I think we'll see a GTX280 shrunk to 55 nm with 256 bit GDDR5.

I'm wondering if the last bit is true. nVIDIA's current architecture has memory channels tied to ROPs: each 32-bit memory channel is tied to 2 ROPs. So in order for them to go 256-bit + GDDR5, it would require a lot of reshuffling of the current architecture to maintain the number of ROPs, unless they want it cut back to 16.

Yep, this would be my concern with a change in bus width on NV parts. On every part since G80, a bus width change has always resulted in a change in memory controllers/ROPs as well. From the die shots, it looks like these clusters are linked and cannot be severed from the ROPs independently.

ATI has been able to mix and match bus widths with its centralized memory controller, and did so as early as the R600-to-RV670 transition, where the external bus was shrunk from 512-bit to 256-bit without any transistor reduction.

If NV could manage to reduce memory controllers and keep the ROPs the same, I think it'd be a good move; otherwise I'd be concerned about a situation similar to G92, where some improvements were made at the expense of other areas, resulting in no net performance gain.
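
If the coupling Cookie Monster describes holds (2 ROPs per 32-bit channel), the arithmetic behind the "cut back to 16" figure is straightforward. A sketch of that ratio, not a claim about how the die is actually partitioned:

```python
# Assumption from the post above: each 32-bit memory channel is tied to 2 ROPs.
ROPS_PER_32BIT_CHANNEL = 2

def rops_for_bus(bus_width_bits):
    channels = bus_width_bits // 32
    return channels * ROPS_PER_32BIT_CHANNEL

print(rops_for_bus(512))  # 32 ROPs on the current 512-bit GT200
print(rops_for_bus(256))  # 16 ROPs if the bus were cut to 256-bit
```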
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: chizow
Originally posted by: Cookie Monster
Originally posted by: BFG10K
I think we'll see a GTX280 shrunk to 55 nm with 256 bit GDDR5.

I'm wondering if the last bit is true. nVIDIA's current architecture has memory channels tied to ROPs: each 32-bit memory channel is tied to 2 ROPs. So in order for them to go 256-bit + GDDR5, it would require a lot of reshuffling of the current architecture to maintain the number of ROPs, unless they want it cut back to 16.

Yep, this would be my concern with a change in bus width on NV parts. On every part since G80, a bus width change has always resulted in a change in memory controllers/ROPs as well. From the die shots, it looks like these clusters are linked and cannot be severed from the ROPs independently.

ATI has been able to mix and match bus widths with its centralized memory controller, and did so as early as the R600-to-RV670 transition, where the external bus was shrunk from 512-bit to 256-bit without any transistor reduction.

If NV could manage to reduce memory controllers and keep the ROPs the same, I think it'd be a good move; otherwise I'd be concerned about a situation similar to G92, where some improvements were made at the expense of other areas, resulting in no net performance gain.

I'd just as soon see the current configuration shrunk to 55nm with GDDR5 at 3.6GHz.
 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
Originally posted by: keysplayr2003
Originally posted by: chizow
Originally posted by: Cookie Monster
Originally posted by: BFG10K
I think we'll see a GTX280 shrunk to 55 nm with 256 bit GDDR5.

I'm wondering if the last bit is true. nVIDIA's current architecture has memory channels tied to ROPs: each 32-bit memory channel is tied to 2 ROPs. So in order for them to go 256-bit + GDDR5, it would require a lot of reshuffling of the current architecture to maintain the number of ROPs, unless they want it cut back to 16.

Yep, this would be my concern with a change in bus width on NV parts. On every part since G80, a bus width change has always resulted in a change in memory controllers/ROPs as well. From the die shots, it looks like these clusters are linked and cannot be severed from the ROPs independently.

ATI has been able to mix and match bus widths with its centralized memory controller, and did so as early as the R600-to-RV670 transition, where the external bus was shrunk from 512-bit to 256-bit without any transistor reduction.

If NV could manage to reduce memory controllers and keep the ROPs the same, I think it'd be a good move; otherwise I'd be concerned about a situation similar to G92, where some improvements were made at the expense of other areas, resulting in no net performance gain.

I'd just as soon see the current configuration shrunk to 55nm with GDDR5 at 3.6GHz.

You mean 512-bit with GDDR5?
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
448 and 512. Yes.
Shrinking the current 65nm GT200 core to 55nm would reduce the cost of making them. Going with more expensive GDDR5 would negate some of those cost savings to consumers. So you would end up with a card that would probably cost the same as current models, only cooler, less power hungry, and with far more bandwidth, most likely with higher core clocks on the GPU as well.

How does that sound?
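
As a rough sketch of how much bandwidth that would buy, using the same formula as above and assuming the commonly quoted ~2.2 GT/s effective GDDR3 rate on today's GTX 280:

```python
def bandwidth_gb_s(bus_width_bits, effective_gtps):
    return (bus_width_bits / 8) * effective_gtps

current = bandwidth_gb_s(512, 2.214)   # GTX 280 with GDDR3 today: ~141.7 GB/s
proposed = bandwidth_gb_s(512, 3.6)    # same 512-bit bus with 3.6 GT/s GDDR5: ~230.4 GB/s
print(proposed / current)              # roughly 1.6x the bandwidth
```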
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Janooo
Originally posted by: keysplayr2003
448 and 512. Yes.

Would it be beneficial? Is the 280 bandwidth-starved?

Probably not. I think Keys' point was that it was a surefire way to avoid any potential problems with trimming bus width on NV architecture. Personally I'd like to see a need for more bandwidth before paying for it, but it's certainly more desirable than possibly having to cut memory controllers/ROPs and negating any clock speed improvements from going to 55nm.
 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
Originally posted by: keysplayr2003
448 and 512. Yes.
Shrinking the current 65nm GT200 core to 55nm would reduce the cost of making them. Going with more expensive GDDR5 would negate some of those cost savings to consumers. So you would end up with a card that would probably cost the same as current models, only cooler, less power hungry, and with far more bandwidth, most likely with higher core clocks on the GPU as well.

How does that sound?

GDDR5 power savings would help. Does anybody know the price difference between GDDR3 and GDDR5?

 

ronnn

Diamond Member
May 22, 2003
3,918
0
71
I would expect the current refreshes to shrink to a 256-bit bus. Next gen, I hope, is DX11.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
The more available and mainstream it becomes, the lower the price will go, of course. The closer its cost gets to current GDDR3 memory, the better; more will adopt its use.
I suspect we might have seen 1GB 4870s had the cost of GDDR5 been lower.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,978
126
Originally posted by: BenSkywalker
Unless we see a sharp spike in the cost of GDDR5, it would end up being a costly mistake.
It's only a matter of time; at the moment only ATi's using it and only Samsung makes it, so it's scarce. Once others get into the fray I'd expect it to become as plentiful as GDDR3 currently is.

Originally posted by: keysplayr2003
I suspect we might have seen 1GB 4870s had the cost of GDDR5 been lower.
I think it's coming at the end of the month. Well it better be because I'm waiting for it. :p

Also the 4870 X2 is supposed to have 2 GB of it.
 

tvdang7

Platinum Member
Jun 4, 2005
2,242
5
81
Probably nothing next-generation about it. Just another 2900XT ---> 3870 transition: die shrink and added DX10.1 support.