nVidia GT300 in October?


chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Just learning
Originally posted by: Astrallite
I'd be more interested in finding out what GT300 really is--will it be a single card?

I find it hard to believe it will be faster than a GTX295, which would make me wonder if nvidia will simply say sayonara--EOL to you mister 295!

This wouldn't surprise me at all. It always seems the next generation's top card is faster than the previous generation's X2 versions.
I wouldn't expect GT300 to be significantly faster than current multi-GPU solutions given we just got i7 a few months ago and we're still seeing CPU bottlenecks for this generation of GPUs. A single GPU may scale more efficiently, but I think it'll still cap out around the same performance level (~1.5x) until we see faster CPUs (perhaps Lynnfield, with its integrated PCIe controller, will add a bit of a boost). In the meantime, I'd expect GPU makers to push peripheral features like AA, SSAO, and hardware physics as reasons to upgrade until more demanding titles are released.

Originally posted by: Wreckage
Charlie's just upset that NVIDIA will be the first to market with a DX11 card, while ATI may not follow till next year.
Maybe we'll get lucky and MS will add some other marginal feature in that timeframe. Then we'll get to hear about how awesome DX11.1 is for another 3 years. ;)
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Originally posted by: Wreckage
Charlie's just upset that NVIDIA will be the first to market with a DX11 card, while ATI may not follow till next year.

ATI will follow. With DX11.1 cards. Unless Nvidia comes out with DX11.1 themselves. Then ATI will come out with DX11.1c cards. LOL.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Originally posted by: Just learning
Originally posted by: Astrallite
I'd be more interested in finding out what GT300 really is--will it be a single card?

I find it hard to believe it will be faster than a GTX295, which would make me wonder if nvidia will simply say sayonara--EOL to you mister 295!

This wouldn't surprise me at all. It always seems the next generation's top card is faster than the previous generation's X2 versions.

what? that happened with 8800gtx vs 7950gx2 and 3870x2 vs 4870, but what about our most recent example, 9800gx2 vs gtx 280? I'm not sure where they stand now, but at launch and for quite a while afterwards 9800gx2 was quite a bit faster than gtx 280 in most titles.

Here's a hint: if ATI continues to trounce nvidia's flagship with an x2 version, nvidia will continue to offer their own x2 version. If nvidia is smart they'll already have an x2 waiting in the wings to steal ati's thunder this fall.

edit: don't worry, if ati comes out with dx 11.1, nvidia will respond with CUDA 2.0. Everyone knows that 11.1 is bigger than 2.0, however, so ati wins ;)
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Originally posted by: bryanW1995
Originally posted by: Just learning
Originally posted by: Astrallite
I'd be more interested in finding out what GT300 really is--will it be a single card?

I find it hard to believe it will be faster than a GTX295, which would make me wonder if nvidia will simply say sayonara--EOL to you mister 295!

This wouldn't surprise me at all. It always seems the next generation's top card is faster than the previous generation's X2 versions.

what? that happened with 8800gtx vs 7950gx2 and 3870x2 vs 4870, but what about our most recent example, 9800gx2 vs gtx 280? I'm not sure where they stand now, but at launch and for quite a while afterwards 9800gx2 was quite a bit faster than gtx 280 in most titles.

Here's a hint: if ATI continues to trounce nvidia's flagship with an x2 version, nvidia will continue to offer their own x2 version. If nvidia is smart they'll already have an x2 waiting in the wings to steal ati's thunder this fall.

edit: don't worry, if ati comes out with dx 11.1, nvidia will respond with CUDA 2.0. Everyone knows that 11.1 is bigger than 2.0, however, so ati wins ;)


You are right. If only efficiency improves but TDP is not increased, what you are talking about might happen again. How much higher can TDP go?

http://forums.anandtech.com/me...=2294521&enterthread=y
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Originally posted by: chizow
Originally posted by: Just learning
Originally posted by: Astrallite
I'd be more interested in finding out what GT300 really is--will it be a single card?

I find it hard to believe it will be faster than a GTX295, which would make me wonder if nvidia will simply say sayonara--EOL to you mister 295!

This wouldn't surprise me at all. It always seems the next generation's top card is faster than the previous generation's X2 versions.
I wouldn't expect GT300 to be significantly faster than current multi-GPU solutions given we just got i7 a few months ago and we're still seeing CPU bottlenecks for this generation of GPUs. A single GPU may scale more efficiently, but I think it'll still cap out around the same performance level (~1.5x) until we see faster CPUs (perhaps Lynnfield, with its integrated PCIe controller, will add a bit of a boost). In the meantime, I'd expect GPU makers to push peripheral features like AA, SSAO, and hardware physics as reasons to upgrade until more demanding titles are released.

Originally posted by: Wreckage
Charlie's just upset that NVIDIA will be the first to market with a DX11 card, while ATI may not follow till next year.
Maybe we'll get lucky and MS will add some other marginal feature in that timeframe. Then we'll get to hear about how awesome DX11.1 is for another 3 years. ;)

I do agree that it will be tough for Nvidia to pull off what they did between 7900 and 8800, but that was the generation that not only increased efficiency but dramatically increased TDP as well.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
These are recent benchmarks using 182.50.

185s are where the big improvement was seen in DX AA, honestly not sure about OpenGL as no one seems to test it anymore.

It's clear to see that everything is heading towards EPIC (VLIW)

Then I really don't understand why you aren't more interested in CUDA if you see EPIC as the next logical step in computing evolution. I'm not saying it is or it isn't, but CUDA is very much along the same lines as EPIC.

Actually there is. That's why nVIDIA hasn't opted for DX10.1: it requires a complete re-design of their TMU specs to be DX10.1 compliant.

The only hardware requirement for DX10.1 is the number of registers available to the shader hardware: 32 4-component registers between shader stages, 32 vertex shader input registers, and 32 input assembler slots. That is the only explicit hardware requirement.

Proof? What do you mean by usable?

Try denormal handling on the alternative and get back to me ;)

If you're right about nVIDIA supporting closer to full IEEE DP standards, then it is nice, but not when it's painfully slow.

Obliterating its only real competitor doesn't normally fall under 'painfully slow'. AMD wasn't even trying to compete in the segment nV was going after, so I am not trying to bash them, but they aren't remotely close to IEEE standards. Link.

Look at BFG10K's post. nVIDIA cards lose quite a bit of performance as soon as one goes over 4xAA.

They used to, doesn't really happen under DX anymore. Honestly not sure about OpenGL atm.

That's also true. But no need to imply that I own nVIDIA stock! Just what an engineer thinks naturally, I guess.

Heh, sorry, didn't mean to imply you did, just pointing out that a smaller die size is good for the business side of things; as a consumer, performance per watt is far more important.

Actually Derek seems to think the only difference in requirements between DX10 and DX11 is the inclusion of fixed hardware tessellation units. He's pretty clear about DX11 being a strict superset of DX10.1 with regard to hardware requirements. You can read his DX11 write-up, it's in there somewhere.

Hitting the CS requirements will likely chew up significantly more die space than the tessellator.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
It's clear to see that everything is heading towards EPIC (VLIW)

Then I really don't understand why you aren't more interested in CUDA if you see EPIC as the next logical step in computing evolution. I'm not saying it is or it isn't, but CUDA is very much along the same lines as EPIC.

We've been through this, young Skywalker. OpenCL makes more sense. It also works just fine with EPIC (VLIW). Anyway, I think it's great we have 3 companies doing their thing. May the strongest survive.
 

Spike

Diamond Member
Aug 27, 2001
6,770
1
81
Well... I'm not sure I understood most of what was said here, and I am an engineer... ah well. When the new cards come out I will continue on with my tried and true method: whichever card plays my games the fastest at the res I care about and is the cheapest wins. It has worked so far; may it continue to!

BTW, I am excited about new features of the chips and wish I understood more about the technical side of things. It's just too much effort to fully understand them when I won't get paid any more for it :)
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
It's clear to see that everything is heading towards EPIC (VLIW)

We've been through this, young Skywalker. OpenCL makes more sense. It also works just fine with EPIC (VLIW).

OpenCL isn't anything remotely like EPIC; CUDA is very much like EPIC. I'm not supporting either side, but you are saying everything is heading towards an EPIC-style architecture and then stating that an EPIC-style architecture is inferior--which is it?
 

HOOfan 1

Platinum Member
Sep 2, 2007
2,337
15
81
Charlie reminds me so much of Eric Cartman...so full of rage, and just overall a completely immoral person. Not to mention a few other resemblances
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,005
126
Originally posted by: BenSkywalker

honestly not sure about OpenGL as no one seems to test it anymore.
I'm sure because I test it, and I will continue to test it. ;)

I'd love to see just how fast a 4890 is under those games with 8xMSAA; I'm fairly certain it would beat a GTX285.

As for DX10.1, this side-by-side video from ATi is interesting:

http://www.youtube.com/watch?v=a7-_Uj0o2aI

According to the video, Battleforge showed a 22% improvement, HAWX showed a 21% improvement, and Stormrise showed a 25% improvement (on average).

Also one of our editors tested HAWX under DX10.1 and he observed a 13% performance gain (72.3 FPS to 81.8 FPS), though he was only running at 1280x1024 so he may've been CPU limited.

Of course, some previous DX10.1 implementations were flaky and incorrect, so hopefully for ATi's sake these new titles are fine.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Originally posted by: HOOfan 1
Charlie reminds me so much of Eric Cartman...so full of rage, and just overall a completely immoral person. Not to mention a few other resemblances

yes, but unlike cartman we do NOT have to respect his authoritai.
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
Originally posted by: Just learning
Originally posted by: Astrallite
I'd be more interested in finding out what GT300 really is--will it be a single card?

I find it hard to believe it will be faster than a GTX295, which would make me wonder if nvidia will simply say sayonara--EOL to you mister 295!

This wouldn't surprise me at all. It always seems the next generation's top card is faster than the previous generation's X2 versions.

Can you list some examples of what you are talking about?
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
I'm sure because I test it, and I will continue to test it.

You said you tested with 182s; the 185 series are the ones that brought about the big increase in 8x AA performance.
 

lopri

Elite Member
Jul 27, 2002
13,316
690
126
I am most curious whether NV will adopt GDDR5 this time. (Heck, even GDDR4!)

This has been puzzling to me for a long time since G92 was introduced. Why the heck did NV improve the core (G80->G92) and then cripple it with the memory configuration? I know there was a cost saving from the G80->G92 transition, but I can't help but imagine how G92 might have performed had it been mated with fast GDDR5. And considering all the drama NV has been playing with its product updates and namings since then, I wonder if G92 was originally meant to be paired with GDDR5. It didn't happen, though, as we know.

Now, will NV stick to GDDR3 even for its next gen (GT300)? If so, is it because:

1. ATI has patents and/or royalties on GDDR5, and Mr. Huang would rather die than give a dime to ATI.
2. ATI has existing contracts with GDDR5 manufacturers, and there is not enough supply for NV's demand, which is likely quite a bit bigger than ATI's.
3. No conspiracy or market theory. Sticking to GDDR3 is by design.

Is there any rumor regarding this? What do you guys think about the imaginary G92+GDDR5?
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Originally posted by: lopri
I am most curious whether NV will adopt GDDR5 this time. (Heck, even GDDR4!)

This has been puzzling to me for a long time since G92 was introduced. Why the heck did NV improve the core (G80->G92) and then cripple it with the memory configuration? I know there was a cost saving from the G80->G92 transition, but I can't help but imagine how G92 might have performed had it been mated with fast GDDR5. And considering all the drama NV has been playing with its product updates and namings since then, I wonder if G92 was originally meant to be paired with GDDR5. It didn't happen, though, as we know.

Now, will NV stick to GDDR3 even for its next gen (GT300)? If so, is it because:

1. ATI has patents and/or royalties on GDDR5, and Mr. Huang would rather die than give a dime to ATI.
2. ATI has existing contracts with GDDR5 manufacturers, and there is not enough supply for NV's demand, which is likely quite a bit bigger than ATI's.
3. No conspiracy or market theory. Sticking to GDDR3 is by design.

Is there any rumor regarding this? What do you guys think about the imaginary G92+GDDR5?

IMO there is no way that NV can stick with GDDR3 for next gen, and will have to move to GDDR5 no matter what.
GDDR4 is out of the market pretty much, since GDDR5 is much better. GDDR4 kind of never went anywhere.
If NV want to stay competitive with ATI they will need to increase memory bandwidth (since this is what always happens). Currently ATI has 256-bit memory buses but has in the past had 512-bit.
If GDDR5 doesn't increase in speeds enough for ATI they can increase their bus width quite easily I would expect (since they've been there before) to gain bandwidth.
NV are already at 512-bit for their high end cards and (arguably) there's no real way to go that far above 512-bit without a horribly expensive card, so going GDDR5 is the easy way to gain bandwidth.

I can't really see NV NOT going GDDR5 with their next gen cards, especially given that prices will be a lot lower and availability a lot higher.
Since NV released the GTX2x0's before ATI released their HD4870, and wanted good availability, it pretty much wasn't an option back then to use GDDR5 since it wasn't easily available enough, but one year on, the market has probably changed, meaning it'll be easy to make use of GDDR5. NV couldn't really take the gamble on limited supplies etc. a year ago when they were launching their new cards.
GDDR5 was barely marketable when the HD4870 came out.
http://www.anandtech.com/video/showdoc.aspx?i=3469&p=8 A bit of background on GDDR5 circa GTX/4800 launch time. ATI needed it, NV's design choice meant they didn't.

There's also no way that G92 could have hoped to use GDDR5 given its launch date, and maybe they could have added it with a respin, but I doubt that was in NV's mind if GDDR5 wasn't widely available when they were planning and doing the die shrink.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Originally posted by: OCguy
Originally posted by: Just learning
Originally posted by: Astrallite
I'd be more interested in finding out what GT300 really is--will it be a single card?

I find it hard to believe it will be faster than a GTX295, which would make me wonder if nvidia will simply say sayonara--EOL to you mister 295!

This wouldn't surprise me at all. It always seems the next generation's top card is faster than the previous generation's X2 versions.

Can you list some examples of what you are talking about?

280GTX>9800x2 (285GTX compared to 9800GX2 would be even farther apart)

http://www.anandtech.com/video/showdoc.aspx?i=3517&p=8

I don't have the next comparison offhand, but in the past a single 8800 GTX definitely beat the 7950GX2.



 

alcoholbob

Diamond Member
May 24, 2005
6,390
469
126
Originally posted by: Just learning
Originally posted by: OCguy
Originally posted by: Just learning
Originally posted by: Astrallite
I'd be more interested in finding out what GT300 really is--will it be a single card?

I find it hard to believe it will be faster than a GTX295, which would make me wonder if nvidia will simply say sayonara--EOL to you mister 295!

This wouldn't surprise me at all. It always seems the next generation's top card is faster than the previous generation's X2 versions.

Can you list some examples of what you are talking about?

280GTX>9800x2 (285GTX compared to 9800GX2 would be even farther apart)

http://www.anandtech.com/video/showdoc.aspx?i=3517&p=8

I don't have the next comparison offhand, but in the past a single 8800 GTX definitely beat the 7950GX2.

But G80 is much larger than G70, and GT200 is larger than G80 still--and R700 is smaller than all of them. It's certainly POSSIBLE for GT300 to be faster than the GTX295, but that would require nvidia consciously deciding on an architecture that's going to lose, lose, lose money for the performance crown.

Now they could certainly do that, but it would require another round of layoffs to pull that off without going under.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Originally posted by: Astrallite
Originally posted by: Just learning
Originally posted by: OCguy
Originally posted by: Just learning
Originally posted by: Astrallite
I'd be more interested in finding out what GT300 really is--will it be a single card?

I find it hard to believe it will be faster than a GTX295, which would make me wonder if nvidia will simply say sayonara--EOL to you mister 295!

This wouldn't surprise me at all. It always seems the next generation's top card is faster than the previous generation's X2 versions.

Can you list some examples of what you are talking about?

280GTX>9800x2 (285GTX compared to 9800GX2 would be even farther apart)

http://www.anandtech.com/video/showdoc.aspx?i=3517&p=8

I don't have the next comparison offhand, but in the past a single 8800 GTX definitely beat the 7950GX2.

But G80 is much larger than G70, and GT200 is larger than G80 still--and R700 is smaller than all of them. It's certainly POSSIBLE for GT300 to be faster than the GTX295, but that would require nvidia consciously deciding on an architecture that's going to lose, lose, lose money for the performance crown.

Now they could certainly do that, but it would require another round of layoffs to pull that off without going under.

But what if GT300 goes to DDR5 memory?

Wouldn't that let them reduce the size of the memory interface (or whatever it is called) on the die from 512-bit to 256-bit or 384-bit?

So it is possible that a GT300 single GPU could beat the GTX 295. (40nm plus DDR5 would free up a lot of space on the die for additional graphical computing power.)
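Rough back-of-the-envelope math to sanity-check that (the clocks are illustrative round numbers, and the 384-bit GT300 config is purely hypothetical, not an official spec):

# Peak bandwidth (GB/s) = (bus width in bits / 8) * per-pin data rate (GT/s)
def bandwidth_gbps(bus_width_bits, data_rate_gtps):
    return bus_width_bits / 8 * data_rate_gtps

# Illustrative configurations only; none of these are confirmed GT300 numbers.
for name, width, rate in [
    ("512-bit GDDR3 @ ~2.5 GT/s (GTX 285-class)", 512, 2.5),
    ("256-bit GDDR5 @ ~3.6 GT/s (HD 4870-class)", 256, 3.6),
    ("384-bit GDDR5 @ ~3.6 GT/s (hypothetical GT300)", 384, 3.6),
]:
    print(f"{name}: {bandwidth_gbps(width, rate):.1f} GB/s")  # ~160.0, 115.2, 172.8

By that math a 256-bit or 384-bit GDDR5 interface lands in the same ballpark as (or ahead of) 512-bit GDDR3, so the pad and memory-controller area saved could, in principle, go to more shading power.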
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Originally posted by: Lonyo
Originally posted by: lopri
I am most curious whether NV will adopt GDDR5 this time. (Heck, even GDDR4!)

This has been puzzling to me for a long time since G92 was introduced. Why the heck did NV improve the core (G80->G92) and then cripple it with the memory configuration? I know there was a cost saving from the G80->G92 transition, but I can't help but imagine how G92 might have performed had it been mated with fast GDDR5. And considering all the drama NV has been playing with its product updates and namings since then, I wonder if G92 was originally meant to be paired with GDDR5. It didn't happen, though, as we know.

Now, will NV stick to GDDR3 even for its next gen (GT300)? If so, is it because:

1. ATI has patents and/or royalties on GDDR5, and Mr. Huang would rather die than give a dime to ATI.
2. ATI has existing contracts with GDDR5 manufacturers, and there is not enough supply for NV's demand, which is likely quite a bit bigger than ATI's.
3. No conspiracy or market theory. Sticking to GDDR3 is by design.

Is there any rumor regarding this? What do you guys think about the imaginary G92+GDDR5?

IMO there is no way that NV can stick with GDDR3 for next gen, and will have to move to GDDR5 no matter what.
GDDR4 is out of the market pretty much, since GDDR5 is much better. GDDR4 kind of never went anywhere.
If NV want to stay competitive with ATI they will need to increase memory bandwidth (since this is what always happens). Currently ATI has 256-bit memory buses but has in the past had 512-bit.
If GDDR5 doesn't increase in speeds enough for ATI they can increase their bus width quite easily I would expect (since they've been there before) to gain bandwidth.
NV are already at 512-bit for their high end cards and (arguably) there's no real way to go that far above 512-bit without a horribly expensive card, so going GDDR5 is the easy way to gain bandwidth.

I can't really see NV NOT going GDDR5 with their next gen cards, especially given that prices will be a lot lower and availability a lot higher.
Since NV released the GTX2x0's before ATI released their HD4870, and wanted good availability, it pretty much wasn't an option back then to use GDDR5 since it wasn't easily available enough, but one year on, the market has probably changed, meaning it'll be easy to make use of GDDR5. NV couldn't really take the gamble on limited supplies etc. a year ago when they were launching their new cards.
GDDR5 was barely marketable when the HD4870 came out.
http://www.anandtech.com/video/showdoc.aspx?i=3469&p=8 A bit of background on GDDR5 circa GTX/4800 launch time. ATI needed it, NV's design choice meant they didn't.

There's also no way that G92 could have hoped to use GDDR5 given its launch date, and maybe they could have added it with a respin, but I doubt that was in NV's mind if GDDR5 wasn't widely available when they were planning and doing the die shrink.

ATI is claiming to use 2nd Generation DDR5 on the upcoming HD4770.

If the newer forms of DDR5 start clocking a lot better (with better development), reducing the memory interface could free up more room for processing power.

If current DDR3 is clocking 2000MHz, shouldn't DDR5 theoretically be capable of way more than the current 3600MHz or whatever it is?
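For what it's worth, the 2000MHz GDDR3 figure is already the double-pumped effective rate, while GDDR5 moves roughly four transfers per command clock, which is where the 3600MHz number comes from. A quick sketch with rough, illustrative clocks (not official specs):

# Effective per-pin rate = base clock * transfers per clock.
# GDDR3 is double-pumped; GDDR5 is effectively quad-pumped vs. its command clock.
def effective_mtps(clock_mhz, transfers_per_clock):
    return clock_mhz * transfers_per_clock

print(effective_mtps(1000, 2))  # GDDR3 ~1000 MHz -> 2000 MT/s ("2000MHz effective")
print(effective_mtps(900, 4))   # GDDR5 ~900 MHz  -> 3600 MT/s (HD 4870-style)
print(effective_mtps(1200, 4))  # GDDR5 ~1200 MHz -> 4800 MT/s (overclocked 4890-style)

So the headroom comes from both the signalling scheme and whatever higher base clocks later GDDR5 revisions manage to hit.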

 

BFG10K

Lifer
Aug 14, 2000
22,709
3,005
126
Originally posted by: BenSkywalker

You said you tested with 182s; the 185 series are the ones that brought about the big increase in 8x AA performance.
What I meant is that I continue to test OpenGL games. When the 185 series becomes official, I'll be sure to test it.

I haven't seen any evidence suggesting 8xMSAA has improved in OpenGL with the 185 drivers; do you have any links?

Also anecdotal evidence suggests performance in Far Cry 2 with 8xAA is lower: http://forums.guru3d.com/showthread.php?p=3095187
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Also anecdotal evidence suggests performance in Far Cry 2 with 8xAA is lower

This site doesn't agree. Across the board 8x AA performance is up, in one case at least way up. Again, I haven't seen anything about OpenGL performance, but this driver revision seems to fix titles where there was a sharp performance drop-off under DirectX; now the performance impact is in line with what ATi parts see.
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: Just learning
ATI is claiming to use 2nd Generation DDR5 on the upcoming HD4770.

If the newer forms of DDR5 start clocking a lot better (with better development), reducing the memory interface could free up more room for processing power.

Is that from the quote on that ATI promotional slide?

http://img701.photo.wangyou.co...20090414013935_8_1.jpg

If so, I don't think they meant "2nd Generation DDR5" as in it's a new version of DDR5. I read it to mean that this is their 2nd generation of card to use DDR5.

 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: BenSkywalker
Also anecdotal evidence suggests performance in Far Cry 2 with 8xAA is lower

This site doesn't agree. Across the board 8x AA performance is up, in one case at least way up. Again, I haven't seen anything about OpenGL performance, but this driver revision seems to fix titles where there was a sharp performance drop-off under DirectX; now the performance impact is in line with what ATi parts see.

Bit-Tech also showed a significant boost in performance with the 185s over 182s, particularly with AA and 2560, although they didn't test that many titles with 8xAA. Fallout 3 actually dropped off a bit in performance.

I ran a few benches in Crysis Warhead with 4xAA and 8xAA and there was very little performance drop, <1 FPS. But it's very difficult to tell any difference between 4xAA and 8xAA in the screen caps at 1920. FBWT's tool doesn't scale down resolutions properly to better show aliasing, so I tried HOC's bench tool; however, with the 185s and AA applied, .cfg settings don't stick, so Motion Blur can't be turned off, making accurate screen caps and comparisons impossible.

The 185s are odd though in a few other ways. For me at least in Vista 64, dxdiag shows video memory as only 494MB, down from ~3GB previously. It could just be a reduction in its aperture size/virtual memory footprint (similar to the July 07 hot fix), but it's still a bit disconcerting. There's also an error message on install for a lot of people about nvcp.dll, although it doesn't impact use of the CP for me and most others.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Lonyo
Originally posted by: lopri
I am most curious whether NV will adopt GDDR5 this time. (Heck, even GDDR4!)

This has been puzzling to me for a long time since G92 was introduced. Why the heck did NV improve the core (G80->G92) and then cripple it with the memory configuration? I know there was a cost saving from the G80->G92 transition, but I can't help but imagine how G92 might have performed had it been mated with fast GDDR5. And considering all the drama NV has been playing with its product updates and namings since then, I wonder if G92 was originally meant to be paired with GDDR5. It didn't happen, though, as we know.

Now, will NV stick to GDDR3 even for its next gen (GT300)? If so, is it because:

1. ATI has patents and/or royalties on GDDR5, and Mr. Huang would rather die than give a dime to ATI.
2. ATI has existing contracts with GDDR5 manufacturers, and there is not enough supply for NV's demand, which is likely quite a bit bigger than ATI's.
3. No conspiracy or market theory. Sticking to GDDR3 is by design.

Is there any rumor regarding this? What do you guys think about the imaginary G92+GDDR5?

IMO there is no way that NV can stick with GDDR3 for next gen, and will have to move to GDDR5 no matter what.
GDDR4 is out of the market pretty much, since GDDR5 is much better. GDDR4 kind of never went anywhere.
If NV want to stay competitive with ATI they will need to increase memory bandwidth (since this is what always happens). Currently ATI has 256-bit memory buses but has in the past had 512-bit.
If GDDR5 doesn't increase in speeds enough for ATI they can increase their bus width quite easily I would expect (since they've been there before) to gain bandwidth.
NV are already at 512-bit for their high end cards and (arguably) there's no real way to go that far above 512-bit without a horribly expensive card, so going GDDR5 is the easy way to gain bandwidth.

I can't really see NV NOT going GDDR5 with their next gen cards, especially given that prices will be a lot lower and availability a lot higher.
Since NV released the GTX2x0's before ATI released their HD4870, and wanted good availability, it pretty much wasn't an option back then to use GDDR5 since it wasn't easily available enough, but one year on, the market has probably changed, meaning it'll be easy to make use of GDDR5. NV couldn't really take the gamble on limited supplies etc. a year ago when they were launching their new cards.
GDDR5 was barely marketable when the HD4870 came out.
http://www.anandtech.com/video/showdoc.aspx?i=3469&p=8 A bit of background on GDDR5 circa GTX/4800 launch time. ATI needed it, NV's design choice meant they didn't.

There's also no way that G92 could have hoped to use GDDR5 given its launch date, and maybe they could have added it with a respin, but I doubt that was in NV's mind if GDDR5 wasn't widely available when they were planning and doing the die shrink.
Early reports did have Nvidia transitioning to GDDR5 for their 40nm parts, GT300 included. The cancelled GT212 (a 40nm incremental GT200 update with more SPs) was supposed to actually move to GDDR5 as well. The issue for Nvidia in the past has been that their ROP clusters are tied to their memory controllers. One would think a simple reorganization from 4 ROPs for each 64-bit mem controller to 8 ROPs for each 64-bit mem controller would adequately address the issue, but it's been the same way since G80 and is the reason G92 ended up being crippled with respect to G80 in many aspects (ROPs, VRAM, bandwidth).

Personally I think Nvidia's decision to stay with GDDR3 for this last generation of GT200 parts was a good idea for many of the same reasons you mentioned. GDDR5 undoubtedly hurt the 4870's launch availability relative to the GDDR3 4850 and significantly delayed volume production of their 1GB version, especially given the 4870X2 required 2GB of GDDR5. From a performance standpoint, GDDR3 has actually continued scaling well beyond expected limits, to the point that Nvidia parts on a larger bus actually have more bandwidth than what GDDR5 provides. GTX 285s, for example, are hitting close to 3GHz effective overclocked on a 512-bit bus, whereas 4890s are capping out around ~1200MHz (4,800 effective) overclocked on a 256-bit bus. Normalizing the GTX 285 to a 256-bit bus, that's ~5.8GHz vs 4.8GHz effective.
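To spell out that normalization (using the rough overclocked rates quoted above, not official specs):

# Equivalent data rate a 256-bit bus would need to match each card's bandwidth:
# equivalent rate = data rate * (bus width / 256).
def equivalent_rate_on_256bit(data_rate_gtps, bus_width_bits):
    return data_rate_gtps * (bus_width_bits / 256)

print(equivalent_rate_on_256bit(2.9, 512))  # GTX 285 OC, 512-bit GDDR3 -> ~5.8 GT/s equivalent
print(equivalent_rate_on_256bit(4.8, 256))  # HD 4890 OC, 256-bit GDDR5 ->  4.8 GT/s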

Personally I'm hoping Nvidia retains the wide bus and adopts GDDR5; although that seems unlikely, it would certainly open up headroom so that bandwidth isn't a concern for that generation of parts. At the very least, Nvidia should really decouple their ROPs from memory controllers to allow for design flexibility on future parts.