NVIDIA confirms Next-Gen close to 1TFlop in 4Q07

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Link

In recent analyst conferences publicly webcast on NVIDIA's website, Michael Hara (VP of Investor Relations) has claimed that their next-generation chip, also known as G92 in the rumour mill, will deliver close to one teraflop of performance. In a separate answer to an analyst's question, he also noted that they have no intention of diverging from the cycle they adopted with the G80: have the high-end part ready at the end of the year, then release the lower-end derivatives in the spring.

Assuming that NVIDIA manages to hit this aggressive release schedule, the chip will compete not only with any potential R6xx refresh at the beginning of its lifetime, but eventually also with R700, since it seems unlikely NVIDIA will refresh again before the second half of 2008 unless they go for an optical shrink from 65nm to 55nm. It also remains to be seen how aggressive ATI will be on the process front this time around.

There were also a number of other highlights during the conference, including a major emphasis on GPGPU (aka 'GPU Computing') and a short mention of Intel's upcoming GPU efforts through their Larrabee project. Michael Hara seemed far from certain about Intel's exact strategy there, although he did mention that Intel may be more interested in the GPGPU market than the gaming one. This is something we have already said in the past.

And finally, he mentioned that although he does not believe R600 will have any impact on their G80 sales, RV610 and RV630 are much more competitive parts that are likely to gain traction in the marketplace. He argued that he was not convinced 65nm gives AMD a real cost advantage because of the yield curve, and seemed confident that their own 65nm mainstream parts will be superior. We can't help but wonder how much that matters when you release them 9 months later, though. It will also be interesting to see who's first to 55nm, and how good of a half-node it will be.

65nm parts are on the way, including this G92 (99% probably 65nm), which seems to be the refresh of G80.

Note that without the MUL, the G80 (Ultra) has 384 GFLOPS.
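
For anyone who wants to verify that figure, a minimal Python sketch (the 2-FLOPs-per-MADD convention and the 1.5GHz Ultra shader clock are the usual assumptions, not anything NVIDIA has stated):

# Shader FLOPS = SPs x shader clock x FLOPs per SP per clock.
sps = 128                # G80 has 128 stream processors
shader_clock_ghz = 1.5   # 8800 Ultra shader clock
madd_flops = 2           # a multiply-add counts as two operations
print(sps * shader_clock_ghz * madd_flops)  # 384.0 GFLOPS, the figure above
print(sps * shader_clock_ghz * 3)           # 576.0 GFLOPS if the extra MUL is counted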
 

Matt2

Diamond Member
Jul 28, 2001
4,762
0
0
Nice. A single card doing a teraflop is impressive, especially since R600 can only achieve a teraflop in CrossFire, even though R600 has 60% more theoretical processing power than G80 (rough math below).

I also like the fact that they are going straight to 65nm for their high end part and not messing with 80nm.

This card should be a monster just in time for true DX10 games, although I am a little upset that we won't see a G80 refresh with a die shrink to 80nm.
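
A rough check of the CrossFire half of that claim (the 320 SPs, 742MHz clock, and MADD-only accounting are my assumed inputs for the HD 2900 XT):

# One HD 2900 XT, counting a MADD as 2 FLOPs per SP per clock:
r600_sps = 320
r600_clock_ghz = 0.742   # shaders run at core clock on R600
one_card = r600_sps * r600_clock_ghz * 2
print(one_card)          # ~475 GFLOPS for a single card
print(one_card * 2)      # ~950 GFLOPS in CrossFire, i.e. roughly a teraflop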
 

baronzemo78

Member
Sep 8, 2006
29
0
0
I really hope these cards are shorter than the 8800GTX. I can't get one because it won't fit in my Tsunami Dream case. :(
 

rise

Diamond Member
Dec 13, 2004
9,116
46
91
Repost, you didn't see my post 14 down :p

Good stuff, though I was hoping it might launch a bit earlier, as I'd like to get another GTS and then step up to this.
 

Zenoth

Diamond Member
Jan 29, 2005
5,202
216
106
Well, the further this goes, the lengthier I imagine the cards will be, not to mention hotter. Where did the whole miniaturization concept go? I don't know. Perhaps it was more of an engineering utopia than anything else, apparently...
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
I'll probably hold off getting an 8800 GTX and wait for one of these instead.

20/20 hindsight says I should've purchased an 8800 GTX back in November when I got the 8800 GTS, but it's probably not worth doing now.

Well, the further this goes, the lengthier I imagine the cards will be, not to mention hotter.
Hopefully a die shrink will offset some/all of this. I too prefer cards to be no longer than an 8800 GTS/7900 GTX.
 

Stoneburner

Diamond Member
May 29, 2003
3,491
0
76
1 teraflop? This is starting to sound like Nvidia's NV30 moment. TeraFLOP indeed.

Since the speculation has already started, I figured I'd be the first to start decrying :)
 

TanisHalfElven

Diamond Member
Jun 29, 2001
3,512
0
76
Originally posted by: BFG10K
I'll probably hold off getting an 8800 GTX and wait for one of these instead.

20/20 hindsight says I should've purchased an 8800 GTX back in November when I got the 8800 GTS, but it's probably not worth doing now.

Well, the further this goes, the lengthier I imagine the cards will be, not to mention hotter.
Hopefully a die shrink will offset some/all of this. I too prefer cards to be no longer than an 8800 GTS/7900 GTX.

I doubt a die shrink will help much. It's the high memory bandwidth that's making cards longer and larger: more traces, more chips.
 

Lord Banshee

Golden Member
Sep 8, 2004
1,495
0
0
Ooh, looks like it might be my next card next summer when I finally do a full upgrade to a new computer. For now I think I will get an X1950XT 512MB, as I can mod it to a FireGL 7350 :)
 

Matt2

Diamond Member
Jul 28, 2001
4,762
0
0
You guys are forgetting that G80 is a 90nm chip while the HD2900XT is an 80nm chip.

Despite R600 having a half-node process advantage, G80 still manages to consume less power and run cooler.

If this link is true, then Nvidia's next high end chip is going to use 65nm, which should go a looong way in reducing power consumption and heat.

Just look what a half node shrink of G70 did for power consumption and heat.
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: BFG10K
Memory chips can be die shrunk too. :p

That doesn't make the card physically smaller; it's still the same number of traces.

It will, however, consume less power, requiring fewer caps.

G92 is very likely to be a tweaked and die-shrunk G80.
 

Matt2

Diamond Member
Jul 28, 2001
4,762
0
0
Originally posted by: Acanthus
Originally posted by: BFG10K
Memory chips can be die shrunk too. :p

That doesn't make the card physically smaller; it's still the same number of traces.

It will, however, consume less power, requiring fewer caps.

G92 is very likely to be a tweaked and die-shrunk G80.

I think it'll be more than that. It's got to have more shaders, or it's going to take a massive clock speed increase to get 3x the processing power of an 8800 Ultra.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
In GPU computing, I don't think they count the MULs. So basically, to reach a TFLOP this G92 could have something along the lines of 192 SPs clocked at 2.5GHz. Yes, 2.5GHz; it's not impossible given the 90nm to 65nm jump.

However, I'm guessing the ALU:TEX ratio might change quite a bit. There's also speculation of jumping back to a 256-bit bus while using fast GDDR4 memory, e.g. GDDR4 @ 2800MHz (0.714ns) gives 89.6GB/s of bandwidth. This makes the die size a bit smaller while the complexity of the PCB is also reduced. We know for a fact that bandwidth isn't an important factor (right now) in gaming performance, as seen with multiple products, e.g. X1950XTX vs X1900XTX.

This G92 will for sure go up against R650 or whatever R6x0 derivative replaces R600 later this year.
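
A quick sanity check on those two numbers (my own arithmetic, using the same MADD-only accounting as in the opening post):

# 192 SPs at 2.5GHz, MADD only (2 FLOPs per SP per clock):
print(192 * 2.5 * 2)            # 960.0 GFLOPS, i.e. "close to a teraflop"

# 256-bit bus with GDDR4 at an effective 2800MHz data rate:
bus_bytes = 256 / 8             # 32 bytes transferred per clock edge
print(bus_bytes * 2800 / 1000)  # 89.6 GB/s, the figure quoted above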
 

lopri

Elite Member
Jul 27, 2002
13,310
687
126
I wish we'd see this as a fall refresh part... instead of a 6-month cycle we now have a 1-year cycle. Maybe I should have kept my GTX...
 

CrystalBay

Platinum Member
Apr 2, 2002
2,175
1
0
I dunno; if it has more shaders and more clock speed, it's going to have more transistors + more power usage and heat.
 

Matt2

Diamond Member
Jul 28, 2001
4,762
0
0
Originally posted by: CrystalBay
I dunno; if it has more shaders and more clock speed, it's going to have more transistors + more power usage and heat.

+ a full node shrink, which will offset any increased power consumption.
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: Matt2
Originally posted by: Acanthus
Originally posted by: BFG10K
Memory chips can be die shrunk too. :p

That doesn't make the card physically smaller; it's still the same number of traces.

It will, however, consume less power, requiring fewer caps.

G92 is very likely to be a tweaked and die-shrunk G80.

I think it'll be more than that. It's got to have more shaders, or it's going to take a massive clock speed increase to get 3x the processing power of an 8800 Ultra.

You can add more shaders while keeping the same overall architecture...

By "tweaked" I mean GeForce to GeForce 2, GeForce 3 to Xbox to GeForce 4, GeForce 6000 series to GeForce 7000 series... etc.

All of those had the same overall architecture as their preceding chip, just die-shrunk and improved.

I don't think NVIDIA dumped $500M and 5 years into the architecture to abandon it after 1 year ;)
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: Matt2
Originally posted by: CrystalBay
I dunno; if it has more shaders and more clock speed, it's going to have more transistors + more power usage and heat.

+ a full one and one half node shrink, which will offset any increased power consumption.

Isn't 65nm one full node shrink from 90nm?

80nm is the half node.
 

Matt2

Diamond Member
Jul 28, 2001
4,762
0
0
Originally posted by: Acanthus
Originally posted by: Matt2
Originally posted by: Acanthus
Originally posted by: BFG10K
Memory chips can be die shrunk too. :p

That doesn't make the card physically smaller; it's still the same number of traces.

It will, however, consume less power, requiring fewer caps.

G92 is very likely to be a tweaked and die-shrunk G80.

I think it'll be more than that. It's got to have more shaders, or it's going to take a massive clock speed increase to get 3x the processing power of an 8800 Ultra.

You can add more shaders while keeping the same overall architecture...

By "tweaked" I mean GeForce to GeForce 2, GeForce 3 to Xbox to GeForce 4, GeForce 6000 series to GeForce 7000 series... etc.

All of those had the same overall architecture as their preceding chip, just die-shrunk and improved.

I don't think NVIDIA dumped $500M and 5 years into the architecture to abandon it after 1 year ;)

OK, that makes tons more sense. :)

They'll definitely keep the same architecture, but it won't be a simple clock speed boost. That's what I thought you were saying.

And yes, 90nm -> 65nm is a full node. I will edit accordingly.
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: Cookie Monster
In GPU computing, I don't think they count the MULs. So basically, to reach a TFLOP this G92 could have something along the lines of 192 SPs clocked at 2.5GHz. Yes, 2.5GHz; it's not impossible given the 90nm to 65nm jump.

However, I'm guessing the ALU:TEX ratio might change quite a bit. There's also speculation of jumping back to a 256-bit bus while using fast GDDR4 memory, e.g. GDDR4 @ 2800MHz (0.714ns) gives 89.6GB/s of bandwidth. This makes the die size a bit smaller while the complexity of the PCB is also reduced. We know for a fact that bandwidth isn't an important factor (right now) in gaming performance, as seen with multiple products, e.g. X1950XTX vs X1900XTX.

This G92 will for sure go up against R650 or whatever R6x0 derivative replaces R600 later this year.

I think it would be something like the 6800 to 7800 transition, the primary difference being that this is assumed to be a full node shrink, with Nvidia actually taking the risk of building a high-end SKU on 65nm this early in the game. That would allow some to a lot of savings, depending on how much logic Nvidia adds.

More shader units, more clock speed, die size only marginally reduced because of the additional core logic from the increased number of units, PV2 added to the high end, and the units themselves improved. Keeping the 384-bit interface, though power consumption should be reduced some, maybe down to 90-100W, and a move to GDDR4 would help reduce it further. The trend is to keep increasing bandwidth rather than ever decrease it on the high end; the 384-bit interface should remain there, but NV wouldn't have to increase it with the 9 series.

 

thilanliyan

Lifer
Jun 21, 2005
12,042
2,257
126
Originally posted by: Matt2
You guys are forgetting that G80 is a 90nm chip while the HD2900XT is an 80nm chip.

Despite R600 having a half-node process advantage, G80 still manages to consume less power and run cooler.

If this link is true, then Nvidia's next high end chip is going to use 65nm, which should go a looong way in reducing power consumption and heat.

Just look what a half node shrink of G70 did for power consumption and heat.


A smaller process doesn't necessarily mean cooler running (i.e. Intel Prescott vs. Northwood) if leakage is a problem (which R600 apparently suffers from). Also, comparing R600 to G80, R600 has more transistors, doesn't it? And R600 has the display chip (I/O chip?) integrated while G80 has it separate. Those two factors coupled with leakage would lead to the hotter-running R600.

I don't think moving to 65nm will give NVidia a huge advantage in power consumption and heat output unless they radically change the architecture.

Despite what people may think, G80 cards do run hot, but the coolers are fairly good so you don't notice it very much. My card goes up to 85 Celsius (26-27C ambient, closed case) with the current overclock but without volt mods. Under load the cooling fins are blazing hot, so I am coming up against the limit of the stock cooling. I even swapped the stock cooling for a 120mm fan blowing right onto the cooling fins and heatpipes, but the fins were still blazing hot.