The New and Improved "G80 Stuff"


coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: keysplayr2003

Do you know how many transistors reside in the current 7900's? Under 300 million, on 90nm. Now to me, 700 million on 90nm sounds impossibly high, especially with a base core clock of 575MHz. The 7900GTX is only 75MHz faster on the core with fewer than 300 million transistors on .09u, and look at the huge cooler needed for it. They must have done something really interesting with the process to allow something this large, clocked that high. Let's not even talk about the shaders clocked somewhere in the stratosphere.

ATI did 384 million transistors on a 90nm process, and at 650MHz.

700 million is quite a leap, but as I have reiterated in the past, graphics functionality is very expensive to implement when you're talking about additional features. It will be very interesting to see how this die is arranged. Suffice it to say the total die size of this product will be in the 500mm² range, using G71's transistor density as a baseline.
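As a rough check on that 500mm² estimate, here is a minimal Python sketch of the density math. The G71 baseline figures used (roughly 278 million transistors on a ~196mm² die) are approximate public numbers, not figures from this thread.

```python
# Die-size estimate from transistor density, per coldpower27's reasoning.
# G71 baseline is assumed: ~278M transistors on a ~196 mm^2 die at 90nm.
G71_TRANSISTORS_M = 278          # millions of transistors
G71_DIE_AREA_MM2 = 196.0         # mm^2

density = G71_TRANSISTORS_M / G71_DIE_AREA_MM2   # ~1.42M transistors per mm^2

g80_transistors_m = 700          # the rumored G80 count
estimated_area_mm2 = g80_transistors_m / density
print(f"Estimated G80 die area: {estimated_area_mm2:.0f} mm^2")  # ~493 mm^2
```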



 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
The only possible way I can see 700m transistors and two cores being true is if most of the extra transistors are cache, and possibly an embedded framebuffer like Xenos has.

Given that nvidia typically has not copied ATi in the past, and Xenos itself isn't SM4/DX10 compliant, I'm guessing this isn't terribly likely and that the whole 700m/two-core rumor is merely a red herring to gull the gullible.
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: Cookie Monster
Originally posted by: Acanthus
Originally posted by: Cookie Monster
Originally posted by: Acanthus
Originally posted by: keysplayr2003
Originally posted by: nitromullet
Originally posted by: Gstanfor
That's only vendor overclocking being banned, if I'm reading that correctly (is there a correct way to read an INQ article?). I imagine end users will still be able to overclock.

Oh yeah, I'm quite sure that NVIDIA will continue to allow end-user OCing via coolbits. I'm also not really all that concerned about vendor OCs, but it will be interesting to see how the vendors try to distinguish themselves when they are all bound to stock clocks. I also think it is probably somewhat indicative of how hard NV is pushing these cards at stock, though. I'm guessing they won't be the best OCers, but we'll see.


I'm just amazed they're getting 575MHz with 700 million transistors at 90nm. That is probably pushing the envelope right there, hence nvidia's apprehension about OEM factory OCs.

It's not a single chip.

It WILL be a single chip. No doubt about that. I say this because NV40 had 222 million transistors on 130nm. 700 million on 90nm doesn't sound impossible.

So you are saying they have different clockspeeds on different parts in the same ginormous chip?

k.


It's called clock domains, where different parts of the GPU run at different speeds. nVIDIA has done this since NV40.

We will see; I'm not going to debate this bullshit.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Originally posted by: Acanthus
Originally posted by: Cookie Monster
Originally posted by: Acanthus
Originally posted by: Cookie Monster
Originally posted by: Acanthus
Originally posted by: keysplayr2003
Originally posted by: nitromullet
Originally posted by: Gstanfor
That's only vendor overclocking being banned, if I'm reading that correctly (is there a correct way to read an INQ article?). I imagine end users will still be able to overclock.

Oh yeah, I'm quite sure that NVIDIA will continue to allow end-user OCing via coolbits. I'm also not really all that concerned about vendor OCs, but it will be interesting to see how the vendors try to distinguish themselves when they are all bound to stock clocks. I also think it is probably somewhat indicative of how hard NV is pushing these cards at stock, though. I'm guessing they won't be the best OCers, but we'll see.


I'm just amazed they're getting 575MHz with 700 million transistors at 90nm. That is probably pushing the envelope right there, hence nvidia's apprehension about OEM factory OCs.

It's not a single chip.

It WILL be a single chip. No doubt about that. I say this because NV40 had 222 million transistors on 130nm. 700 million on 90nm doesn't sound impossible.

So you are saying they have different clockspeeds on different parts in the same ginormous chip?

k.


It's called clock domains, where different parts of the GPU run at different speeds. nVIDIA has done this since NV40.

We will see; I'm not going to debate this bullshit.

Link1

Link2

Is that better now?
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: Cookie Monster
Originally posted by: Acanthus
Originally posted by: Cookie Monster
Originally posted by: Acanthus
Originally posted by: Cookie Monster
Originally posted by: Acanthus
Originally posted by: keysplayr2003
Originally posted by: nitromullet
Originally posted by: Gstanfor
That's only vendor overclocking being banned, if I'm reading that correctly (is there a correct way to read an INQ article?). I imagine end users will still be able to overclock.

Oh yeah, I'm quite sure that NVIDIA will continue to allow end-user OCing via coolbits. I'm also not really all that concerned about vendor OCs, but it will be interesting to see how the vendors try to distinguish themselves when they are all bound to stock clocks. I also think it is probably somewhat indicative of how hard NV is pushing these cards at stock, though. I'm guessing they won't be the best OCers, but we'll see.


I'm just amazed they're getting 575MHz with 700 million transistors at 90nm. That is probably pushing the envelope right there, hence nvidia's apprehension about OEM factory OCs.

It's not a single chip.

It WILL be a single chip. No doubt about that. I say this because NV40 had 222 million transistors on 130nm. 700 million on 90nm doesn't sound impossible.

So you are saying they have different clockspeeds on different parts in the same ginormous chip?

k.


It's called clock domains, where different parts of the GPU run at different speeds. nVIDIA has done this since NV40.

We will see; I'm not going to debate this bullshit.

Link1

Link2

Is that better now?

Uhh, no, because all the GPU clocks match in those shots. 6% is a hell of a long way from 300%.

If it's 700m transistors on 90nm on a single chip, and any part of that chip is over 1GHz, I will make you a full-page apology post.

As for now, what you are saying is so retarded it's incomprehensible to this computer engineering student.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Deal. A full-page apology post. I'm fine with that.

Here, let me clear it up.

Multiple Clock Domains (MCD). It's an architectural technique where separate functional units, or sets of units, on a chip are operated at different clocks (frequencies). This technique improves efficiency when different portions of the chip have imbalanced workloads. For example, a GPU architect may want to slow down the vertex engine to save power when the processor is fill-bound. Again, the decoupling inherent in the graphics pipeline is advantageous to the architect here, making this technique far easier to implement on a GPU than its equivalent on a general-purpose processor. In fact, existing GPUs already make heavy use of non-dynamic MCD, so little modification should be necessary to take advantage of these clocks for thermal management.
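For illustration, here is a minimal Python sketch of the MCD idea: each block runs off its own clock, so per-block throughput scales with that block's own frequency. The domain names, unit counts, and clocks below are illustrative assumptions, not confirmed G80 numbers.

```python
# Toy model of multiple clock domains (MCD): per-block throughput is just
# units x ops/clock x that block's own clock. All numbers are assumptions
# for illustration only, not confirmed G80 specs.
from dataclasses import dataclass

@dataclass
class ClockDomain:
    name: str
    clock_mhz: float
    units: int            # e.g. ROPs or shader ALUs in this domain
    ops_per_clock: int    # operations each unit retires per cycle

    def throughput_mops(self) -> float:
        return self.clock_mhz * self.units * self.ops_per_clock

domains = [
    ClockDomain("core/ROP", 575.0, 24, 1),   # the rumored 575MHz base clock
    ClockDomain("shader", 1350.0, 128, 1),   # the "stratosphere" shader clock
]
for d in domains:
    print(f"{d.name}: {d.throughput_mops():,.0f} Mops/s")
```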
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: Cookie Monster
Deal. A full-page apology post. I'm fine with that.

Here, let me clear it up.

Multiple Clock Domains (MCD). It's an architectural technique where separate functional units, or sets of units, on a chip are operated at different clocks (frequencies). This technique improves efficiency when different portions of the chip have imbalanced workloads. For example, a GPU architect may want to slow down the vertex engine to save power when the processor is fill-bound. Again, the decoupling inherent in the graphics pipeline is advantageous to the architect here, making this technique far easier to implement on a GPU than its equivalent on a general-purpose processor. In fact, existing GPUs already make heavy use of non-dynamic MCD, so little modification should be necessary to take advantage of these clocks for thermal management.

I understand entirely what you are saying.

I am saying that is not the case here. It is far more likely that there are 2 different cores sharing the load.

You can see in the leaked G80 images that there are clearly two dies on the package.
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: Cookie Monster
We will see. So you're saying two dies on one package, e.g. Smithfield.

Yes, judging by the back of the PCB, the transistor count, the clockspeed differential, and the manufacturing issues you would have with a 700m-transistor chip. This would also go along with the rumor of the 384-bit bus actually being two separate buses.

256-bit for one, 128-bit for the other.
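For reference, here is the bandwidth arithmetic behind the split-bus rumor as a short Python sketch. The effective 1800 MT/s memory rate follows from the leaked 900MHz DDR spec; the 256-bit + 128-bit split is the rumor itself, not a confirmed figure.

```python
# Memory bandwidth = (bus width in bytes) x (effective transfer rate).
# Assumes GDDR3 at an effective 1800 MT/s (900MHz DDR, per the leaks).
def bandwidth_gbs(bus_bits: int, effective_mts: float) -> float:
    return bus_bits / 8 * effective_mts / 1000  # GB/s

print(bandwidth_gbs(256, 1800))  # 57.6 GB/s - the rumored 256-bit bus
print(bandwidth_gbs(128, 1800))  # 28.8 GB/s - the rumored 128-bit bus
print(bandwidth_gbs(384, 1800))  # 86.4 GB/s - a single unified 384-bit bus
```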
 

lopri

Elite Member
Jul 27, 2002
13,316
690
126
Below is a repost from another thread. Sorry about quoting myself, but I felt this issue regarding market competition is worth mentioning twice. If we remember how the 7800 GTX 512's price evolved, and what the 7900 series prices were like - less than 6 months after the GTX 512's launch - the importance of competition for consumers is more than obvious.
Originally posted by: Crusader
Err, well, I'd have to say that most everyone is attempting to make 30% look like a small gain.. most likely because ATI can't handle this chip, as they won't have an answer for 6 months.
By that time, a guy might as well wait for the G80 refresh.. dominating ATI yet again.
The bottom line is that this is the fastest, most advanced card ever produced in the world.. and now that's somehow not enough? Nice try. :disgust:
See, this is exactly what I hate to see. And it makes me hate ATI. :frown:

1. With the market in its hand (fastest GPU until the R600 debut / first GPU to support DX10), NV will charge whatever they feel they can get away with. $650 for a GTX? God.. I really hope AMD/ATI won't abandon the high-end GPU market.

2. Has anyone noticed the G80 spec indicates something along the lines of 8 quads? (8 TCPs or something like that.) Then, upon close inspection, the 8800 GTX spec says it's a 7-TCP part, and the 8800 GTS looks to be a 6-TCP part. My guess? The good 8-TCP part is being secretly stockpiled until R600 launches, and NV will launch it w/ 1GB of on-board RAM. The price of that part is anyone's guess, but let's say I have a very good idea.

If the performance increase is 30% above the 1950 XTX (therefore 35~40% above the 7900 GTX, as the sketch at the end of this post illustrates), it's such a disappointment for me. Especially knowing that they're probably collecting parts that'll do 40% above the 1950 XTX for a later time, and especially knowing what we've heard so far about G80 (heat, power, price). It's been reported that the quoted 30% figure was achieved with a Kentsfield, so even that number could have been skewed.

At least here's hoping that NV has vastly improved the nagging IQ issues that pop up endlessly, and gives us a new level of IQ that we haven't seen. And I think they'll get that right this time. DX10 is, IMO, a non-issue at this time other than for marketing purposes.
P.S. I didn't update the first post with the recent performance-figure rumours because I didn't feel they're on solid ground yet.
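Here is the arithmetic behind the chained percentages above, as a minimal Python sketch. The 5-8% X1950 XTX lead over the 7900 GTX is an assumed illustrative range, not a figure from this thread.

```python
# If G80 is 30% faster than the X1950 XTX, and the XTX itself leads the
# 7900 GTX by some margin, the two margins multiply. The XTX lead values
# below are assumptions for illustration.
for xtx_lead in (1.05, 1.08):
    g80_vs_7900gtx = 1.30 * xtx_lead
    print(f"XTX lead +{xtx_lead - 1:.0%} -> G80 vs 7900 GTX: +{g80_vs_7900gtx - 1:.0%}")
# prints roughly +37% and +40%, matching the 35~40% range quoted above
```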
 

lopri

Elite Member
Jul 27, 2002
13,316
690
126
Here is a finding: according to the German site leak, the 8800 GTX will have an enormous theoretical pixel fill rate of 36800 MP/s(!). Compared to that, the 7950 GX2's theoretical fill rate is 16000 MP/s, and that of the 7900 GTX is 10400 MP/s. Nothing really makes sense with regard to G80. NV must be doing a hell of a job confusing the leakers.
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: lopri
Here is a finding: according to the German site leak, the 8800 GTX will have an enormous theoretical pixel fill rate of 36800 MP/s(!). Compared to that, the 7950 GX2's theoretical fill rate is 16000 MP/s, and that of the 7900 GTX is 10400 MP/s. Nothing really makes sense with regard to G80. NV must be doing a hell of a job confusing the leakers.

Are you sure that compares to the peak output fill rate, and not the single/multi-texturing fill rates of the G71 derivatives?

36800 would make much more sense against the 7950 GX2's 24000 and the 7900 GTX's 15600.
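The figures being traded here drop straight out of units × clock. Below is a quick Python sketch; the G71 ROP/TMU counts and clocks are the well-known numbers, while reading 36800 as 64 texture units at the rumored 575MHz is an assumption.

```python
# Theoretical fill rate (MP/s or MT/s) = number of units x clock in MHz.
def fill_rate(units: int, clock_mhz: float) -> float:
    return units * clock_mhz

print(fill_rate(16, 650))      # 10400 - 7900 GTX pixel fill (16 ROPs @ 650MHz)
print(fill_rate(24, 650))      # 15600 - 7900 GTX texel fill (24 TMUs @ 650MHz)
print(2 * fill_rate(16, 500))  # 16000 - 7950 GX2 pixel fill (two G71s @ 500MHz)
print(2 * fill_rate(24, 500))  # 24000 - 7950 GX2 texel fill
print(fill_rate(64, 575))      # 36800 - fits 64 texture units @ 575MHz (assumed)
```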
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
This is an interesting twist on the power consumption thing IMO...

http://www.pcpower.com/products/power_supplies/max-performance/

PCP&C no longer has a multi-rail 1kW PSU... They used to have three 1kW multi-rail models. They all had the same rails, but different branding for Xfire and SLI, and one with two 6-pin molexes instead of four. They now have one with a single 72A 12V rail. IIRC, PCP&C PSUs are used by both NV and ATI at hardware conferences etc. to show off new gear, so I would assume that PCP&C has a relationship with both chipmakers. I am curious if this PSU is in response to some info obtained from NV, or testing done with 8800GTX SLI...
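Quick arithmetic on that single-rail spec, in the same Python vein: 72A on the 12V rail works out to 864W available on 12V alone.

```python
# Power available on the single 12V rail: P = V x I.
rail_volts, rail_amps = 12, 72
print(f"{rail_volts * rail_amps} W on the 12V rail")  # 864 W of a 1kW unit
```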
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
You should add this to the first post (full credit to Creig for the find/post):

Originally posted by: Creig
Asus EN8800GTX box, card, software, features and specs



EN8800GTX/HTDP/768M: USD $540 (FOB).
EN8800GTS/HTDP/640M: USD $410 (FOB).

ASUS Exclusive Innovations

HDCP Compliant: Allows playback of HD DVD, Blu-ray Disc and other protected content at full HD resolutions
Built for Microsoft® Windows Vista™
ASUS Splendid: Watching movies on the PC is as good as on a top-of-the-line consumer television
ASUS Video Security Online: Keep an eye on your home at all times, no matter where you are
ASUS Game LiveShow: Stream live gaming action onto the internet and share it with other gaming enthusiasts
ASUS Game Replay: Record gaming action and strategy into MPEG4 files to share with other gaming enthusiasts
ASUS Game FaceMessenger: Easy IM and live game conferencing in any PC game
ASUS OnScreenDisplay: Adjust the game settings and enhance the gaming experience without leaving the game


Graphics GPU Feature

NVIDIA® GeForce 8800GTX
Built for Microsoft® Windows Vista™
NVIDIA SLI™ Technology ready
NVIDIA® unified architecture with GigaThread™ technology
Full support for Microsoft DirectX 10.0 and Shader Model 4.0 enables stunning and complex special effects
OpenGL® 2.0 support
NVIDIA® Quantum Effects™ Technology
True 128-bit floating point high dynamic-range (HDR) lighting
Two dual-link DVI outputs support two 2560x1600 resolution displays


Hardware Specification

Model EN8800GTX/HTDP/768M
Graphics Engine GeForce 8800GTX
Video Memory 768MB DDR3
Engine Clock 575MHz
Memory Clock 1.8GHz (900MHz DDR3)
Memory Interface 384-bit
Max. Resolution Up to 2560 x 1600
Bus Standard PCI Express X16
VGA Output YES, via DVI to VGA Adapter
HDTV Output YES, via HDTV Out cable
TV Output YES, via S-Video to Composite
DVI Output DVI-I
Dual DVI Output YES
HDCP Compliant YES

Adaptor/Cable Bundled DVI to VGA adapter
Power Cable*2
HDTV-out cable

Software Bundled 3D Game: Ghost Recon, GTI Racing
3Dmark06
ASUS Utilities & Driver

Specifications are subject to change without notice.
PCB color and bundled software versions are subject to change without notice.
Brand and product names mentioned are trademarks of their respective companies.
Note: Two 6-pin supplementary power connectors inside the box
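Those listed memory specs pin down the GTX's theoretical memory bandwidth; a one-liner Python sketch of the math:

```python
# 384-bit bus at an effective 1.8 GT/s (900MHz DDR, per the spec above).
bus_bits, effective_gts = 384, 1.8
print(f"{bus_bits / 8 * effective_gts:.1f} GB/s")  # 86.4 GB/s
```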

 

lopri

Elite Member
Jul 27, 2002
13,316
690
126
Thank you. Adding it as I speak. That confirms just about everything other than performance figures.
 

lopri

Elite Member
Jul 27, 2002
13,316
690
126
Originally posted by: nitromullet
This is an interesting twist on the power consumption thing IMO...

http://www.pcpower.com/products/power_supplies/max-performance/

PCP&C no longer has a multi-rail 1kW PSU... They used to have three 1kW multi-rail models. They all had the same rails, but different branding for Xfire and SLI, and one with two 6-pin molexes instead of four. They now have one with a single 72A 12V rail. IIRC, PCP&C PSUs are used by both NV and ATI at hardware conferences etc. to show off new gear, so I would assume that PCP&C has a relationship with both chipmakers. I am curious if this PSU is in response to some info obtained from NV, or testing done with 8800GTX SLI...
1kW with a single 72A rail... One MUST be careful if s/he happens to open that PSU, I guess.
 

lopri

Elite Member
Jul 27, 2002
13,316
690
126
Originally posted by: coldpower27
Originally posted by: lopri
Here is a finding: according to the German site leak, the 8800 GTX will have an enormous theoretical pixel fill rate of 36800 MP/s(!). Compared to that, the 7950 GX2's theoretical fill rate is 16000 MP/s, and that of the 7900 GTX is 10400 MP/s. Nothing really makes sense with regard to G80. NV must be doing a hell of a job confusing the leakers.

Are you sure that compares to the peak output fill rate, and not the single/multi-texturing fill rates of the G71 derivatives?

36800 would make much more sense against the 7950 GX2's 24000 and the 7900 GTX's 15600.

Here is my reference. I'm not sure if the German site was talking about texels or pixels.

http://www.beyond3d.com/misc/chipcomp/?view=board&orderby=rdate&order=DESC&n=0

 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Important Update (10/23/06): What looks to be ASUS' retail box cover for the upcoming GeForce 8800 GTX has been leaked via a Chinese site here. This more or less confirms/summarizes the specifications of the 8800 GTX, or G80, that have been leaked over the past weeks. Interestingly, the leak suggests the price of the 8800 GTX to be 540 USD and that of the 8800 GTS to be 410 USD. The price information can be significant in that it can be a yardstick of relative performance. Below is the full quote of the box cover. (Thank you, Creig & nitromullet)

A few important things to note from the "30% thread" these pics and info came from:

1) It is speculated that the pricing indicated may be wholesale pricing
2) The top picture appears to be an 8800GTX and the bottom one is most likely an 8800GTS
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: lopri
Originally posted by: nitromullet
This is an interesting twist on the power consumption thing IMO...

http://www.pcpower.com/products/power_supplies/max-performance/

PCP&C no longer has a multi-rail 1kW PSU... They used to have three 1kW multi-rail models. They all had the same rails, but different branding for Xfire and SLI, and one with two 6-pin molexes instead of four. They now have one with a single 72A 12V rail. IIRC, PCP&C PSUs are used by both NV and ATI at hardware conferences etc. to show off new gear, so I would assume that PCP&C has a relationship with both chipmakers. I am curious if this PSU is in response to some info obtained from NV, or testing done with 8800GTX SLI...
1kW with a single 72A rail... One MUST be careful if s/he happens to open that PSU, I guess.

A 250W can kill you...
 

ArchAngel777

Diamond Member
Dec 24, 2000
5,223
61
91
Originally posted by: Acanthus
Originally posted by: lopri
Originally posted by: nitromullet
This is an interesting twist on the power consumption thing IMO...

http://www.pcpower.com/products/power_supplies/max-performance/

PCP&C no longer has a multi-rail 1kW PSU... They used to have three 1kW multi-rail models. They all had the same rails, but different branding for Xfire and SLI, and one with two 6-pin molexes instead of four. They now have one with a single 72A 12V rail. IIRC, PCP&C PSUs are used by both NV and ATI at hardware conferences etc. to show off new gear, so I would assume that PCP&C has a relationship with both chipmakers. I am curious if this PSU is in response to some info obtained from NV, or testing done with 8800GTX SLI...
1kW with a single 72A rail... One MUST be careful if s/he happens to open that PSU, I guess.

A 250W can kill you...

So can the Easter Bunny.

As for the renders of the human face... I am not impressed. Sure, it is more real than anything done before, but it looks freakish and doesn't strike me as "cool". Besides, it is only a head... What about the rest of the person? And what about the environment? This release seems lacklustre...
 

virtualrain

Member
Aug 7, 2005
158
0
0
I think the top picture at this Link

confirms that the GTX will be 26cm long (10.2").

I have a 7800 GTX that barely fits in my setup at 9", so seeing that picture in profile, I compared it to a picture of a 7800 GTX in Photoshop. Compensating for the fact that they scrunched the picture widthwise (notice the fan is not circular but oval), that card is 13.5% longer than my 7800 GTX, which means it will be 26cm. Crap!
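virtualrain's scaling estimate checks out; a tiny Python sketch of the math:

```python
# A 9-inch 7800 GTX scaled up by the measured 13.5% length difference.
base_inches = 9.0
scaled = base_inches * 1.135
print(f"{scaled:.1f} in = {scaled * 2.54:.1f} cm")  # 10.2 in = 25.9 cm, ~26cm
```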
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Actually, as indicated in the first post of this thread...
The 8800GTX measures 11", or 275mm, and the 8800GTS 9", or 225mm.
The new pics don't seem to deviate from the earlier spy photos (at least not for the GTX), so I'm going to assume that the GTX is actually 11" long! The bottom picture appears to be an 8800GTS, which is the 9" card.