The New and Improved "G80 Stuff"


Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
And to the people saying it "might not" be dual core:

The leaked info says it isn't "traditional" dual core.

Which means it isn't a pair of identical GPUs.

There are two cores, and they are not the same.
 

Dethfrumbelo

Golden Member
Nov 16, 2004
1,499
0
0
Originally posted by: Acanthus
And to the people saying it "might not" be dual core:

The leaked info says it isn't "traditional" dual core.

Which means it isn't a pair of identical GPUs.

There are two cores, and they are not the same.

I guess both the GTX and GTS use a base 256-bit bus, with an additional 128-bit and 64-bit bus, respectively, dedicated to AA/HDR/post-processing.

Maybe the primary core works off the 256-bit bus, doing all the geometry/texture/shader work, while a smaller secondary GPU handles the AA/HDR/etc. (which may help explain the "free" AA claim).
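A quick back-of-the-envelope on those bus widths (purely illustrative; the "base bus plus dedicated side bus" split is this thread's speculation, not a confirmed spec):

```python
# Hypothetical arithmetic for the rumored G80 memory buses.
# The 256-bit "base" bus plus a dedicated side bus is an assumption
# taken from the speculation above, not a confirmed NVIDIA design.
base_bus = 256  # bits, shared geometry/texture/shader traffic

for card, extra in (("8800GTX", 128), ("8800GTS", 64)):
    total = base_bus + extra
    print(f"{card}: {base_bus} + {extra} = {total}-bit total interface")

# 8800GTX: 256 + 128 = 384-bit total interface
# 8800GTS: 256 + 64 = 320-bit total interface
```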


 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: Dethfrumbelo
Originally posted by: Acanthus
And to the people saying it "might not" be dual core:

The leaked info says it isn't "traditional" dual core.

Which means it isn't a pair of identical GPUs.

There are two cores, and they are not the same.

I guess both the GTX and GTS use a base 256-bit bus, with an additional 128-bit and 64-bit bus, respectively, dedicated to AA/HDR/post-processing.

Maybe the primary core works off the 256-bit bus, doing all the geometry/texture/shader work, while a smaller secondary GPU handles the AA/HDR/etc. (which may help explain the "free" AA claim).

It could be any number of things, even a dedicated physics chip.

Or all of the shading units...

Who knows.
 

lopri

Elite Member
Jul 27, 2002
13,314
690
126
Assigning that much silicon to dedicated physics processing wouldn't make any sense to me. What G80 should excel at is graphics processing power. What I've heard is that there is a 'layer' in the silicon that helps run a physics API (Havok), but I wouldn't expect any more than that. Chances are, if they dedicate that much die space to physics, it'll excel at neither graphics nor physics, especially considering they're still on 90nm. It could do well in a future game that utilizes that portion of the silicon, but what about current games? It would lose out to a GPU that uses all its silicon real estate for graphics processing, not to mention to a separate GPU+PPU configuration.

Another reason is that such a design would be nearly impossible to scale down to the mid- and low-budget segments. Not only that, it would mean wasting lots of dies that couldn't meet the specs.

This isn't a strong rebuttal of the GPU+PPU design in general. I just wanted to offer a point of view from which we can speculate about what the final G80 will look like. There are many things to consider.
 

lopri

Elite Member
Jul 27, 2002
13,314
690
126
It's also why I don't believe speculation like '96 shaders total = 48 DX9 shaders + 48 DX10 shaders'.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
You guys have to realise GPUs are already "dual core"/"quad core", so this nonsense about "dual core" GPUs needs to stop. In the GPU world, they're called quads (G71, for example, has 6 quads with 4 pixel pipelines per quad, for a total of 24 pixel pipelines).

The G80 specs are already hinting at these "stream processors", which sound an awful lot like "quads": separate processors with their own shaders/ALUs (which can perform a wide variety of functions) and TMUs/ROPs, interconnected with the other "stream processors". These stream processors could also have their own dedicated memory, hence the reason we see the "shader clock" and "core clock" being different, along with an odd number for the memory interface width.
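For anyone counting along, a minimal sketch of the quad arithmetic described above (the partition count at the end is an illustrative assumption, not a leaked spec):

```python
# Minimal sketch of the "quads" arithmetic described above.
def pixel_pipelines(num_quads, pipes_per_quad=4):
    """A GPU built from N quads exposes N * pipes_per_quad pixel pipelines."""
    return num_quads * pipes_per_quad

print(pixel_pipelines(6))  # G71: 6 quads -> 24 pixel pipelines

# An "odd" memory-interface width can simply be several 64-bit memory
# partitions ganged together (the count here is an assumption):
partitions = 5
print(partitions * 64)     # 5 x 64-bit partitions -> 320-bit interface
```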

 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
A pixel shader quad in a GPU is usually a SIMD engine (just like 3DNow!/SSE/AltiVec). SIMD engines are streaming processors.

A 6200 has one quad/SIMD block, a 6600 has two, a 6800GT/Ultra has four, and so on. Each quad/SIMD engine has 4 channels inside it processing the data.

This way of doing things has been around in recognisable form since the original GeForce.

The complexity and capabilities of the SIMD engine (the type and number of ALUs in it) are what usually change from generation to generation.

Some GPUs also use a MIMD architecture for the vertex engine. This is the main reason *IMO* why nvidia has chosen not to fully unify G80's design (vertex and geometry shaders benefit from MIMD, pixel shaders use SIMD) - it's hard (at the moment) to make a MIMD processor that performs as well as a SIMD one can. This will change over time as money and resources are thrown at the problem.
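A toy sketch of the SIMD/MIMD distinction being described (Python stand-ins only, not real GPU code):

```python
# SIMD: one instruction stream applied to a quad of 4 pixels in lockstep.
def simd_quad(pixels, op):
    return [op(p) for p in pixels]  # same op, four data elements

quad = [0.1, 0.2, 0.3, 0.4]
print(simd_quad(quad, lambda p: p * 2.0))  # one "instruction", 4 results

# MIMD: each lane can run its own instruction stream independently,
# which suits vertex/geometry work but costs more control logic per lane.
ops = [lambda v: v + 1.0, lambda v: v * 0.5, abs, lambda v: v ** 2]
print([op(v) for op, v in zip(ops, quad)])
```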
 

Aikouka

Lifer
Nov 27, 2001
30,383
912
126
Originally posted by: TheSlamma
I hardly saw a difference when I went from my P4 1.8GHz/400MHz bus to my P4 3.0GHz/800MHz bus, but I saw a massive difference when I went from my 5900 to my 6800.

That's because you went from a horrid GeForce FX to a much better GeForce 6 series. Most people try to forget about the FX ;).
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
The New Graphics - A Tale of Direct X 10
From what we hear, the new [nvidia] card might not adopt the Microsoft graphics standard of a true Direct 3D 10 engine with a unified shader architecture. Unified shader units can be changed via a command change from a vertex to geometry or pixel shader as the need arises. This allows the graphics processor to put more horsepower where it needs it. We would not put it past Nvidia engineers to keep a fixed pipeline structure. Why not? They have kept the traditional pattern for all of their cards. It was ATI that deviated and fractured the "pipeline" view of rendering; the advent of the Radeon X1000 introduced the threaded view of instructions and higher concentrations of highly programmable pixel shaders, to accomplish tasks beyond the "traditional" approach to image rendering.

One thing is for sure; ATI is keeping the concept of the fragmented pipeline and should have unified and highly programmable shaders. We have heard about large cards - like ones 12" long that will require new system chassis designs to hold them - and massive power requirements to make them run.
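To make the "unified shader units can be retasked as the need arises" idea concrete, here is a purely conceptual sketch of a unified pool being split between workloads (the pool size and load numbers are made up for illustration; this is not the real D3D10 or G80 scheduler):

```python
# Conceptual sketch of a unified shader pool being rebalanced per frame.
def schedule(pool_size, pending):
    """Hand out shader units roughly in proportion to pending work per stage."""
    total = sum(pending.values()) or 1
    return {stage: round(pool_size * n / total) for stage, n in pending.items()}

# A pixel-heavy frame pulls most units toward pixel shading...
print(schedule(48, {"vertex": 100, "geometry": 20, "pixel": 800}))
# ...while a geometry-heavy pass shifts them back (rounding is approximate).
print(schedule(48, {"vertex": 400, "geometry": 300, "pixel": 200}))
```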
 

Regs

Lifer
Aug 9, 2002
16,666
21
81
How are these things going to fit inside an ATX case? My current 7800GT is about 1/4" away from an HDD Molex connector.
 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
Originally posted by: apoppin
The New Graphics - A Tale of Direct X 10
From what we hear, the new [nvidia] card might not adopt the Microsoft graphics standard of a true Direct 3D 10 engine with a unified shader architecture. Unified shader units can be changed via a command change from a vertex to geometry or pixel shader as the need arises. This allows the graphics processor to put more horsepower where it needs it. We would not put it past Nvidia engineers to keep a fixed pipeline structure. Why not? They have kept the traditional pattern for all of their cards. It was ATI that deviated and fractured the "pipeline" view of rendering; the advent of the Radeon X1000 introduced the threaded view of instructions and higher concentrations of highly programmable pixel shaders, to accomplish tasks beyond the "traditional" approach to image rendering.

One thing is for sure; ATI is keeping the concept of the fragmented pipeline and should have unified and highly programmable shaders. We have heard about large cards - like ones 12" long that will require new system chassis designs to hold them - and massive power requirements to make them run.

Yes, I've seen this discussed before. Also, you have to keep the source in mind.

Basically, as I said above, the pixel shader quads will have to move to a MIMD architecture for this to happen, IMO. If nvidia has managed to pull this off *and* keep the speed of a SIMD pixel shader quad, I'll be mighty impressed indeed. However, you need to bear in mind David Kirk's comments about going unified "when it makes sense to", so I'm still sceptical about what the Inquirer is claiming. I don't have a problem with VS/GS unification; I just don't see the PS also being unified this time round.

If it does turn out that the VS/GS/PS are all unified, then I'll bet the associated performance cost to the PS is what would cause such a design to fall behind R600 (there is a rumor that R600 will be substantially faster than G80).
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Quads don't really exist in the latest architectural designs.

At least not in the traditional sense.

When people say "dual core GPUs" they are referring to two GPUs on one package, not just dual GPUs in a single core (at least the people who know wtf they are talking about, anyway).
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: Regs
How are these things going to fit inside an ATX case? My current 7800GT is about 1/4" away from an HDD Molex connector.

The sizes being referred to are for engineering samples; retail cards are always smaller.
 

Regs

Lifer
Aug 9, 2002
16,666
21
81
Originally posted by: Acanthus
Originally posted by: Regs
How are these things going to fit inside an ATX case? My current 7800GT is about 1/4" away from an HDD Molex connector.

The sizes being referred to are for engineering samples; retail cards are always smaller.

hm.
 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
Originally posted by: Acanthus
Quads don't really exist in the latest architectural designs.

At least not in the traditional sense.

When people say "dual core GPUs" they are referring to two GPUs on one package, not just dual GPUs in a single core (at least the people who know wtf they are talking about, anyway).

I find that rather hard to believe.

It's a large part of how GPUs achieve the parallelism they do - Single Instruction, Multiple Data (in a quad's case, one instruction affects 4 pixels at once).
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: Gstanfor
Originally posted by: Acanthus
Quads don't really exist in the latest architectural designs.

At least not in the traditional sense.

When people say "dual core GPUs" they are referring to two GPUs on one package, not just dual GPUs in a single core (at least the people who know wtf they are talking about, anyway).

I find that rather hard to believe.

It's a large part of how GPUs achieve the parallelism they do - Single Instruction, Multiple Data (in a quad's case, one instruction affects 4 pixels at once).

And how many shaders are on each quad when it's dynamic?

Exactly.
 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
You'll never, IMO, find a GPU dealing with less than a quad's worth of data at a time (even if there are fewer than 4 pixels to render in the current triangle).

The concept has existed since before quads physically existed in the hardware (NV5 - looping of the pipelines).
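A tiny illustration of that quad-granularity idea (Python pseudocode with assumed names; real hardware does this with coverage masks, not Python lists):

```python
# Work is issued a 2x2 quad at a time even when the triangle covers fewer
# than 4 of those pixels; uncovered lanes still run but are masked on write.
def shade_quad(quad, coverage, shader):
    results = [shader(p) for p in quad]        # always 4 shader invocations
    return [r if covered else None             # masked lanes write nothing
            for r, covered in zip(results, coverage)]

quad = [(0, 0), (1, 0), (0, 1), (1, 1)]
coverage = [True, False, False, False]         # triangle only touches 1 pixel
print(shade_quad(quad, coverage, lambda p: sum(p)))
```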
 

Elfear

Diamond Member
May 30, 2004
7,167
824
126
Some interesting info I found on B3D.

"Here are some fairly reliable numbers from the same source who was spot on with previous numbers (eg RV570, RV560, G71) before they got launched:

Card       3DM05   3DM06
8800GTX    16909   11843
8800GTS    15233   10071
7950GX2    12931    8781
1950XTX    11866    7007
1950PRO    10044    5600

No specs of the system were given, but all the cards ran on the same system."

Source post #1487
 

lopri

Elite Member
Jul 27, 2002
13,314
690
126
Those numbers make sense to me. Should I update the first post with the info?
 

biostud

Lifer
Feb 27, 2003
19,963
7,055
136
Originally posted by: Acanthus
Quads dont really exsist with the latest architectural designs.

At least not in the traditional sense.

When people say "dual core GPUs" they are referring to 2 GPUs on one package, not just dual GPUs in a single core. (at least the people that know wtf they are talking about anyway).

If they aren't dual GPUs on a single die, it's not true dual core...

...as AMD would put it :p
 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
That's only vendor overclocking being banned, if I'm reading it correctly (is there a correct way to read an INQ article?). I imagine end users will still be able to overclock.
 

Elfear

Diamond Member
May 30, 2004
7,167
824
126
Originally posted by: lopri
Those numbers make sense to me. Should I update the first post with the info?

They sound about right to me and come from a semi-reliable source. I'd go ahead and post them since almost none of the info we have is extremely solid anyway.


Originally posted by: Gstanfor
That's only vendor overclocking being banned, if I'm reading it correctly (is there a correct way to read an INQ article?). I imagine end users will still be able to overclock.

That's what I read too.