G80 won't be unified shader architecture


jim1976

Platinum Member
Aug 7, 2003
Originally posted by: Acanthus
Originally posted by: jim1976
Originally posted by: Acanthus
Originally posted by: Steelski
Originally posted by: RussianSensation
Originally posted by: nts
Interesting. If true, then they probably took the TMUs out of the pixel pipes and are sharing them between the vertex and pixel units.

ATi is definitely ahead here then; hopefully this won't turn into another FX generation, but it looks like they are going for insane clocks for this chip (48 pixel pipes seems low).

What's the R600 rumor, 64 unified shaders + 16/24/32 texture units?

Although at first 48 pixel pipes seems low, you have to remember these are real pipelines, not pixel shaders as in R580. 48 will put it at 2x the pipelines of the 7800GTX, which is a substantial improvement (since when is 2x the performance for a new generation considered low?). Also, it's difficult to say how comparable in power a unified shader is to a dedicated shader in your R600 example.

What is interesting is which company is going to catch up first: 1) Will ATI revamp its OpenGL drivers, taking away Nvidia's lustre in those types of games? Or 2) Will Nvidia produce a card with more efficient shaders and AA algorithms and take away ATI's shader crown in games like FEAR?

I don't really think you realize what you are proposing by saying that the next chip will have 48 normal pipelines. You are talking about a chip that is much, much larger, and the fact that 65nm processes do not exist in the GPU world yet makes this a very, very expensive card by default. Even on an 80nm process this is still so huge it's not even worth thinking about. Most likely they will have 24 real pipes and 2 ALUs per pipe. What current games indicate is that there is no need for more texture units than the current 24, so I don't think Nvidia will be silly enough to implement 48 of them at great cost.
What this news really indicates is that ATI is very much in the driving seat next round, and it's theirs to lose.
What could also be seen is that the round after G80 and R600 will most likely be unified shaders... Where will Nvidia be then? 2.5 generations behind with that technology, when you consider that ATI will have the R500, R600 and the R600 refresh.
Another thing to take on board is that it pays to be good to Microsoft.

So, with this massive paragraph you posted: do you honestly believe TSMC and UMC won't be at 65nm in 6 months? Come on now. (That would allow double the transistors in the same die size, at a higher clock speed.)


M8, do not underestimate the difficulty of going to 65nm... Mark my words. I'm not saying that it is not possible; I'm saying it is difficult ;)

The research is already done elsewhere ;) All they have to do is retool.


It's one thing to have the research and another to apply it to GPU architecture, m8.. ;)
 

Acanthus

Lifer
Aug 28, 2001
Originally posted by: jim1976
It's one thing to have the research and another to apply it to GPU architecture, m8.. ;)


Gee, I didn't know that. I thought the GPUs designed their own layout, through osmosis.
 

ElFenix

Elite Member
Super Moderator
Mar 20, 2000
Didn't TSMC and UMC just get 90nm working a few months ago?

Just as Intel had 90nm working a year ahead of everyone else, I think they're similarly ahead at 65nm. The problems encountered at 90nm are exponentially magnified going to 65nm, IIRC.
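To put rough numbers on the shrink claim being argued over in this thread: an ideal optical shrink from 90nm to 65nm scales linear dimensions by 90/65, so transistor density at a fixed die size scales by the square of that. A toy sketch (ideal scaling only; real processes gain less than this):

```python
# Toy die-shrink arithmetic: ideal area scaling between process nodes.
# Real shrinks deliver less, since design rules, SRAM cells, and analog
# blocks do not all scale linearly with the drawn feature size.

def ideal_density_gain(old_nm: float, new_nm: float) -> float:
    """Transistor-density multiplier for an ideal optical shrink."""
    linear = old_nm / new_nm      # linear feature-size ratio
    return linear ** 2            # area (and density) scales with its square

gain = ideal_density_gain(90, 65)
print(f"90nm -> 65nm: ~{gain:.2f}x transistors in the same die area")
# ~1.92x, i.e. roughly the "double the transistors" claim in this thread
```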
 

jim1976

Platinum Member
Aug 7, 2003
Originally posted by: Acanthus
Gee, I didn't know that. I thought the GPUs designed their own layout, through osmosis.


Yeah, it's an easy process too.. They have so much time till Q3/Q4 2006 :disgust:
So what you're implying is that the transition to 65nm will be an easy step? OK then, let's wait and see...
 

crazydingo

Golden Member
May 15, 2005
Originally posted by: RussianSensation
Although at first 48 pixel pipes seems low, you have to remember these are real pipelines, not pixel shaders as in R580. 48 will put it at 2x the pipelines of the 7800GTX, which is a substantial improvement (since when is 2x the performance for a new generation considered low?)
Err, let me quote the article itself:
Nvidia's code-named G80 graphics processing unit (GPU) will incorporate 48 pixel shader processors and an unknown number of vertex shader processors, some unofficial sources said.
That in itself is enough indication that they are not referring to the conventional (traditional) pipeline that Nvidia has now. I'm not even touching your 2x improvement argument.


And if G80 is indeed not unified, and more of a hybrid, then I'm inclined to believe that it is much closer to release than we are expecting. That makes G71's lifespan weirdly even shorter...
 

SickBeast

Lifer
Jul 21, 2000
Originally posted by: munky
Whoa, back up a sec! Who ever said DX10 required unified shaders? Unified shaders are a hardware feature, not a software or API feature, and a chip could still be DX10 compliant as long as it supports the required API features, even without unified shaders. That being said, it does raise the question of how effectively it will support those features without unified shaders, and the worst-case scenario is that it will support them just like the FX series supported DX9. But it does fit one more piece into the puzzle, and further hints that the G71 will not be a monster card but rather a transition card, and that NV will try to get the G80 out earlier than ATI can bring out the R600.

If I may further theorize... my guess would be that the 7900GTX will LOSE to the X1900XTX if they are in fact in a huge rush to get the G80 to market.

I have a feeling that they will have a true unified competitor for the R600 right about when it comes out. Maybe they just want to get some life out of this research project of theirs. :)
 

jim1976

Platinum Member
Aug 7, 2003
Originally posted by: SickBeast

I have a feeling that they will have a true unified competitor for the R600 right about when it comes out. Maybe they just want to get some life out of this research project of theirs. :)

M8, G80 = geometry + vertex shaders unified, PS separate; "G90" = complete USC, no ROPs...

G90 won't be here for a long time. Also, there's no possibility G80 will be as completely USC as R600. The bet is: who is making the right prediction? Nvidia, for predicting that games will show better results without a complete USC for the time being, or ATI, for predicting that they will? It's a big bet on game performance (probably the biggest, apart from specs, which by themselves mean jack). USC will eventually be the absolute trend, but who is right in this transitional stage? :)

 

xtknight

Elite Member
Oct 15, 2004
Originally posted by: crazydingo
That in itself is enough indication that they are not referring to conventional (traditional) pipeline that Nvidia has now. I'm not even touching your 2x improvement argument.

What are conventional pipelines?
 

crazydingo

Golden Member
May 15, 2005
Originally posted by: xtknight
Originally posted by: crazydingo
That in itself is enough indication that they are not referring to conventional (traditional) pipeline that Nvidia has now. I'm not even touching your 2x improvement argument.

What are conventional pipelines?
The ones Nvidia is using right now and ATI also (X1300).
 

xtknight

Elite Member
Oct 15, 2004
Originally posted by: crazydingo
The ones Nvidia is using right now and ATI also (X1300).

As in render outputs, pixel shaders, texture units, or what?
 

Steelski

Senior member
Feb 16, 2005
Originally posted by: xtknight
As in render outputs, pixel shaders, texture units, or what?

I think it refers to one texture unit and one shader unit per pass... or something like that.
 

crazydingo

Golden Member
May 15, 2005
Originally posted by: Steelski
I think it refers to one texture unit and one shader unit per pass... or something like that.
^ ^ ^ ^ ^
 

BenSkywalker

Diamond Member
Oct 9, 1999
The fact that it will be about that far behind in experience with unified shaders is not a small thing, but I know it's not the same as saying it will be 2.5 gens behind in performance. But considering that ATI is likely to have shipped many unified chips before Nvidia shows one, Nvidia's unified shader program will have little experience while ATI will have plenty to make an efficient chip. Can you deny that?

Two things: one, you are assuming that unified shaders are better; at a hardware level nothing supports this. Two, you assume that unified shaders are something that requires a lot of experience to get right. Why? nVidia's FIRST attempt at AF was much better than what they are producing right now. In terms of computational ability, the ALUs for unified shaders are simpler than the straight fragment shaders that nV and ATi are using now.

Moving to unified shaders removes some of the specialization from GPUs. Following that trend to its logical extreme, Intel and AMD are far ahead of ATi or nVidia, as their CPUs are far more unified in architecture for graphics ops than either of the big two GPU makers are right now or have on the drawing board.
 

Munky

Diamond Member
Feb 5, 2005
Originally posted by: crazydingo
The ones Nvidia is using right now and ATI also (X1300).

Maybe you guys should leave the technical details for me to explain??? :)

The X1K cards already do not use a traditional pipeline, because the texture units are separate from the pixel shader units. Even on the X1300 there are 4 PS, but they are not tied to any one particular TMU, so there are no traditional pipelines on the X1K cards. The G80 - I'm not sure what kind of pipes we're looking at, but if it's not a unified shader GPU, then it will have separate vertex shaders and separate pixel shaders. There's nothing untraditional or unified about that design; it's the same design GPUs have been using for years. Vertex shaders have always performed ops on the geometry, and pixel shaders combined with texture units have worked on the fragment data. It's the exact same basic design principle that goes all the way back to the GF3 and the Radeon 8500; there's nothing revolutionary or new about it.
 

xtknight

Elite Member
Oct 15, 2004
Ah...you guys mean like this:
http://www.hardwarezone.com/articles/view.php?cid=1&id=1808
"Basically a pixel pipeline consists of a pixel shader processor, a texture mapping unit (TMU) and a raster operator unit (ROP)."

"Conventional pipeline" = one pixel shader processor + one texture mapping unit + one raster operator unit
The ATI Radeon X1800 XT is set up like that (16 traditional pipelines), but the X1900 XT is lopsided on the pixel shader side (so you don't consider the X1900 XT to have traditional pipelines anymore?) The 7800GTXs are also not considered to contain traditional pipelines?
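The "lopsided" point can be put in numbers. Using the unit counts being discussed in this thread (16 PS / 16 TMU for the X1800 XT, 48 PS / 16 TMU for the X1900, 24 / 24 for the 7800 GTX), a quick sketch of the pixel-shader-to-TMU ratios:

```python
# Pixel-shader-to-TMU ratios for the cards discussed above.
# A "traditional pipeline" pairs one pixel shader with one TMU (ratio 1:1);
# a ratio above 1 means the design is shader-heavy ("lopsided").

cards = {
    "Radeon X1800 XT": {"pixel_shaders": 16, "tmus": 16},
    "Radeon X1900 XT": {"pixel_shaders": 48, "tmus": 16},
    "GeForce 7800 GTX": {"pixel_shaders": 24, "tmus": 24},
}

for name, units in cards.items():
    ratio = units["pixel_shaders"] / units["tmus"]
    layout = "traditional 1:1" if ratio == 1.0 else f"shader-heavy {ratio:.0f}:1"
    print(f"{name}: {units['pixel_shaders']} PS / {units['tmus']} TMU -> {layout}")
```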
 

crazydingo

Golden Member
May 15, 2005
Originally posted by: xtknight
"Conventional pipeline" = one pixel shader processor + one texture mapping unit + one raster operator unit
The ATI Radeon X1800 XT is set up like that (16 traditional pipelines), but the X1900 XT is lopsided on the pixel shader side (so you don't consider the X1900 XT to have traditional pipelines anymore?) The 7800GTXs are also not considered to contain traditional pipelines?
Correct. Though even the X1900 is traditional in the sense that it is not unified, its architecture is different from what we have become familiar with over the past few years.

My point being that the article mentions "48 shader processors" and not pipelines.
 

Munky

Diamond Member
Feb 5, 2005
Originally posted by: xtknight
"Conventional pipeline" = one pixel shader processor + one texture mapping unit + one raster operator unit
The ATI Radeon X1800 XT is set up like that (16 traditional pipelines), but the X1900 XT is lopsided on the pixel shader side (so you don't consider the X1900 XT to have traditional pipelines anymore?) The 7800GTXs are also not considered to contain traditional pipelines?

It goes even further than that. While all new cards have a separate array of ROPs, the traditional pipeline in a modern card is made up of a pixel shader and a texture unit; they work together and make up one pixel pipe. The X1K cards, however, no longer have any traditional pipes at all. There's an array of pixel shaders, and there's a completely separate array of texture units. The two arrays communicate via the thread scheduler logic and an array of registers, and no pixel shader is tied to any one particular texture unit. That's why I no longer refer to pipes for those cards, because pipes as you know them do not exist on the X1K cards.
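A minimal toy model of the difference described above, with hypothetical unit counts and a single-cycle view (nothing here reflects real scheduler hardware): in the traditional layout a shader can only issue to its paired TMU, while in the decoupled layout any free TMU can serve any shader's request.

```python
# Toy contrast between fixed PS->TMU pairing and a decoupled TMU array.
# Unit counts and the one-cycle model are illustrative, not real hardware.

def traditional(requests, busy_tmus):
    """Shader i may only use TMU i; a busy paired TMU stalls the request."""
    return [s for s in requests if s not in busy_tmus]

def decoupled(requests, busy_tmus, num_tmus):
    """Any free TMU can serve any shader's request, in order."""
    free = num_tmus - len(busy_tmus)
    return requests[:free]

# Shaders 0 and 2 want texture fetches this cycle; TMU 2 is busy.
print(traditional([0, 2], {2}))     # shader 2 stalls behind its own TMU: [0]
print(decoupled([0, 2], {2}, 4))    # any of the 3 free TMUs serves it: [0, 2]
```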
 

Ronin

Diamond Member
Mar 3, 2001
Folks, let me let you in on a tidbit of information.

MS hasn't even come close to finalizing DX10. If they don't have the information to provide to the manufacturers, they can't make cards compliant, plain and simple.

You're balking about things that even MS can't provide yet.

Let me let you in on another tidbit. MS wants to get Vista out not in Q4 2006, but Q3 2006. Why? Because most of the proprietary computers are purchased just before the school year starts, not during Christmas. It stands to reason that if they can't launch in Q3, they're going to postpone to fiscal 2007.

DX10 hardware means nothing right now, because there's nothing to manufacture around it. Take the information for what it's worth.