Register Reports R520 to launch with unified shaders Arch


DRavisher

Senior member
Aug 3, 2005
202
0
0
Originally posted by: 5150Joker
Because the assumption is if they have a unified shader part, the card would be DX10 compliant rather than compatible. That's why I mentioned in my initial post that if the unified shader part were true by some remote chance and since DX 10 is essentially done, it would mean it had forward looking features.

And as has been stated before, unified hardware is not a requirement in DX10. DX10 unifies the software so that developers need not think about vertex and pixel shaders separately. The hardware still needs to know what it is doing (unlike the software). Unified hardware has nothing at all to do with unifying the software. Unified hardware seeks to keep more of the hardware occupied at any one time, but it still needs to know whether it is doing vertex or pixel shading. A specialized hardware approach is every bit as DX10 compliant as a unified shading part.

Edit: And as others have said, unified shaders are not in any way a feature. They are a way of doing shading. There is no clear-cut advantage to either side; it will have to be tested to find out which is better. I for one don't like generalizing the hardware (why not just make a GFX card with a CPU then? It would be the most 'unified' pipeline ever made.)
 

crazydingo

Golden Member
May 15, 2005
1,134
0
0
From here

Generally, while the API sets the feature set and defines how it can be used, it does not dictate to us how to integrate this feature set. There are some features in this upcoming API that we believe can be done more efficiently with a unified shader architecture, but such an architecture is not an absolute must have.
 

xtknight

Elite Member
Oct 15, 2004
12,974
0
71
Here's how I understand the unified shaders. I make no claims this is fully/technically correct (I don't think it is), but it might give some insight to those who have absolutely no clue about it:

Shader processing is implemented in hardware and invoked by software APIs such as Direct3D through separate vertex and pixel functions. Right now, when the graphics card receives a shader, it takes it through either the pixel or the vertex pipeline. A pixel pipeline can process only pixel shader functions; likewise, a vertex pipeline can process only vertex shaders. A unified shader will be able to process both, with a small performance cost and gain at the same time. With the unified approach the pipelines won't have to wait on each other. From what I can tell from the arch diagrams, pixel processing comes after vertex processing, so with unified there will just be one (slightly less specialized) pipeline.
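The efficiency argument above can be sketched with a toy back-of-the-envelope model. Everything here is invented for illustration (the unit counts and workload numbers are made up, not R520 specs); it only shows why a shared pool of units can finish a lopsided frame faster than fixed pools:

```python
# Toy model: a frame needs some amount of vertex work and pixel work.
# Fixed design: 4 vertex units + 8 pixel units, each pool only does its own job.
# Unified design: 12 identical units that can do either kind of work.

def frame_time_fixed(v_ops, p_ops, v_units=4, p_units=8):
    # Each pool grinds through its own work; the frame waits for the slower pool.
    return max(v_ops / v_units, p_ops / p_units)

def frame_time_unified(v_ops, p_ops, units=12):
    # All units share the combined workload (scheduling overhead ignored).
    return (v_ops + p_ops) / units

# Geometry-heavy frame: the fixed design's 4 vertex units become the bottleneck
# while the 8 pixel units sit mostly idle.
print(frame_time_fixed(1200, 600))    # 300.0
print(frame_time_unified(1200, 600))  # 150.0
```

With a pixel-heavy mix the gap shrinks, which is why the thread keeps saying "it will have to be tested": the win depends entirely on how unbalanced real workloads are.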
 

DRavisher

Senior member
Aug 3, 2005
202
0
0
xtknight: I think you are probably pretty much correct. The thing is, the software unified shaders make it so that the programmer no longer knows any difference between pixel and vertex shading. The hardware, unified or not, must still know the difference. The unified shader part needs to know the difference just as surely as the specialized part does. The perceived advantage of the unified part is that it can distribute the load better, since it will just tell who to do what, and can thus (theoretically) achieve 100% utilization, no matter what fraction of the calculation is pixel or vertex shading.
 

Ronin

Diamond Member
Mar 3, 2001
4,563
1
0
server.counter-strike.net
Turtle, you have some perception that anyone gives a damn what you say (and that includes AT 'management', which you seem to throw around quite heavily). We don't need to ask you wwybywb, because we already know.

Stop the plagiarism, stop the banter (because you really can't keep up), and take responsibility for your own actions in this thread. All you've achieved here is confirming that you're clueless.

Points of note:
The R520 will not have unified shaders.
~80% of the information you've seen on the net is correct about R520 (based on the information I have, the core is correct, but the memory is not).
The piping architecture is slightly different than previous versions (I'm not talking about memory here).

 

Turtle 1

Banned
Sep 14, 2005
314
0
0
Originally posted by: DRavisher
Originally posted by: 5150Joker
Because the assumption is if they have a unified shader part, the card would be DX10 compliant rather than compatible. That's why I mentioned in my initial post that if the unified shader part were true by some remote chance and since DX 10 is essentially done, it would mean it had forward looking features.

And as has been stated before, unified hardware is not a requirement in DX10. DX10 unifies the software so that developers need not think about vertex and pixel shaders separately. The hardware still needs to know what it is doing (unlike the software). Unified hardware has nothing at all to do with unifying the software. Unified hardware seeks to keep more of the hardware occupied at any one time, but it still needs to know whether it is doing vertex or pixel shading. A specialized hardware approach is every bit as DX10 compliant as a unified shading part.



Edit: And as others have said, unified shaders are not in any way a feature. They are a way of doing shading. There is no clear-cut advantage to either side; it will have to be tested to find out which is better. I for one don't like generalizing the hardware (why not just make a GFX card with a CPU then? It would be the most 'unified' pipeline ever made.)

THAT'S correct. We are just discussing what R520 might be like if the Register's report is true.

 

5150Joker

Diamond Member
Feb 6, 2002
5,549
0
71
www.techinferno.com
Originally posted by: DRavisher
Originally posted by: 5150Joker
Because the assumption is if they have a unified shader part, the card would be DX10 compliant rather than compatible. That's why I mentioned in my initial post that if the unified shader part were true by some remote chance and since DX 10 is essentially done, it would mean it had forward looking features.

And as has been stated before, unified hardware is not a requirement in DX10. DX10 unifies the software so that developers need not think about vertex and pixel shaders separately. The hardware still needs to know what it is doing (unlike the software). Unified hardware has nothing at all to do with unifying the software. Unified hardware seeks to keep more of the hardware occupied at any one time, but it still needs to know whether it is doing vertex or pixel shading. A specialized hardware approach is every bit as DX10 compliant as a unified shading part.

Edit: And as others have said, unified shaders are not in any way a feature. They are a way of doing shading. There is no clear-cut advantage to either side; it will have to be tested to find out which is better. I for one don't like generalizing the hardware (why not just make a GFX card with a CPU then? It would be the most 'unified' pipeline ever made.)


Listen, I don't see how you aren't able to follow what I'm saying, since it's very clear. I already know what a unified shader architecture is and don't need a cut-and-paste explanation. The point I'm making is that if ATi did go through with a unified architecture for this card, then there is a good chance they made it DX 10 compliant, which may include SM 4.0 support:

Windows Graphics Foundation will be the graphics subsystem of the Longhorn OS, expected in the market in 2006. WGF 2.0 includes DirectX 10 and also Shader Model 4.0 support. ATI seems to anticipate WGF 2.0 with the R5xx series of GPUs. But NVIDIA comments that WGF 2.0 is an API (software) and does not necessarily point to the unified-shader concept.

source: http://www.cdrinfo.com/Sections/News/Details.aspx?NewsId=14458
 

M0RPH

Diamond Member
Dec 7, 2003
3,302
1
0
Originally posted by: Ronin
Turtle, you have some perception that anyone gives a damn what you say (and that includes AT 'management', which you seem to throw around quite heavily). We don't need to ask you wwybywb, because we already know.

Stop the plagiarism, stop the banter (because you really can't keep up), and take responsibility for your own actions in this thread. All you've achieved here is confirming that you're clueless.

Points of note:
The R520 will not have unified shaders.
~80% of the information you've seen on the net is correct about R520 (based on the information I have, the core is correct, but the memory is not).
The piping architecture is slightly different than previous versions (I'm not talking about memory here).


Wow, we have an industry insider here. Doubtful.

Everyone reading this, please note that the above statements are OPINIONS, not FACT.

I don't believe that the R520 will have unified shaders either, but I would never have the gall to state it as fact when I don't know the specs yet, since they haven't been announced.
 

5150Joker

Diamond Member
Feb 6, 2002
5,549
0
71
www.techinferno.com
Originally posted by: M0RPH
Originally posted by: Ronin
Turtle, you have some perception that anyone gives a damn what you say (and that includes AT 'management', which you seem to throw around quite heavily). We don't need to ask you wwybywb, because we already know.

Stop the plagiarism, stop the banter (because you really can't keep up), and take responsibility for your own actions in this thread. All you've achieved here is confirming that you're clueless.

Points of note:
The R520 will not have unified shaders.
~80% of the information you've seen on the net is correct about R520 (based on the information I have, the core is correct, but the memory is not).
The piping architecture is slightly different than previous versions (I'm not talking about memory here).


Wow, we have an industry insider here. Doubtful.

Everyone reading this, please note that the above statements are OPINIONS, not FACT.

I don't believe that the R520 will have unified shaders either, but I would never have the gall to state it as fact when I don't know the specs yet, since they haven't been announced.


Yeah, I don't think it will have it either, but we're discussing theoreticals here. Seems a lot of people have "insider" info these days - I have Taiwanese AIB associates of my own. :)
 

M0RPH

Diamond Member
Dec 7, 2003
3,302
1
0
Originally posted by: DRavisher
xtknight: I think you are probably pretty much correct. The thing is, the software unified shaders make it so that the programmer no longer knows any difference between pixel and vertex shading. The hardware, unified or not, must still know the difference. The unified shader part needs to know the difference just as surely as the specialized part does. The perceived advantage of the unified part is that it can distribute the load better, since it will just tell who to do what, and can thus (theoretically) achieve 100% utilization, no matter what fraction of the calculation is pixel or vertex shading.

I don't know if it's as simple as you make it sound.

NVIDIA will most likely provide the same instruction set for both vertex and pixel shaders in future GPUs but still use different hardware for both. That being said, in the very long run, NVIDIA may eventually move to a unified architecture.

Differing implementations aside, both NVIDIA and ATI's GPUs released alongside Longhorn will have to support Shader Model 4.0, requiring a unified instruction set across the shaders. That means the type of operations and the limits on what can be done with shaders will be the same for pixel and vertex shaders. From then on, the programmer won't have to think about pixel or vertex instructions, just shader instructions.

source


A GPU with unified shaders provides a unified instruction set across all the shaders. A non-unified GPU could probably mimic this, but at what kind of penalty? I have to think that there is some kind of benefit to having both a unified shader API and GPU, since that's what Microsoft seems to be promoting.

Also, many reports around the web say that Nvidia is moving to unified shaders in one of their upcoming chips (G80?). If the hardware makes no difference to the API, and Nvidia feels as strongly as you do that specialized hardware is better than generalized, why are they giving in and joining the unified camp?
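One way to picture the "unified instruction set" the quoted source describes: both stages draw from the same pool of operations, so a program's capabilities no longer depend on which stage runs it, even though the stage tag itself survives. A crude sketch, with invented op names and programs (nothing here is real SM 4.0 syntax):

```python
# One op table shared by both shader stages: a crude picture of a
# "unified instruction set". Ops and programs are invented for illustration.
OPS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
}

def run_shader(stage, program, x):
    # The stage tag still exists (the pipeline must route results correctly),
    # but every op is legal in every stage -- that is the unification.
    assert stage in ("vertex", "pixel")
    for op, arg in program:
        x = OPS[op](x, arg)
    return x

prog = [("mul", 2), ("add", 3)]       # the same program text...
print(run_shader("vertex", prog, 5))  # 13  ...runs as a vertex shader
print(run_shader("pixel", prog, 5))   # 13  ...or as a pixel shader
```

Whether that shared instruction set is executed on one pool of units or on two specialized pools is exactly the hardware question the thread is arguing about.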
 

Turtle 1

Banned
Sep 14, 2005
314
0
0
Originally posted by: Ronin
Turtle, you have some perception that anyone gives a damn what you say (and that includes AT 'management', which you seem to throw around quite heavily). We don't need to ask you wwybywb, because we already know.

Stop the plagiarism, stop the banter (because you really can't keep up), and take responsibility for your own actions in this thread. All you've achieved here is confirming that you're clueless.

Points of note:
The R520 will not have unified shaders.
~80% of the information you've seen on the net is correct about R520 (based on the information I have, the core is correct, but the memory is not).
The piping architecture is slightly different than previous versions (I'm not talking about memory here).


I am interested only in a good discussion that sheds light on this subject. It's turning out to be a really good thread. Join in and have fun with it. Come on guys, we seem to have the unified shader down pretty well. Work on the embedded memory. Google Fast14 and Exponential Technology.
 

Avalon

Diamond Member
Jul 16, 2001
7,571
178
106
Originally posted by: Turtle 1
At no time do I take credit for authoring that article. It comes from my employer's archives, and I have full permission to use it.



http://xbox360.ign.com/articles/617/617951p3.html

Oh really? Then what's this?

I thought my source was not quoted

So what's with the massive unquoted post? What's with your many unlinked posts that are mere copy-and-pastes?

The source I am directing your attention towards is not quoted and not linked to; thus you are automatically taking all the credit for the text you posted, whether you intended to or not.

I am interested only in A good discussion that brings light on to this subject.

How do you expect to bring light to the "subject" of this thread? The card IS NOT OUT YET, so all that exists is rumor, insider gossip, some leaks, and expectations. You will not be bringing anything to light. ATI is going to be TELLING you in about 10 days, along with everyone else. If you want to discuss rumors, that's fine, but don't go around acting like you're uncovering all the factual secrets of a card that has not been released.
 

xtknight

Elite Member
Oct 15, 2004
12,974
0
71
Originally posted by: DRavisher
xtknight: I think you are probably pretty much correct. The thing is, the software Unified shaders make it so that the programmer no longer knows any difference between pixel and vertex shading. The hardware, unified or not, must still know the difference. The unified shader part needs to know the difference just as surely as the specialized part needs to know the difference. The percieved advantage of the unified part is that it can distribute the load better, since it will just tell who to do what, and can thus (theoretically) achieve 100% utilization, no matter how many percent of the calculation is pixel or vertex shading.

I don't think it makes it transparent to the developer. There will still be vertex and pixel processing. For example, the software developer may put something like this (maybe not this directly):
XShader->Type=D3DShaderType.Vertex;

He will still have to specify points for geometric transformation and shaders for pixel transformation. The developer will still know what he's doing (hopefully). :)

I thought the only change was efficiency, where both pixel and vertex were processed in one pipeline, but not that pixel and vertex processing were evolving into one "omnipotent" shader.

I'd be very surprised if I were wrong about this aspect (not because of my huge ego, but because I think it would be infeasible otherwise).
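The guess above, that the API keeps the stage distinction even if the hardware underneath is unified, can be sketched as a mock stage-tagged API. All class and method names here are invented for illustration; this is the shape of the idea, not any real graphics API:

```python
# Mock of a stage-tagged graphics API: the developer still declares which
# stage a shader belongs to, even if the hardware runs both stages on the
# same pool of units. All names are invented for illustration.
class Device:
    def __init__(self):
        self.bound = {}  # currently bound shader per stage

    def bind_shader(self, stage, shader_name):
        # The API keeps the vertex/pixel distinction; unification (if any)
        # happens below this line, inside the driver and the chip.
        if stage not in ("vertex", "pixel"):
            raise ValueError("unknown stage: " + stage)
        self.bound[stage] = shader_name

dev = Device()
dev.bind_shader("vertex", "transform_verts")
dev.bind_shader("pixel", "shade_pixels")
print(dev.bound)  # {'vertex': 'transform_verts', 'pixel': 'shade_pixels'}
```

The point of the sketch: a unified chip changes what sits behind `bind_shader`, not the fact that the programmer names a stage when binding.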
 

Duvie

Elite Member
Feb 5, 2001
16,215
0
71
Originally posted by: Turtle 1
Originally posted by: Duvie
Dude, it would be kept alive without you... Which is what I recommend...

I stated nothing that needs to be linked... You still have 3 threads to go back to and support your other statements... Don't worry, we are keeping tally... You are not very credible because you tuck and run from all the other requests for proof...

Nice day Turtelia...

I would like the mods to edit your name to that!!! Please..Belated B-day gift!!!

The links I gave in those threads had every bit of information in them to back up everything I said.



Wrong!! And that is a fact I can prove!!! We dissected each one of those links, and the pathetic thing about it is that they often, in their own words, refuted what you said... I mean, can you read??? Or is it just the comprehension part you suck at???

In this thread you are doing fine... I just have to stand up and defend people you want proof from when they make statements, when you haven't backed up nearly half the crap you posted...

AT management will come down on my side, trust me... I am sure they are well aware of the BS you have already posted...
 

Duvie

Elite Member
Feb 5, 2001
16,215
0
71
Originally posted by: Turtle 1
That's exactly correct. Then the flamers jumped in to hijack the thread. This had all the earmarks of a great thread till it was hijacked! Joker, I am still thinking that if the OP post is correct, it will be SM3+ and not SM4. But until we know if it's unified shaders, it's all just smoke. If it is, then the embedded memory and stuff is nothing more than a topic for those interested in ATI products to discuss. Also, if R520 has 300-350 million transistors, that would be in line with what the R500 has.



You are the OP, moron!!! You mean the link of the article in your original post...
 

DRavisher

Senior member
Aug 3, 2005
202
0
0
Originally posted by: xtknight
I don't think it makes it transparent to the developer. There will still be vertex and pixel processing. For example, the software developer may put something like this (maybe not this directly). He will still have to specify points for geometric transformation and shaders for pixel transformation. The developer will still know what he's doing (hopefully). :)
XShader->Type=D3DShaderType.Vertex;
My impression was that the developer would no longer have to think about this at all. But I may of course be way off.

Originally posted by: xtknight
I thought the only change was efficiency, where both pixel and vertex were processed in one pipeline, but not that pixel and vertex processing were evolving into one "omnipotent" shader.

I'd be very surprised if I was wrong about this aspect (not because of my huge ego, but because I think it would be infeasible otherwise).

The efficiency is what I think the hardware unified shader has to offer, which has nought to do with the API side of things.

Originally posted by: M0RPH
A GPU with unified shaders provides a unified instruction set across all the shaders. A non-unified GPU could probably mimic this, but at what kind of penalty? I have to think that there is some kind of benefit to having both a unified shader API and GPU, since that's what Microsoft seems to be promoting.

Also, many reports around the web say that Nvidia is moving to unified shaders in one of their upcoming chips (G80?). If the hardware makes no difference to the API, and Nvidia feels so strongly, like you, that specialized hardware is better than generalized, why are they giving in and joining the unified camp?

I think that the unified shader part will still have to do as much distinguishing between pixel and vertex shading as the specialized part.

Also, has M$ been promoting unified hardware shaders in any context other than the Xbox 360? Of course they will advertise the goodness of the hardware in the Xbox, but if they haven't actually stated in other contexts that unified hardware is a must-have for the imminent future, then this is more Xbox marketing than anything else.

If the G80 will indeed be unified shading hardware, then of course nvidia seems to agree with ATi on this point, but I think they actually want to wait for a smaller process so they can make the shaders more complex before changing (just my guess, nothing more).

Disclaimer: I am a simple nvidia fanboi with no education or degrees in any field connected with GPU design and such, so by all means feel free to not take me seriously.
 

Turtle 1

Banned
Sep 14, 2005
314
0
0
That's correct. And you guys think Nvidia is going to pull a rabbit out of its hat? It's also a repeat of the unauthored text. So you knew that Fast14 was part of the new R500/R520 GPUs, but not everyone else did.


Speculation only