
AMD's Richard Huddy on DirectX 11, Eyefinity, and the competition


Seero

Golden Member
Nov 4, 2009
1,456
0
0
I already explained where you misunderstand things..

Tessellation is not performance neutral; it takes a lot of shader power. What that white paper says is that the performance hit from rendering an equivalently detailed object as, say, a million raw polygons is greater than the hit from using tessellation to get the same effect.

Tessellation has ALWAYS hurt performance.. the whole point is that it does not hurt as much as rendering equivalently huge meshes... Nothing is free; you need to look up some of this stuff before you spout off about it.
I accept that. I never said tessellation itself is a gimmick; I said the tessellation demo is a gimmick. A mesh can be pre-tessellated, producing the same graphics. Hardware tessellation, however, can produce those graphics without pushing the entire pre-tessellated mesh through the pipeline, since the actual tessellation is done on the video card, which results in a performance gain.

The problem is that supporting both Dx10 and Dx11 requires two sets of art assets, which defeats the purpose. It will only pay off once most hardware supports Dx11, which is going to be a while. I give it three years; am I not generous?
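To put rough numbers on the trade-off, here is a back-of-envelope sketch in C++ (the mesh size, vertex layout, and amplification factor are all made-up illustrative figures, not numbers from any AMD document):

```cpp
// Compare the vertex data that must be stored and pushed through the
// pipeline for a pre-tessellated mesh versus a coarse control mesh that
// the GPU tessellator amplifies on the fly.
#include <cstdio>

int main() {
    const long long bytesPerVertex = 32;      // position + normal + UV, a common layout
    const long long coarseVerts    = 10000;   // low-poly control mesh sent to the GPU
    const long long tessFactor     = 16;      // per-edge amplification by the tessellator
    // A tessellation factor of N yields very roughly N*N times the vertices.
    const long long preTessVerts   = coarseVerts * tessFactor * tessFactor;

    std::printf("pre-tessellated mesh: ~%lld MB of vertex data\n",
                preTessVerts * bytesPerVertex / (1024 * 1024));
    std::printf("coarse control mesh:  ~%lld KB of vertex data\n",
                coarseVerts * bytesPerVertex / 1024);
    return 0;
}
```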
 

Schmide

Diamond Member
Mar 7, 2002
5,745
1,036
126
Yeah, I am saying that the hardware tessellator isn't new. If anything is new, it is the Dx11 API.

Then you're wrong. It now holds a dedicated stage in the rendering pipeline and a dedicated unit on the chip. It no longer needs a separate pass through the vertex pipeline to do its calculations.
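To make the "dedicated stage" concrete, here is a minimal C++/Direct3D 11 sketch of how the new stages are bound; device creation and shader compilation are assumed to have happened elsewhere, and the function name is purely illustrative:

```cpp
// Dx11 inserts hull shader -> fixed-function tessellator -> domain shader
// between the vertex and pixel stages, so no second vertex-pipeline pass
// is needed for tessellation.
#include <d3d11.h>

void BindTessellationStages(ID3D11DeviceContext* ctx,
                            ID3D11VertexShader*  vs,
                            ID3D11HullShader*    hs,  // new in Dx11
                            ID3D11DomainShader*  ds,  // new in Dx11
                            ID3D11PixelShader*   ps)
{
    // Geometry is submitted as control-point patches, not plain triangles.
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST);
    ctx->VSSetShader(vs, nullptr, 0);
    ctx->HSSetShader(hs, nullptr, 0);  // the tessellator runs between HS and DS
    ctx->DSSetShader(ds, nullptr, 0);
    ctx->PSSetShader(ps, nullptr, 0);
}
```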

By tax I mean the decrease in performance, not a tax from the government. I thought I didn't need to explain that, but apparently I overestimated something.

Dig a bit into tessellation from ATI and you will see that it was not supposed to tax performance. And in case you want to say "Seero saying it doesn't make it true," there is a document from AMD.
http://developer.amd.com/gpu_assets/Real-Time_Tessellation_on_GPU.pdf

It doesn't tax the geometry setup pipeline (edit: that much). It does, however, create an extra burden on the shaders, as there are more lighting calculations to be made.

By tax relief I mean that, if we are lucky, some code change will ease the performance hit. If not, a hardware change may be required. If we are very unlucky, which is most likely, a structural change is required.

I understand what you meant. It was a poor use of the term.

Please read my post carefully. The so-called "enhanced graphics" come from making the Dx10 graphics look worse than Dx9? Sorry, but the tessellation demo is that lame.

I think it looks amazing. You have a weird skew on the world. Is that what the world looks like through bias-colored glasses?

Is it hard to read what you quote?
Nvidia patented "Integrated tessellator in a graphics processing unit"
AMD patented "Rapid graphics bit mapping circuit and method"

The earlier AMD patent uses Bresenham calculations to remap some form of geometry. Although it is applied to a 2D bitmap, it could be considered tessellation, which is the reason the nVidia patent cited it as a technology it was building on. Again, all of this is moot, as it is cross-licensed between all the parties for the betterment of the industry.
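For reference, the Bresenham-style stepping mentioned above, in its standard textbook form (generic line-rasterizing code, not code from either patent):

```cpp
// Classic Bresenham line walk: visits every grid cell on the line from
// (x0,y0) to (x1,y1) using only integer adds and compares.
#include <cstdio>
#include <cstdlib>

void bresenham_line(int x0, int y0, int x1, int y1) {
    int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;                     // combined error term for both axes
    for (;;) {
        std::printf("(%d,%d)\n", x0, y0);  // plot the current cell
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }  // step in x
        if (e2 <= dx) { err += dx; y0 += sy; }  // step in y
    }
}
```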
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
...I think it looks amazing. You have a weird skew on the world. Is that what the world looks like through bias-colored glasses?
...
Dx11 does look good, but if you watch the switch from Dx11 to Dx10, you should realize that textures do exist in Dx10, even in Dx9. Stairs in WoW are not flat, so why are the stairs in those demos so flat? If you aren't wearing bias-colored glasses, you should realize that Dx10 looks much better than what was shown.

Again, re-read the original post and you should see that putting ATI down isn't my intention. How people understand it is another story.
 

Schmide

Diamond Member
Mar 7, 2002
5,745
1,036
126
Dx11 does look good, but if you watch the switch from Dx11 to Dx10, you should realize that textures do exist in Dx10, even in Dx9...

I think you're misinterpreting the demo in general. First, you're comparing it to a refined game. Second, you can't seem to see the forest for the trees. The whole idea of the demo is to show how the effect can enhance a certain level of geometry. It's just a primitive; the demo is not the end-all for the effect.

Think about the first DirectX 9 demo you saw, then compare that to WoW. Exactly.
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
I think you're misinterpreting the demo in general. First, you're comparing it to a refined game...
From the document above, (edit:) hardware tessellation is not about better graphics but better performance. I have yet to see it. The Unigine benchmarks show that Dx11 doesn't work the way the document describes.

I am not saying (edit:) hardware tessellation is hype, but clearly things aren't working the way they are supposed to, and those graphics can really be done in Dx10.

Now, of course, you may say it would hurt more if it were done in Dx10, but I have yet to see proof of that. The only real way to show the effect of Dx11 is to render the same scene at the same quality while letting the user see the wireframes. The Dx11 version should produce a higher FPS count as well as lower memory usage. Also, don't forget that Dx11 is supposed to utilize more cores, meaning a performance increase; I don't see any of that. Yet so many people are saying Dx11 rocks... I must be biased.
http://www.pcgameshardware.com/aid,...DX-11-Update-Radeon-HD-5970-results/Practice/
Same game, engine, card, and driver, tested by the same person.
5970@Dx9 128 FPS, 5970@Dx11 75 FPS
So ...
 


evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Don't stress yourself explaining how DX11 works to Seero; lots of members here have tried, and he still fails to understand. It seems he's jealous that his favorite company lacks any DX11 solution, which is a real shame, because if nVidia had a Dx11 solution today, prices would go down. Competition is always good, get that clear.

Tessellation is there to increase image quality with less performance impact than a high-polygon-count model would have, period. It is not meant to improve performance on its own, because most games today aren't geometry bound but shader, texture, or CPU bound.
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
Don't stress yourself explaining how DX11 works to Seero; lots of members here have tried...
Sorry, but I fail to see the point of your post, and what you said contradicts the AMD document about hardware tessellation.

Just because Nvidia will support it doesn't make it better, unless Nvidia's tessellation implementation actually does what it is supposed to do. If it did, though, they would have demonstrated the improvement at CES.

Somehow I have a feeling that you will treat me as a friend when GT100 comes out. But seriously, if you think your new card rocks, then you got what you paid for. Unless you are buying mine, there is no need to keep repeating features it comes with when you really have no clue what they do.
 

Daedalus685

Golden Member
Nov 12, 2009
1,386
1
0
...hardware tessellation is not about better graphics but better performance. I have yet to see it. The Unigine benchmarks show that Dx11 doesn't work the way the document describes...

Seero.. I've already told you twice that you are not understanding what tessellation does..

It is not supposed to deliver better performance outright. It is supposed to deliver better performance than producing something that looks as good by conventional means. It will obviously make the game run slower than without tessellation, but look much better. That is the point; otherwise the kind of visual improvement we see with tessellation would not be possible without dozens of times the power and memory.

Unigine works exactly as it is supposed to, and so does DX11. It is you who fails to understand what it is doing. There is nothing wrong with it.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
Seero.. I've already told you twice that you are not understanding what tessellation does...
QFT, excellent response.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
8,347
9,728
136
Unigine works exactly as it is supposed to, and so does DX11. It is you who fails to understand what it is doing. There is nothing wrong with it.

-I actually think Unigine could have done a much better job as a tessellation/DX11 benchmark (which is not to contest that it is using tessellation properly, only that it is misrepresenting what tessellation does).

My thoughts:
Unigine should do three passes in its benchmark:
- One pass would be the low-geometry version, to get a baseline
- The second pass should be the high-geometry mesh WITHOUT tessellation
- The third pass should be the tessellated hi-geo mesh

It should then compare the FPS results of the three passes to determine its final score. That way you not only determine the efficiency of the card's tessellation engine, but also demonstrate the main point: no loss of visual quality versus the hi-geo mesh while getting substantially better performance.

Say card A gets 60 FPS on the first pass, 20 FPS on the second pass, and 40 FPS on the third pass, while card B gets 60 FPS on the first pass, 25 FPS on the second pass, and 35 FPS on the third pass. Card B is the more powerful card by virtue of raw power, but it's not as good a DX11 card, as evidenced by its poorer tessellation score.
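A quick sketch of how such a three-pass score might be computed (the struct, names, and efficiency formula are all hypothetical; nothing like this exists in Unigine):

```cpp
#include <cstdio>

struct PassResults {
    double lowGeoFps;       // pass 1: low-geometry baseline
    double hiGeoFps;        // pass 2: high-geometry mesh, no tessellation
    double tessellatedFps;  // pass 3: tessellated hi-geo mesh
};

// Fraction of the hi-geo performance loss that tessellation wins back
// (0 = no better than brute force, 1 = free visual upgrade).
double TessellationEfficiency(const PassResults& r) {
    return (r.tessellatedFps - r.hiGeoFps) / (r.lowGeoFps - r.hiGeoFps);
}

int main() {
    const PassResults cardA{60.0, 20.0, 40.0};
    const PassResults cardB{60.0, 25.0, 35.0};
    std::printf("card A: %.2f\n", TessellationEfficiency(cardA));  // 0.50
    std::printf("card B: %.2f\n", TessellationEfficiency(cardB));  // 0.29
    return 0;
}
```

With these numbers, card B wins on raw hi-geo throughput but card A scores higher on tessellation efficiency, which is exactly the distinction the proposal is after.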

As it stands, the Unigine bench is reminiscent of the old 3DMark 2001 tests with their Lo-Fi/Hi-Fi versions. It just makes it look like MS were a bunch of liars.
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
-I actually think Unigine could have done a much better job as a tessellation/DX11 benchmark...
QFT

Include the memory usage for each pass too.
 
Last edited:

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
-I actually think Unigine could have done a much better job as a tessellation/DX11 benchmark...
Unigine is a demo of a DX11 engine using DX11 features. It's obvious that the visual quality is greatly enhanced when you run DX11 with all features enabled. What isn't obvious is the performance hit one would take should the same scene be rendered using more traditional methods. What you're asking is that the benchmark be designed to highlight the performance benefits of DX11 to the layman, which isn't what this benchmark is meant to do. It's just a demo of a DX11 engine, whose main goal I imagine is to be sold to developers.
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
Unigine is a demo of a DX11 engine using DX11 features...
Despite your belief, the point of Dx11 is to increase performance. SM5 extends HLSL with support for object-oriented programming, very much like the step from C to C++. While things written for Dx10 will run under Dx11, the effects are miles apart; in short, it really isn't backward compatible. To utilize Dx11, everything must be written in the OO environment, which Dx10 can't support, so making it look backward compatible requires two sets of code. However, SM5 should, in theory, reduce memory usage, cutting traffic through RAM and the buses and letting the GPU process more, which leads to a performance increase.

Tessellation has been in gaming for a very long time. Because tessellation is expensive, it has been better to pre-tessellate objects than to tessellate on the fly; both approaches existed before Dx11. The problem with tessellating on the fly is that it trades away performance. A hardware tessellator reduces the time it takes to tessellate on the fly: with tessellation done on the GPU, the CPU is freed up and the data sent over PCI-E is reduced, and since the CPU and buses are usually the bottleneck, a hardware tessellator inside the GPU makes on-the-fly tessellation possible without hurting FPS. Now, there may be a breaking point in the degree of tessellation before it starts to hurt FPS, but that is missing from the demo. A hardware tessellator has existed in ATI hardware since the HD2000 series, as most of you already know; it just wasn't being used.

The Compute Shader also increases GPU utilization, offloading computation from the CPU and thus reducing traffic, and therefore increasing performance. Besides that, Dx11 also increases the number of threads it uses from 1 to 3, allowing more CPU cores to be used and again increasing performance.
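For illustration, a minimal sketch of that multithreading using the real D3D11 deferred-context API (the two function names are illustrative and error handling is omitted):

```cpp
// Worker threads record draw calls into deferred contexts in parallel;
// the main thread replays the finished command lists on the immediate
// context. Requires a driver/device that supports command lists.
#include <d3d11.h>

void RecordOnWorkerThread(ID3D11Device* device, ID3D11CommandList** outList) {
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);

    // ... issue normal state changes and draw calls on `deferred` here ...

    deferred->FinishCommandList(FALSE, outList);  // bake the recorded work
    deferred->Release();
}

void SubmitOnMainThread(ID3D11DeviceContext* immediate, ID3D11CommandList* list) {
    immediate->ExecuteCommandList(list, FALSE);   // replay the worker's commands
    list->Release();
}
```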

So if there is no performance gain, then Dx11 is hype. In theory, Dx11 is 300% faster than the same thing written in Dx10 when it is CPU dependent. I am not even holding it to that; what I saw in practice was 100% slower. It is supposed to render objects just as fast but with much less memory usage. I am not seeing that either, since the Dx10 image looks like sh*t to begin with; it's comparing apples to oranges. CPU, GPU, and memory usage figures are completely missing.

While Unigine's demo may be attractive to laymen, it is next to a joke to those who really know this stuff. And yeah, Dirt 2 supports Dx9 and Dx11, but not Dx10. Backward compatible? Memory usage between Dx9 and Dx11 is more or less the same. Performance hit? Like a truck. Go on.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
So if there is no performance gain, then Dx11 is hype...
This is because they're using the performance gained from using DX11 as leverage to add more to the scene's complexity and still remain runnable/playable. Anyone, laymen included, can obviously see this, but for some reason that remains a mystery to me, you cannot. The thing is, no one gives a shit that DX11 runs DX9 pathways more efficiently. We already have hardware with more than enough capability to run said pathways without issue. People want to see what it can do BEYOND what DX9 and DX10 can do. Ironically you state:
While Unigine's demo may be attractive to laymen, it is next to a joke to those who really know this stuff.
which I will disagree with. What you seem to be demanding is that the DX10 version of the benchmark (and presumably, the DX11 version WITHOUT tessellation enabled) render the same scene as the DX11 version with tessellation, but using more traditional methods. This would take much longer to code, and no one would waste the time to be so redundant about something so obvious. People who are already in the know about DX11 and what it brings to the table are more interested in seeing its capabilities beyond what DX9 can do, not in seeing that it can do what DX9 does more efficiently. You're missing the fact that only a small minority of people actually know what framerate is, and even fewer actually care what it is. However, if you create a very life-like, detailed, and totally sweet alien model using tessellation (Aliens vs. Predator), anyone can appreciate that. There's no point in having newer technology if you don't use it to improve anything, and no one gives a shit about benchmark junkies.
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
This is because they're using the performance gained from using DX11 as leverage to add more to the scene's complexity...
Now say it with a straight face that you see the difference again after watching this:
Stalker Dx11 vs Dx10.1 vs Dx10 vs Dx9

In case moving scenes are affecting your judgment, try to say it without laughing at this.
Crysis Dx8>Dx9>Dx10>Dx11

If you can still maintain your stance, try it again after this:
Dirt 2 Dx9 vs Dx11

If you still can ... you win
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
Now say it with a straight face that you see the difference again after watching this...
None of those are of Unigine. Try to focus for more than 10 seconds and read more than two sentences; maybe then we can have a discussion.

And I'd love to use shitty quality YouTube videos to compare high resolution graphics, but I'm not retarded.
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
Seriously. It takes time to refine some things.
Needing time to refine is okay, but saying that it works now is a different thing. As I already said, there will be a Dx11.x if the problem has anything to do with the hardware. Again, I don't think Nvidia did it right either, so save your money for the next generation of video cards. As of now, Dirt 2 is the "big thing" for Dx11, and it actually runs much faster in Dx9, so why not just play in Dx9?

Hey, ATI fans who bought a 5xxx card, you are lucky; the price is good enough even without Dx11 support, unless you were smart enough to downgrade to a lower card just for Dx11. If you haven't bought one, just keep in mind that there is nothing new you are missing at the moment. Wait a bit until Fermi comes out and there will be another price cut.

If you do see big differences, like MrK6, then by all means buy whatever you need to enjoy Dx11.
 

Schmide

Diamond Member
Mar 7, 2002
5,745
1,036
126
Needing time to refine is okay, but saying that it works now is a different thing.

It works now, but the effect is nowhere near what it could be if any of the games were built from the ground up to use it in the right places with the right lighting.

I notice a difference, especially in some of the specular effects. It is not as dramatic as it could be, but that is no reason to discount it as viable; it does seem grafted on, though. It will probably take a full hardware cycle before we see its true potential. Most developers will pick up the new techniques after GDC Canada (Vancouver, May 6-7, 2010).
 

thilanliyan

Lifer
Jun 21, 2005
12,062
2,275
126
As of now, Dirt 2 is the "big thing" for Dx11, and it actually runs much faster in Dx9, so why not just play in Dx9?

Some of us (like me) would sacrifice some fps for better visual quality, which DX11 DOES give without performance tanking... and like others have said, DX11 isn't just about better performance.
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
To quote a game programmer:

http://www.hardforum.com/showpost.php?p=1035179015&postcount=76

DX11 is also more a service pack for devs than anything consumers will notice in new games. Stuff like tessellation has been possible for the past decade, albeit with a larger performance hit. I actually would be shocked if DX11 games looked significantly better than DX9/10 games, as that'd mean the devs of DX games have been slacking off all these past years.

DX11 isn't much of an advantage no matter how you cut it. It's more of a preference.
 

grimpr

Golden Member
Aug 21, 2007
1,095
7
81
Tessellation will be the new trend in measuring performance per fps per price. And it's about much more than gaming, since it already has massive appeal in the 3D editing/rendering world.
 

poohbear

Platinum Member
Mar 11, 2003
2,284
5
81
Very nice read, thanks for the heads up! I was also skeptical about how involved AMD is with game developers because I see no marketing on their part, but apparently they're very much involved, which is reassuring when I buy an ATI card.