Tessellation review by xbitlabs

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Tessellation is still in its infancy. Just like any other new technology, it will probably be another generation or two before any titles appear that make meaningful use of it. And by that time, tessellation hardware will have changed dramatically.

__________________

http://www.techreport.com/articles.x/19242/7
The 5830 and GTX 460 768MB are neck and neck, with no notable separation between them.
 

Madcatatlas

Golden Member
Feb 22, 2010
1,155
0
0
The final word on the matter should be:
At current hardware levels, meaning the GPUs of today, tessellation as shown in the Heaven benchmark is too taxing. Who wants a polygon/pixel slideshow?

Tessellation is here to stay, I think, and both Nvidia and AMD will work to make sure it gets the attention it needs in their upcoming generations of cards. And to be fair, Nvidia's offering is 6-7 months newer than AMD's, and the ball is now in AMD's court. What will they do at 28nm?
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
How are these facts? Do we know how much of each chip is dedicated to tessellation? From everything I have read, NVIDIA has a larger chip for GPGPU/CUDA support.

Can you back any of this up? :thumbsdown:

The only thing I see that is a fact (based on this article) is that NVIDIA stomps ATI when using tessellation, so much so that the GTX 460 can surpass the 5870. That's all that is important. In the end, no gamer cares about the hardware; they care about how their games play and look.

Wow, the GTX 460 smokes the HD 5870 in tessellation. Damn, it doesn't just stomp it, it squashes it like a little bug.

In STALKER: CoP, the performance difference between the HD 5870 and the GTX 460 is small at 1680x900, with a slight edge going to the GTX 460; the same story repeats at 1920x1080, but at 2560x1600 the HD 5870 is considerably faster, nipping at the heels of the more powerful GTX 480.

In Alien vs. Predator, the same story happens again: the GTX 460 holds a slight edge over the HD 5870 at all resolutions, but aside from the GTX 480 at 1680x900, none of these cards is playable at the higher resolutions.

In DiRT 2, the HD 5870 keeps the lead over the GTX 460 at all resolutions, nipping at the heels of the GTX 480 at 2560x1600.

In Metro 2033, a game that favors nVidia, the GTX 460 is unable to reach the HD 5870 at any resolution; it performs more like an HD 5830.

In Stone Giant, the GTX 460 and the HD 5870 are essentially matched.

In Unigine, the GTX 460 has a very slight edge over the HD 5870.

In brief, the GTX architecture is slightly faster than AMD's HD architecture in tessellation, period. By the time tessellation is widely used, none of these cards will be playable. :p And currently, the HD 5870 slightly outperforms the GTX 470 in what matters now: today's games. Tomorrow, better games and better architectures will arrive. The same story happened when the X800 was pitted against the 6800 Ultra: the X800 remained the faster solution for its entire life, and by the time SM3.0 was widely used, the 6800 Ultra was too slow for it. :)
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
In brief, the GTX architecture is slightly faster than AMD's HD architecture in tessellation, period.

Slightly faster? You have NVIDIA's 4th fastest chip holding its own against AMD's flagship GPU.

When you compare the 480 to the 5870 they are not even in the same league.

Why do you keep trying to spin this?
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Slightly faster? You have NVIDIA's 4th fastest chip holding its own against AMD's flagship GPU.

When you compare the 480 to the 5870 they are not even in the same league.

Why do you keep trying to spin this?

Spin? Look at your sig: comparing a cheap and slower GTX 460 against the HD 5870. Using your logic, the HD 5870 is as fast as the GTX 480 because of Battlefield: BC2 results. Please. :)
 

Madcatatlas

Golden Member
Feb 22, 2010
1,155
0
0
Slightly faster? You have NVIDIA's 4th fastest chip holding its own against AMD's flagship GPU.

It might be their 4th fastest, but more importantly, it's the only viable Nvidia card aside from SLI configurations of 470s or 480s for the more hardcore/enthusiastic of us. And even then, you would still get the noise/heat issues with those SLI setups.

Right now an SLI 460 setup seems very tempting. But knowing ATI's next offering is right around the corner... kinda hard to decide, isn't it?
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
In Crysis Warhead, at all resolutions, the GTX 460 is no match for the HD 5870, which is also faster in the minimum framerate department.

In BattleForge DX10, at all resolutions, the GTX 460 can't match the HD 5870; heck, even the HD 4890 can match it.

In BattleForge DX11, it can't match the HD 5870, and at 2560x1600 it is unplayable while the HD 5870 is totally playable.

In HAWX, it can't outperform the HD 5870; it's more of a match for the HD 5850.

In L4D, the HD 5870 is even faster than the GTX 480, and the GTX 460 loses slightly to the GTX 285, last-generation hardware!!!

In Battlefield: BC2, the GTX 460 1GB is far behind the HD 5870, except at 2560x1600, where none of these cards is playable, including the GTX 480.

In STALKER: CoP, the HD 5870 stomps the GTX 460 1GB in all scenarios; not even GTX 460 SLI can outperform the GTX 480.

In DiRT 2, the GTX 460 can't reach the HD 5870; it's more of a match for the HD 5850 again, and it even loses to it at 2560x1600.

In Mass Effect, the HD 5870 is on another performance level compared to the GTX 460 1GB, which even struggles to outperform previous-generation hardware like the GTX 285/275.

"and with just how similar the GTX 460 and the Radeon 5850 are in terms of die size and power consumption there’s clearly some flexibility on their part to change things. The Radeon 5830 must come down in price or go away entirely, it’s what happens to the 5850 that’s the question. We’ve seen the GTX 460 lock horns with the 5850, and while the 5850 is undoubtedly the faster gaming card the $300 price point no longer makes as much sense as it once did with a $230 1GB GTX 460 below it. AMD either needs a 5840, or a price drop on the 5850 to bring its price more in line with its performance."

Source: http://www.anandtech.com/show/3809/nvidias-geforce-gtx-460-the-200-king

Does it matter that the GTX 460 can match the HD 5870 in only one thing, tessellation? What about gaming performance overall? Its performance there is far from spectacular, and by the time tessellation is widely used, the GTX 460 will be long gone. Snap out of your green dreams, buddy, because in reality the GTX 460's overall performance is no match for the HD 5870.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
FACT: ATI's implementation, when taking into account the size of the tessellation hardware, is better than Nvidia's.

FACT: Nvidia's cards are performing better than ATI's cards due to hardware size and not software.

It's not just about hardware size... and nobody ever said anything about software.
The thing is that ATi has a single serial tessellator and triangle setup engine.
You can't just "make it bigger", it doesn't work that way.
nVidia has completely redesigned that part of the pipeline in order to be able to set up triangles in parallel.
That's not just throwing more transistors at the problem... This is a change that is about as significant as the introduction of unified shaders was.

ATi has their work cut out for them; this is a hurdle they have yet to clear.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Tessellation is here to stay, I think

It seems to be the only way forward if you want to increase geometric detail.
Other solutions are just getting too expensive in terms of memory footprint and bandwidth requirements. It's the old adage of working smarter, not harder.
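
To put rough numbers on that, here is a back-of-the-envelope sketch in Python. Every figure in it is a made-up illustrative assumption (the 32-byte vertex layout, the 10,000-vertex mesh, the 64x amplification, the 1K displacement map), not a measurement:

```python
# Back-of-the-envelope comparison: shipping a pre-tessellated dense mesh
# vs. a coarse mesh plus a displacement map that the GPU amplifies on
# the fly. All numbers are invented for illustration.

VERTEX_BYTES = 32  # e.g. position + normal + UV in a packed layout

def mesh_bytes(vertex_count):
    return vertex_count * VERTEX_BYTES

coarse_vertices = 10_000                 # low-poly control mesh kept in memory
dense_vertices = coarse_vertices * 64    # same surface stored pre-tessellated
displacement_map_bytes = 1024 * 1024     # 1K x 1K height map, 8 bits per texel

print(f"pre-tessellated mesh:       {mesh_bytes(dense_vertices) / 2**20:.1f} MiB")
print(f"coarse mesh + displacement: "
      f"{(mesh_bytes(coarse_vertices) + displacement_map_bytes) / 2**20:.1f} MiB")
# ~19.5 MiB vs ~1.3 MiB: the dense mesh costs an order of magnitude more
# memory (and bus bandwidth) for the same on-screen detail.
```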

and both Nvidia and AMD will work to make sure it gets the attention it needs in their upcoming generations of cards.

nVidia has already given it quite a bit of attention. Currently they have 15 tessellation units in parallel. They can just build on this design and scale up the number of units.

With ATi I'm getting déjà vu of the DX10 geometry shaders... They were hailed as a great feature of DX10 that would bring us tessellation, displacement mapping and everything... The problem was, since neither nVidia nor ATi came up with an implementation that performed adequately, nothing much ever came of geometry shaders. Quite often a more brute-force approach, using more geometry or having vertex shaders perform redundant work, was faster than the 'proper/elegant' way using geometry shaders.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Does it matter that the GTX 460 can match the HD 5870 in only one thing, tessellation?

Considering that is the topic of this thread... yeah, it matters 100 percent! (But if you want to go off topic, I bet the 460 beats it in PhysX, 3D, and Folding@home as well. :p)

ATI's old architecture is starting to show its age in newer DX11 games. Until they actually ship a new architecture (2011?), they are basically a generation behind NVIDIA.

Right now an SLI 460 setup seems very tempting. But knowing ATI's next offering is right around the corner... kinda hard to decide, isn't it?
Actually, it's very easy to decide... I have no money, so I'm sticking with my GTX 260. ^_^

SLI 460s look to be the most promising and affordable SLI setup I've seen. I'm sure when Mafia II comes out I'm going to have to think about an upgrade... maybe.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Considering that is the topic of this thread... yeah, it matters 100 percent! (But if you want to go off topic, I bet the 460 beats it in PhysX, 3D, and Folding@home as well. :p)

Folding@home, true, but MilkyWay@home works better on AMD hardware; PhysX doesn't matter, it's too irrelevant to the market; and 3D isn't ready to take off yet. All of those are moot points: by the time they are widely used, the GTX 4xx series will have run out of steam. (I have yet to see the killer app for PhysX :D :D )

ATI's old architecture is starting to show its age in newer DX11 games. Until they actually ship a new architecture (2011?), they are basically a generation behind NVIDIA.

You are right on this one. The HD architecture has been used for 4 generations and has given great performance for so long, even in DX11, showing that it has the power to keep up with nVidia's greatest, a testament to AMD's engineering. But I think they will need a new architecture after the HD 5x00.

Actually, it's very easy to decide... I have no money, so I'm sticking with my GTX 260. ^_^

Well, the GTX 460 isn't a huge step up from what you have, so you are doing well.

SLI 460s look to be the most promising and affordable SLI setup I've seen. I'm sure when Mafia II comes out I'm going to have to think about an upgrade... maybe.

It's affordable, but seeing it lose to the GTX 480 in games like Just Cause 2 is unappealing to me; I would pay a little more and get a GTX 480. :)
 

epidemis

Senior member
Jun 6, 2007
794
0
0
It seems to be the only way forward if you want to increase geometric detail.

I feel it is kind of pointless to bump the polygon count up; nowadays people are so good at concealing the lack of geometric detail with bump maps and smart texturing :)
 

Scali

Banned
Dec 3, 2004
2,495
0
0
I feel it is kind of pointless to bump the polygon count up; nowadays people are so good at concealing the lack of geometric detail with bump maps and smart texturing :)

I disagree.
Although games look pretty decent, they're still not quite at Pixar RenderMan quality.
So the way forward: tessellation.
 

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
nVidia has already given it quite a bit of attention. Currently they have 15 tessellation units in parallel. They can just build on this design and scale up the number of units.

Nvidia has 16 units on the full GF100; one is disabled on the 480 and two on the 470.

All ATI cards from the 5500 to the 5870 have the same tessellation hardware on board: a single DX11-compliant unit.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Nvidia has 16 units on the full GF100; one is disabled on the 480 and two on the 470.

Yes, so 15 are currently used, best case.
They could easily scale the design up to 32 units or more on next-gen hardware.

All ATI cards from the 5500 to the 5870 have the same tessellation hardware on board: a single DX11-compliant unit.

Yes, and that's the problem. You can't really scale up or down when you only have a serial implementation.

Do you understand what I'm saying? ATi can't just add a second unit, let alone 15 or 16. They don't have any of the required control logic. They just feed all triangles through the pipeline serially, and there is your bottleneck: the rest of the chip is parallel but cannot be kept fed, so performance quickly drops off toward low-end, not-so-parallel levels. All those shader units have to push their work through the same single tessellator, so throughput is very limited. In order to scale up, they would need to completely redesign that part of the pipeline to feed the units in parallel, and that takes a whole lot of control logic that ATi doesn't have.
They took the easy way out: a minimal implementation that is DX11-compliant but isn't very useful, because it's a huge bottleneck, especially on the higher-end cards (aside from higher clock speeds, they tessellate no better than the low-end cards of the series; the tessellator doesn't scale up with the rest of the processing units on the GPU).
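
To illustrate the point, here is a toy sketch of that bottleneck argument (essentially Amdahl's law). The throughput numbers are invented purely for illustration and don't model any real GPU:

```python
# Toy model: one serial tessellator feeding an array of parallel shader
# units. All rates are invented; this does not model any real GPU.

def effective_throughput(parallel_units, unit_rate, serial_rate):
    """Triangles/cycle for the whole pipeline: it can never run
    faster than its slowest stage."""
    return min(parallel_units * unit_rate, serial_rate)

SERIAL_RATE = 1.0  # hypothetical rate of the single tessellator

for units in (2, 8, 32):
    t = effective_throughput(units, unit_rate=1.0, serial_rate=SERIAL_RATE)
    print(f"{units:2d} parallel units -> effective throughput {t}")

# Prints 1.0 in every case: adding shader units changes nothing,
# because everything still funnels through the one serial stage.
# That's why "just make the tessellator bigger" isn't an option.
```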

You keep reiterating the same basic facts, but you don't exactly show any sign of understanding the fundamental differences here.
Try reading this bit on nVidia's implementation:
http://www.anandtech.com/show/2918/2

You'll see that ATi has their work cut out for them.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Here are some more tessellation comparisons, some demos from ATi and nVidia, and from the Microsoft SDK:
http://www.geeks3d.com/20100407/geforce-gtx-480-vs-radeon-hd-5870-dx11-tessellation-comparison/

One can clearly see the difference in scaling there when the tessellation is turned up.
For example, the SubD11 sample...
From the lowest to the highest setting, nVidia goes from 468 fps to 80 fps, so basically only 17% of the original performance is left (but with 31 times the number of triangles(!), so not as bad as you might initially think).
The Radeon goes from 354 fps to 9 fps, so less than 3% of the original performance is left. That's an order of magnitude bigger drop.

To look at it another way...
With the lowest tessellation setting, the Radeon reaches about 76% of the GeForce's performance.
With the highest tessellation setting, the Radeon drops to 11% of the GeForce's performance. Again, an order of magnitude.
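
For reference, here is the arithmetic behind those percentages as a quick Python check, using the SubD11 fps figures quoted above (GTX 480 vs HD 5870, per the linked comparison):

```python
# Recomputing the SubD11 percentages quoted above from the fps figures.
geforce_low, geforce_high = 468.0, 80.0  # GTX 480: lowest vs highest setting
radeon_low, radeon_high = 354.0, 9.0     # HD 5870: lowest vs highest setting

print(f"GeForce keeps {geforce_high / geforce_low:.1%} of its performance")    # ~17.1%
print(f"Radeon keeps {radeon_high / radeon_low:.1%} of its performance")       # ~2.5%
print(f"Radeon vs GeForce, lowest setting:  {radeon_low / geforce_low:.1%}")   # ~75.6%
print(f"Radeon vs GeForce, highest setting: {radeon_high / geforce_high:.1%}") # ~11.3%
```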

The conclusion is clear: the more tessellation you add, the further the GeForces will pull ahead.
Which implies the following: with enough tessellation used, cheaper GeForce cards will be able to pull ahead of more expensive Radeon cards.
Tessellation is a very dangerous weapon in the world of DX11 gaming. If nVidia can persuade enough developers to use enough tessellation in their games, the result is that price/performance will be tilted to nVidia's advantage.
Tessellation is going to be a far more dangerous weapon for nVidia than PhysX.
 

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
You keep reiterating the same basic facts, but you don't exactly show any sign of understanding the fundamental differences here.
Try reading this bit on nVidia's implementation:
http://www.anandtech.com/show/2918/2

You'll see that ATi has their work cut out for them.

You keep saying the same thing over and over again... I am saying that Nvidia's design is really bad in terms of die size and efficiency. When you have a single unit doing quite well against Nvidia's design, something is wrong.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
I am saying that Nvidia's design is really bad in terms of die size and efficiency. When you have a single unit doing quite well against Nvidia's design, something is wrong.

Have you designed many ICs? Serious question.

I have been involved in the design cycle for ICs for years, and there are many, many trade-offs made on the long road to designing an IC.

You have design time (are you given 3 months or 6 months to finalize a given circuit?), you have design resources (are you given 10 design engineers or 2 to finalize that circuit?), you have yield-entitlement concerns (do you go with potentially more problematic circuits in exchange for a little higher clockspeed potential?), and you have debug ramifications: if you design a circuit that is a tiny bit denser but requires 20% more time on the tester to verify functionality, is that a good use of your production cost budget?

There are so many trade-offs that go into determining the constraints on a circuit design that reducing it to something as simplistic as die size or transistor count, in order to judge whether something is wrong with the design, is just silly.

For one thing, you have no insight into the development resources or priorities given to any particular circuit element. You can have marketing pressures dictate the priorities above and beyond engineering (NetBurst, anyone?).

That's not to say poor project management never happens; to be sure, inefficiencies in design and development occur. I'm just saying you can't make such a claim based on so little information about how the design choices were made.

If a design performs at 0.9x the competition in a certain function, but the designer spent only 0.5x what the competition spent to design in that function, then you can't argue against its efficiency. We don't know what Nvidia spent on this specific function; arguing that it is inefficient without knowing the budgetary/time/priority/trade-off matrix is pointless. Hell, it's pointless even if you did know all the specifics.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
You keep saying the same thing over and over again... I am saying that Nvidia's design is really bad in terms of die size and efficiency. When you have a single unit doing quite well against Nvidia's design, something is wrong.

I guess you don't get what I'm saying.
ATi's single-unit design cannot scale upwards, and it is already a bottleneck in their current GPUs.
They have no way forward.
And no, they aren't doing well; it's just that tessellation isn't a common feature in games yet. You can get away with an inefficient unit if games don't tax it.
But nVidia will make sure that they do.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
We don't know what Nvidia spent on this specific function; arguing that it is inefficient without knowing the budgetary/time/priority/trade-off matrix is pointless. Hell, it's pointless even if you did know all the specifics.

The bottom line is: ATi isn't doing the same thing as nVidia.
You can't compare the two.
It's like saying "An Atom is more efficient than a Core 2 Duo because it uses fewer transistors".
Yeah, it does, but in the process it gives up a lot of the features that allow parallelism to be extracted from the code and allow much more efficient use of the execution back end. You pretty much NEED that functionality to reach a certain level of performance; otherwise you will be limited too much by things such as latency and bandwidth, because you cannot execute other tasks in parallel.
It's not entirely a coincidence that Anandtech compared it to in-order vs. out-of-order CPUs.
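
To make the analogy concrete, here is a toy Python model of why that extra control logic pays off. The instruction mix and latencies are invented, and the out-of-order case is an idealized best-case bound, not how a real scheduler works:

```python
# Toy illustration of in-order vs. out-of-order execution: count cycles
# for a short instruction stream where "load" is a 100-cycle cache miss.
# Purely illustrative; real CPUs are enormously more complicated.

MISS_CYCLES = 100
instrs = ["load", "add", "add", "add", "load", "add", "add", "add"]

def in_order_cycles(instrs):
    # Each instruction waits for the previous one, so every miss
    # stalls the whole pipeline for its full latency.
    return sum(MISS_CYCLES if op == "load" else 1 for op in instrs)

def out_of_order_cycles(instrs):
    # Idealized best case: independent adds execute underneath the
    # miss latency, so the misses overlap with useful work.
    loads = instrs.count("load")
    adds = len(instrs) - loads
    return max(MISS_CYCLES, adds) + loads

print("in-order:            ", in_order_cycles(instrs), "cycles")     # 206
print("out-of-order (ideal):", out_of_order_cycles(instrs), "cycles") # 102
```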
 

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
We don't know what Nvidia spent on this specific function; arguing that it is inefficient without knowing the budgetary/time/priority/trade-off matrix is pointless. Hell, it's pointless even if you did know all the specifics.

I never said I knew the specifics of Nvidia's MASTER plan. The simple fact is that the current architecture, in its current form, is not worth the amount of space it takes up on the die. This is coming from FPS-per-die-size calculations. Does this calculation take all aspects into account? No, of course it does not.

We can argue all day long about the marketing department putting constraints on the engineers, or vice versa. The only thing that matters is the end result. I never said the product will be better or worse a year from now.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
54
91
I never said I knew the specifics of Nvidia's MASTER plan. The simple fact is that the current architecture, in its current form, is not worth the amount of space it takes up on the die. This is coming from FPS-per-die-size calculations. Does this calculation take all aspects into account? No, of course it does not.

We can argue all day long about the marketing department putting constraints on the engineers, or vice versa. The only thing that matters is the end result. I never said the product will be better or worse a year from now.

Which is gaming? Performance? What is the end result you are referring to? Let's be honest: if you were really concerned with the end result, which is performance in games, then transistor efficiency would mean squat to you. You want to game, not constantly think about the performance of each transistor as you make your way through a game.

Caring about die size was "invented" when there was nothing else to argue about. Now it's the norm, even though the average gamer couldn't care less. IMHO.
 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
Which is gaming? Performance? What is the end result you are referring to? Let's be honest: if you were really concerned with the end result, which is performance in games, then transistor efficiency would mean squat to you. You want to game, not constantly think about the performance of each transistor as you make your way through a game.

Caring about die size was "invented" when there was nothing else to argue about. Now it's the norm, even though the average gamer couldn't care less. IMHO.

Well, the inefficiency of Fermi is killing its sales. Don't you agree?