City Of The Future Tessellation demo


Gikaseixas

Platinum Member
Jul 1, 2004
2,836
218
106
I'm saying that it takes time to get something as complex as that to work perfectly.
It probably took nVidia at least a year to perfect their PolyMorph engine, perhaps longer.
As far as I can tell, AMD has not yet started on development, so even if they DO choose to develop something similar, which they certainly will be capable of, it's going to take a while until we have hardware in our hands with an actual working implementation.

Stop being so negative. You, and many others, sound like you just can't handle reality.

Thanks for the well-elaborated answer. Really.

They might be working on it now, or may have been working on it for some time; nobody knows for sure.

I'm not being negative; you usually expect the worst from them, that's all.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
I'm not being negative; you usually expect the worst from them, that's all.

I expect from AMD what they tell us to expect, either directly or indirectly.
AMD has not yet spoken of a new-and-improved tessellator design... they have just released a new range of videocards which have the same tessellation bottleneck as the previous generation... and AMD is trying to downplay the importance of tessellation in the media.
I think I know what to expect.
 

Leadbox

Senior member
Oct 25, 2010
744
63
91
Let me ask you a simple question:
Microsoft defined the DirectX 11 standard with a maximum tessellation factor of 64. Does it sound like a good implementation if the hardware starts bottlenecking at a tessellation factor of 11?
That is nowhere even near the maximum of the DirectX 11 standard.

Erm.....Hmmmm!
Don't know! But beyond a certain level I'm thinking maybe it's pointless, diminishing returns etc.?
How about adding another tessellator, would that change anything?
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
The problem with those demos is that they fully utilize one or more GPUs just to do this one thing... and this one thing looks amazing, but a game requires MANY things being done at once.
Nowadays we are finally seeing characters in games rendered as beautifully as the tech demos for DX7-class GPU hardware... because where a DX7 GPU could render ONE such character in an empty void, a modern DX10/11 GPU can render dozens of them at once, in a room, with lights, dynamic shadows, AA, and with programmable shaders also physics and AI, etc etc etc.

It's not console porting that is limiting PC games from looking like that, it's the fact that a game is more than what's in those tech demos.

What I found the most amazing is the way tessellation made those statues that looked like a guy. I completely underestimated its capabilities and assumed it just smooths things, but it actually procedurally generates content... the fact that it was able to generate the muscles and helmet and other fine details was amazing. And with 10x the PCIe bandwidth's worth of data being generated, there is a significant advantage to using tessellation in real time rather than creating such finely detailed objects to begin with (I previously assumed that tessellation was just a way to be lazier about model creation rather than a way to enable higher-quality models).
 
Last edited:

ugaboga232

Member
Sep 23, 2009
144
0
0
The point was that HAWX 2 and Civ 5 (as well as Crysis, by the way) use technology that is well beyond what current consoles are capable of. Civ 5 also makes use of DirectCompute.
So the claim that games won't go beyond what current consoles are capable of is patently false. Plenty of games use PC-only features.

They still use a very similar method, just with higher-res assets, more post-processing effects, and better shaders. If you built a game from the ground up with tessellation as you want, consoles and non-DX11 PCs would have much lower graphics. The problem with tessellation is that to use it to its maximum efficiency, you have to design your game differently than if you want it to be compatible with consoles and older PCs (otherwise you are stuck with what we have today, and no one wants to maintain two separate engines).
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Erm.....Hmmmm!
Don't know! But beyond a certain level I'm thinking maybe it's pointless, diminishing returns etc.?

Not really. You could almost say that tessellation works the other way around. The more tessellation you use, the more memory and animation overhead you can save. Or conversely, the more tessellation you use, the more detail you can get at a very low cost.

How about adding another tessellator, would that change anything?

Not really, as it's a two-dimensional problem, as I already said.
Roughly speaking, let's assume we have a single quad.
Tessellation factor:
2x2 will give us 4 quads.
4x4 will give us 16 quads.
..
11x11 will give us 121 quads.
..
16x16 will give us 256 quads.
...
64x64 will give us 4096 quads.

As you can see, it scales up quadratically.
Adding a second tessellator will only make it twice as fast, in the best case. So assuming that a single tessellator will get stuck at factor 11, adding a second tessellator will 'only' get you to about level 15-16, not to level 22 as you may have thought.
You really need to add a whole array of units, as nVidia has done, else there's no hope of scaling anywhere near 64.
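That quadratic growth, and why doubling the tessellator count only moves the bottleneck by a factor of roughly sqrt(2), is easy to verify with a few lines of Python (a back-of-the-envelope sketch, not a model of any real tessellator):

```python
import math

def quads(factor: int) -> int:
    """Quads produced by tessellating one quad at factor x factor."""
    return factor * factor

# Output geometry grows with the square of the tessellation factor.
for f in (2, 4, 11, 16, 64):
    print(f"factor {f:2d} -> {quads(f):4d} quads")

def reachable_factor(base_factor: int, num_tessellators: int) -> float:
    """If one tessellator saturates at `base_factor`, N ideally-scaling
    tessellators sustain the same quad throughput at only sqrt(N) times
    that factor, because the workload grows quadratically with the factor."""
    return base_factor * math.sqrt(num_tessellators)

# Doubling the tessellator count moves a factor-11 bottleneck to ~15.6, not 22.
print(round(reachable_factor(11, 2), 1))  # 15.6
```

To get from a bottleneck at factor 11 to the full factor 64 this way, you would need on the order of (64/11)^2 ≈ 34 times the tessellation throughput, which is why an array of units is the only realistic route.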
 

RobertR1

Golden Member
Oct 22, 2004
1,113
1
81
I'm not in denial, I'm probably the only one here who actually understands tessellation. So I'm the only one who actually has an informed opinion on the matter.

You might find more who understand tessellation at Beyond3d. Try posting there also.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
The problem with those demos is that they fully utilize one or more GPUs just to do this one thing... and this one thing looks amazing, but a game requires MANY things being done at once.

Well yes, this is tessellation taken to the extreme... but still, this is a 500x(!) increase in detail.
If you take even a fraction of that, it's still going to be dramatically improving games, and that can easily be pulled off with a single GPU in a complete game.

what I found the most amazing is the way tessellation made those statues that looked like a guy, I completely underestimated its capabilities and assumed it just smooths things, but it actually procedurally generates content...

Exactly, that is the key.
This also explains why most current games have 'fake' tessellation.
They were not designed for tessellation, so the information for generating the content is not stored in the geometry.
Which is why they mostly do what TruForm already did many generations ago: they just add a few triangles to 'smooth' things, by guessing how the surface is shaped, based on the normal vectors and positions of the geometry.
This often leads to 'blown up' effects, and doesn't really look good. It also doesn't save any memory and processing power, because you still use the detailed geometry as a basis for your tessellation.
Tessellation in DX11 can do so much more.

If you have a content generation pipeline which is set up for tessellation, you can generate the detail offline, for older hardware. If you want to do it the other way around, it's going to be much harder.
Theoretically there isn't really a limit to how detailed you want to make it, so tessellation is very easy to scale up and down for various levels of hardware. Which is all the more reason not to artificially limit it to the lowest common denominator.
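As an illustration of that scalability, here is a minimal Python sketch: one coarse patch plus a displacement function stands in for a pre-modelled dense mesh. The displacement function here is a made-up placeholder for real displacement-map data, but it shows how the same source content yields whatever vertex density the target hardware can afford:

```python
import math

def displacement(u: float, v: float) -> float:
    # Hypothetical fine detail (muscles, helmet ridges, etc.) defined
    # procedurally; in a real pipeline this would sample a displacement map.
    return 0.05 * math.sin(20 * u) * math.cos(20 * v)

def tessellate_patch(factor: int):
    """Subdivide the unit patch at the given factor and displace each vertex.
    The same source data produces any level of detail: factor 4 for a weak
    GPU, factor 64 for a strong one."""
    verts = []
    for i in range(factor + 1):
        for j in range(factor + 1):
            u, v = i / factor, j / factor
            verts.append((u, v, displacement(u, v)))
    return verts

print(len(tessellate_patch(4)))   # 25 vertices
print(len(tessellate_patch(64)))  # 4225 vertices from the same patch
```

The point is that scaling up or down is just a change of one number at runtime; no alternate art assets are needed.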
 

Scali

Banned
Dec 3, 2004
2,495
1
0
You might find more who understand tessellation at Beyond3d. Try posting there also.

I doubt it, that site is owned by Dave Baumann, an AMD PR guy. They only 'understand' tessellation the way AMD 'explains' it. Which is not what you see in this demo.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Why don't you explain tessellation to all of us, to prove that you understand the concept fully.


You bear the burden of proving your knowledge, not me. I never claimed to "probably" know more about tessellation than anyone else on here.

That said, to my knowledge, the point of hardware tessellation is so an application can send limited data about key points and shapes and how it would like the GPU to stretch the shapes, instead of the mountain of data necessary to describe each individual triangle. So from a slab of flatness, the GPU knows where to cut it up and stretch it out, as shown in the video you linked to. A practical benefit of this is that it helps keep memory bandwidth from clogging up with all the data that would otherwise be needed to describe the 3D model. I know this is not a sophisticated explanation, but my tessellation knowledge is not on trial here, nor is yours. I added your quote to my sig line to poke fun at your attitude, not your technical knowledge. We don't even disagree that NV's mini-tessellators parceled out works better than AMD's bolted-on fixed tessellator.
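To put rough numbers on that bandwidth point, here is a hypothetical back-of-the-envelope comparison (position data only, with illustrative sizes I'm assuming) between uploading one 16-control-point bicubic patch and uploading the mesh it expands into at the DX11 maximum factor of 64:

```python
BYTES_PER_VERTEX = 3 * 4  # x, y, z as 32-bit floats (positions only)

def patch_upload_bytes() -> int:
    # One bicubic patch: a 4x4 grid of control points.
    return 16 * BYTES_PER_VERTEX

def mesh_upload_bytes(factor: int) -> int:
    # A patch tessellated at factor x factor has (factor + 1)^2 vertices.
    return (factor + 1) ** 2 * BYTES_PER_VERTEX

f = 64
ratio = mesh_upload_bytes(f) / patch_upload_bytes()
print(f"patch: {patch_upload_bytes()} B, mesh at factor {f}: "
      f"{mesh_upload_bytes(f)} B (~{ratio:.0f}x more bus traffic)")
```

Even this crude estimate, which ignores normals, UVs, and index buffers, puts the fully expanded mesh at a couple of hundred times the data of the patch it came from.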

Anyway, I can support my statements without needing to know ANY technical details of DX11 tessellation. I already have, earlier in this thread. I can flesh it out for you, though:

There is no mass-adoption of DX11 tessellation in games because:

1. Consoles make up the vast majority of video game sales worldwide, and none of them have DX11 capabilities.
2. To increase revenues and lower development costs, most game developers produce cross-platform games and thus are limited by the lowest common denominator, which is usually either the Wii, XBox360, or PS3. Sometimes the Wii gets left behind because it is the lowest-spec'd console graphically.
3. There are no announcements for a 2011 console with DX11 capabilities.
4. Even if there were a secret 2011 console release with DX11, given historical rates of market adoption, mass adoption of DX11 tessellation in consoles won't happen until 2012 or later.
5. Therefore, for at least through 2011, consoles effectively have no DX11-compatible tessellator. (The Xbox tessellator is not compatible with DX11 tessellation.)
6. Therefore, most game developers will code primarily with console capabilities in mind. Therefore the core of the game will be DX9-caliber. If gamedevs do implement DX11 features such as tessellation, it will be to improve graphics/gameplay but the game must still be playable and look nice in DX9.

As for the few gamedevs who a) make PC games and b) spend time implementing DX11 tessellation:

One may buy a Barts GPU without too much concern about its tessellation capability for at least 2010-2011 (and probably longer than that), because:

1. In April 2010, AMD owned ~100% of the DX11 market. By October 2010, AMD still owned ~90% of the market and claimed to have shipped 25 million DX11 GPUs. (Note that the SHS for Sept. shows that ~85% of Steam users with DX11 GPUs had AMD GPUs.)
2. The Barts/Cayman/Antilles rollout in late 2010 is unlikely to hurt AMD's DX11 marketshare and in fact may increase it, since the GTX460 is no longer unchallenged in the $150-250 price range.
3. AMD is trumpeting its marketshare, GPUs sold, and (in your view) FUD about tessellation in order to convince gamedevs to not "over"tessellate, as can be seen in the marketing slides that AMD sent around re: Barts and in Huddy's comments about overtessellation.
4. NV's Polymorph engine scales up/down with their GPUs, not like AMD's fixed tessellation capabilities in Evergreen.
5. Of the gamedevs who tack on DX11 tessellation in their upcoming games, the above would probably convince them to use DX11 tessellation in moderation so as not to cripple performance on AMD GPUs (or on, say, a GTS 450; keep in mind that the market for high-end GPUs is small relative to the market for lower-end GPUs), which make up ~90% of the DX11 GPU market.
6. The amount of tessellation needed to cripple Barts performance is likely to seriously impact NV's GTS 450 and even the GTX460 GPUs as well, thus providing a suboptimal user experience for consumers owning either Barts or GF104 who run the game at such settings.
7. Gamedevs prefer to deliver good user experiences to get good reviews/user experiences and thus to sell more games. They also don't like diverting human resources away from DX9 visuals towards DX11 visuals unless they expect some sort of reward for it.
8. Given the above, even if a gamedev designed a game with extreme tessellation for some reason, there would probably be a medium-tessellation option. And judging by what I've seen so far, the visual difference between medium and extreme tessellation is small.
9. Therefore the extreme tessellation that NV displayed in its City video at GTC 2010 is unlikely to be found in games from now through 2011 at the very least.
10. We've already seen Barts hold its own against GF104 in Civ V, and even in the HAWX2 benchmark it scores good fps, just not as stellar as GF100 or 104.

City-of-the-Future-like extreme tessellation won't be in games anytime soon. I'm not thrilled about this, since I'm a PC gamer and not a console gamer. But it is what it is. We'll continue getting console ports built on the same old DX9 chassis with some DX11 features bolted on. *sigh* Even PC-exclusives won't necessarily push DX11. StarCraft II, the biggest PC-exclusive game of 2010, doesn't even go past DX9. Civ V is nice but it's one game among thousands--and it runs fine on Barts anyway, so it's clearly not pushing extreme tessellation.
 

thilanliyan

Lifer
Jun 21, 2005
12,085
2,281
126
As far as I can tell, AMD has not yet started on development, so even if they DO choose to develop something similar, which they certainly will be capable of, it's going to take a while until we have hardware in our hands with an actual working implementation.

Can you please share with us why you believe that to be the case?
They have not even released their full lineup of new cards. You could at least wait until then (end of November?) to pass judgement... I'm pretty sure we still won't have any games with Unigine's or this demo's tessellation levels for AMD's current lower tessellation performance to matter (i.e. become unplayable in DX11 on AMD cards).
 
Last edited:

Scali

Banned
Dec 3, 2004
2,495
1
0
You bear the burden of proving your knowledge, not me. I never claimed to "probably" know more about tessellation than anyone else on here.

I have already proved it many times over the past few days, in the various threads discussing tessellation.
The responses made me draw the conclusion that apparently not many people quite grasp the subject.
As you see from taltamir's response, he was not fully aware of what tessellation does either. It's not 'just' a smoother (well it is in the way AMD intends to use it, with low tessellation factors).
 

Scali

Banned
Dec 3, 2004
2,495
1
0
So when needed (assuming the size isn't prohibitive) they could just add a tessellator to every SIMD and be done with it then.

The 'just adding' part is the difficulty.
Anandtech compared it to adding OOO to a CPU.
It's quite difficult to do two or more things in parallel that have technically always been serial operations, and have the entire processor produce the exact same results as a serial processor in every possible situation.
http://www.anandtech.com/show/2918/2
Anandtech said:
While the PolyMorph Engine may sound simple in its description, don’t let it fool you. NVIDIA didn’t just move their geometry hardware to a different place, clone it 15 times, and call it a day. This was previously fixed-function hardware where a single unit sat in a pipeline and did its share of the work. By splitting up the fixed-function pipeline like this, NVIDIA in actuality created a lot of work for themselves. Why? Out of order execution.
OoO is something we usually reserve for CPUs, where high-end CPUs are built to execute instructions out of order in order to extract more performance out of them through instruction level parallelism. OoO is very hard to accomplish, because you can only execute certain instructions ahead of other ones while maintaining the correct result for your data. Execute an add instruction that relies on a previous operation before that’s done, and you have problems.

So that's why it's going to take a while to add more tessellation units to your hardware.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
I have already proved it many times over the past few days, in the various threads discussing tessellation.
The responses made me draw the conclusion that apparently not many people quite grasp the subject.
As you see from taltamir's response, he was not fully aware of what tessellation does either. It's not 'just' a smoother (well it is in the way AMD intends to use it, with low tessellation factors).

I said you bear the burden of proof for your statements, but that it didn't matter because your knowledge of tessellation is not on trial here. It still isn't.

My issue is not with the technical details of tessellation, just that you've been beating Barts up for not having its own Polymorph array. My point is that DX11 adoption isn't that fast among developers to begin with, and even among gamedevs using DX11 tessellation, most will choose to cater to--you guessed it--the lowest common denominator (AMD's fixed tessellator unit). I didn't mean to threadjack though, so I will now shut up about my predictions about how soon we will see CityoftheFuture-caliber tessellation in games.

P.S. B3D is not the AMD fanboy club you are trying to make it seem like. And I've seen you post there before, anyway. :)

/threadjack
 

Scali

Banned
Dec 3, 2004
2,495
1
0
So basically, it is just your own belief. I think both companies make it a point to not talk about unreleased hardware, so unless you have an inside source (do you?), you have no proof either way.

AMD's 6900-series is up for release less than a month from now, right?
Why would they bother making a big deal out of the alleged 'over-tessellation' that is used in various games/benchmarks at the release of their new midrange products, if their soon-to-be-released high-end products would be able to beat nVidia in these alleged 'over-tessellated' scenarios anyway?
It didn't sound at all like they were saying "Okay, our midrange hardware is not THAT good at tessellation, but just watch what we have in store for you next month!".
No it really sounds like "Shizzle, our upcoming high-end hardware is going to get creamed in those tessellation-scenarios as well".


We allow cussing in P&N and OT, not in the tech forums.

Moderator Idontcare
 
Last edited by a moderator:

ugaboga232

Member
Sep 23, 2009
144
0
0
We know very little about the 6970, and what we do know shows it faster than the 480 even in Unigine. AMD didn't release much info about the 6870, so why would they tip their hand about the 6970 this time?
 
Last edited:

Scali

Banned
Dec 3, 2004
2,495
1
0
My issue is not with the technical details of tessellation, just that you've been beating Barts up for not having its own Polymorph array.

Well, you've seen what DX11 tessellation can do in the video here.
None of AMD's hardware can do this. They cannot turn low-detail geometry into mega-detailed geometry. That's a problem, since apparently DX11 was designed to do exactly this.
I think it's perfectly normal to criticize any hardware that does not implement a standard in a meaningful way. Heck, we criticize Intel all the time for their IGPs, which are basically too slow to run anything DX10/11 anyway.

My point is that DX11 adoption isn't that fast among developers to begin with, and even among gamedevs using DX11 tessellation, most will choose to cater to--you guessed it--the lowest common denominator (AMD's fixed tessellator unit).

I guess you are convinced of this, no matter how many examples I will produce of the opposite. Game developers have always adopted new technology early on. They never choose to cater to the lowest common denominator. They choose to add optional effects for whatever new hardware there is. This practice is as old as gaming itself.
It's pointless trying to discuss this with you, because you simply will not see the facts.

P.S. B3D is not the AMD fanboy club you are trying to make it seem like. And I've seen you post there before, anyway. :)

The reason I stopped posting there is that they are an AMD fanboy club.
AMD pulled a similar stunt a few years ago, when shadowmapping was coming up. nVidia had added some extensions to Direct3D which allowed better quality and performance with shadowmaps. AMD was trying to downplay it all...
But in less than two years DX10 arrived, and nVidia's extensions had become a standard feature. Now every game uses this shadowmapping technology (and many games already used it in the DX9 era, from as early on as Far Cry).
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
Is there an HD version of that video? Because I'm having trouble seeing the added detail in the city. The water was just amazing though.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
We know very little about the 6970, and what we do know shows it faster than the 480 even in Unigine. AMD didn't release much info about the 6870, so why would they tip their hand about the 6970 this time?

Why would they tell developers to decrease the tessellation/image quality in an upcoming game, if their 6900-series could easily render the higher detail version as well as nVidia's offerings?
The fact that they've tried to get Ubisoft to lower the quality in HAWX 2 is pretty much an admission that the 6900 series is not going to run it very well.
 

thilanliyan

Lifer
Jun 21, 2005
12,085
2,281
126
AMD's 6900-series is up for release less than a month from now, right?
Why would they bother making a big deal out of the alleged 'over-tessellation' that is used in various games/benchmarks at the release of their new midrange products, if their soon-to-be-released high-end products would be able to beat nVidia in these alleged 'over-tessellated' scenarios anyway?
It didn't sound at all like they were saying "Okay, our midrange hardware is not THAT good at tessellation, but just watch what we have in store for you next month!".
No it really sounds like "Shit, our upcoming high-end hardware is going to get creamed in those tessellation-scenarios as well".

They have not actually said anything about 6900 cards yet. Read into it whatever you will. I will wait to pass judgement. If the new cards are better at tessellation, well, they would have addressed one of the drawbacks of ATI's current cards, but it still will not make much of a difference to games, as there are none (and IMO won't be for a while, but I'd love to be wrong because I am a bit of a graphics whore) that utilize the levels of tessellation that you promote. If their new cards are no better at tessellation, boo on them, but it still probably won't make much of a difference in the grand scheme of things.
 
Last edited:

ugaboga232

Member
Sep 23, 2009
144
0
0
"I guess you are convinced of this, no matter how many examples I will produce of the opposite. Game developers have always adopted new technology early on. They never choose to cater to the lowest common demoninator. They choose to add optional effects for whatever new hardware there is. This practice is as old as gaming itself.
It's pointless trying to discuss this with you, because you simply will not see the facts."

Lolwut? Seriously, only a few developers (Crytek is one of the more noteworthy) use new technology to the level shown in this tech demo, or really any tech demo. That is why it's a tech demo and not a real engine that can be used for consumer graphics. It takes at least one GTX 480 (sounds like 2+) to run the tessellation portion, and that leaves out most of nVidia's customers as well. One problem with the PolyMorph is that it scales with the number of SIMDs, so on lower GPUs there is less tessellation performance.

There is absolutely 0 evidence that AMD is trying to lower quality on Hawx 2. Please stop making stuff up when you are a self-titled graphics king (aka we plebeians of the forums know naught of tessellation).

Also, the 6970 is not the only GPU AMD will be selling. So they wish to improve performance for all their chips. Doesn't take a genius to figure that out.
 
Last edited:

Scali

Banned
Dec 3, 2004
2,495
1
0
"I guess you are convinced of this, no matter how many examples I will produce of the opposite. Game developers have always adopted new technology early on. They never choose to cater to the lowest common demoninator. They choose to add optional effects for whatever new hardware there is. This practice is as old as gaming itself.
It's pointless trying to discuss this with you, because you simply will not see the facts."

Lolwut? Seriously, only a few developers (Crytek is one of the more noteworthy) use new technology to the level shown in this tech demo, or really any tech demo. That is why it's a tech demo and not a real engine that can be used for consumer graphics. It takes at least one GTX 480 (sounds like 2+) to run the tessellation portion, and that leaves out most of nVidia's customers as well. One problem with the PolyMorph is that it scales with the number of SIMDs, so on lower GPUs there is less tessellation performance.

That's not even the point.
Point is that game developers have adopted every new version of DirectX very early on. They adopted 3D acceleration itself very early on as well, when only 3dfx offered an accelerator and barely any gamers owned one.
Currently DX11 has been on the market for about a year, and already we have quite a few titles making use of it. We also have lots of games making use of DX10. This is in light of all consoles being only DX9-level hardware.
So clearly consoles don't really dictate what game developers do.

There is absolutely 0 evidence that AMD is trying to lower quality on Hawx 2. Please stop making stuff up when you are a self-titled graphics king (aka we plebeians of the forums know naught of tessellation).

The HAWX 2 developers are quoted as saying exactly that; I believe it was on Tom's Hardware.

Also, the 6970 is not the only GPU AMD will be selling. So they wish to improve performance for all their chips. Doesn't take a genius to figure that out.

Not exactly. They always go for the high-end. The halo-effect.
Doesn't take a genius to figure out that video cards have always been marketed this way.

Edit: found it:
http://www.tomshardware.com/reviews/radeon-hd-6870-radeon-hd-6850-barts,2776-2.html
It remains to be seen whether AMD can get the developer to add a slider for geometry detail. At least, that's the current plan, according to company representatives.

So AMD wants them to add an option for lower detail. Originally they wanted to REPLACE the code with a lower-detail version, as stated in the mail sent to review sites:
http://hardocp.com/news/2010/10/20/benchmark_wars
AMD has demonstrated to Ubisoft tessellation performance improvements that benefit all GPUs, but the developer has chosen not to implement them in the preview benchmark.
 
Last edited: