
Here's what AMD didn't want us to see - HAWX 2 benchmark

But how do you know their implementation is standard? You don't think it is possible the code is much more nVidia focused (aka taking stress off of texture and memory stuff)? I just find it awfully convenient this happens on an nVidia-sponsored game and with Ubisoft's checkered history. Also, why would Ubisoft want to take advantage of the nVidia DX11 market when it's 6 to 9 times smaller than AMD's? Isn't that a little strange?

However, I do agree that the tessellation hardware could use some work, but Cayman may yield further improvements.
 
But how do you know their implementation is standard?

Tessellation is a standard DX11 feature, and it works on Radeons, doesn't it?

You don't think it is possible the code is much more nVidia focused (aka taking stress off of texture and memory stuff)?

And what if it is? It's standard DX11 code. nVidia's tessellator is a lot more powerful.
Not because nVidia is being unfair, but because nVidia's engineers spent a lot of time on perfecting their tessellation circuitry, resulting in what they call the PolyMorph engine.

I just find it awfully convenient this happens on an nVidia sponsored game and with Ubisoft's checkered history.

How convenient is it then, that nVidia's hardware has the same huge performance difference in AMD's own detail tessellation sample?
Why don't we stop the conspiracy theories, and just accept that nVidia developed a MUCH MUCH MUCH better tessellator, which even runs AMD's own DX11 code (optimized for AMD's hardware) MUCH MUCH MUCH better.

Also, why would Ubisoft want to take advantage of the nVidia DX11 market when it's 6 to 9 times smaller than AMD's? Isn't that a little strange?

No it's not. As pointed out before, DX11 in itself is only a small percentage of the total market in general.
Besides, DX11 is a standard. Just because AMD's *current* DX11 hardware isn't good at it, doesn't mean that AMD's future hardware won't be.
In fact, I think this is all the more reason NOT to bow down to AMD's wishes. By sticking with the more advanced tessellation code, you will be putting more pressure on AMD to improve their tessellator in future hardware.
Just like luckily most developers did not bow down to nVidia's wishes when their GeForce FX couldn't handle SM2.0. Most games used SM2.0 heavily anyway, where only Radeons could realistically run these games in SM2.0 mode. The result: Radeon users got full use of their hardware, and nVidia's next generation of hardware had SM2.0 performance that could rival the Radeons (and the nVidia users could play all those older SM2.0 titles without a problem as well).

However, I do agree that the tessellation hardware could use some work, but Cayman may yield further improvements.

I doubt it. If they have a parallel tessellator, why would they NOT put it in the 6800 series?
And if their upcoming hardware (wasn't it going to be released next month?) is going to be great at tessellation anyway, why would you put so much energy into downplaying tessellation now?
It would have been much better to say "Okay, our new midrange hardware isn't that good at tessellation. Wait for our upcoming high-end hardware, it's going to beat nVidia at their own game!"
I think it's already pretty obvious that AMD does not have an answer to nVidia's tessellation hardware. If you know anything about designing GPUs, you'll realize that what nVidia has done is not exactly trivial. Compare it to HyperThreading on CPUs if you like. Intel has had it for years, and AMD still cannot come up with something similar. Not even their upcoming CPUs will have it.
 
But isn't Hawx 2's implementation basically offloading most tasks that would normally be stressing the memory and texture subsystems onto the tessellator? Bear with me, because I do not have your level of expertise in this field.

Still, there are about 25 million AMD cards that won't perform well on this (more like 10 million when you get rid of the low-end stuff), and it's going to take Ubisoft a while to capitalize on the benefits of forward-thinking graphics (which I think are good, and I do agree 100% that as of now, AMD needs some serious tessellator work). While it helps us, the consumers who get it later (or the subset that gets it now with an nVidia card), it does no favors for Ubisoft itself.

I'm back to the "let's wait for Cayman and AMD's driver to find out" stance.
 
Compare it to HyperThreading on CPUs if you like. Intel has had it for years, and AMD still cannot come up with something similar. Not even their upcoming CPUs will have it.

I just wanted to say that hyperthreading gives a performance boost of around +35% with 2 threads (1 physical core, using hyperthreading).

AMD has 2 cores in 1 module, where they share a few things... which should give a performance boost of around +80% with 2 cores (2 physical cores, sharing).

I think AMD's solution is better, because when you don't need the extra core, the 2 cores become 1 core that has 2 cores' worth of resources all to itself, becoming a super single core. That way, if a game can use many cores, it works; if it can't, it still works.
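The rough scaling figures above can be put side by side in a quick sketch (a minimal Python illustration; the +35% and +80% numbers are the post's own rough estimates, not measurements):

```python
# Back-of-the-envelope comparison of the two scaling figures quoted above:
# SMT (one core, two threads) vs. a CMT module (two cores sharing a front end).
# Both percentages are illustrative forum numbers, not measured data.

def unit_throughput(scaling_gain):
    """Throughput of one two-thread unit, relative to a single plain core."""
    return 1.0 + scaling_gain

smt_unit = unit_throughput(0.35)   # 1 physical core running 2 threads -> ~1.35x
cmt_unit = unit_throughput(0.80)   # 2 cores sharing a module          -> ~1.80x

# A hypothetical chip built from four such units:
print(f"4 SMT cores:   ~{4 * smt_unit:.1f}x one core")
print(f"4 CMT modules: ~{4 * cmt_unit:.1f}x one core")
```

The point of the comparison is only that a shared-module design scales closer to two full cores than SMT does, at the cost of more die area per unit.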

Hyperthreading is great though... it took a long time for AMD to come up with anything similar.


I'm curious how those Bulldozers perform... though I'm still thinking Intel will have the performance crown. Intel is just a bit more ahead of the game.
 
But isn't Hawx 2's implementation basically offloading most tasks that would normally be stressing the memory and texture subsystems onto the tessellator? Bear with me, because I do not have your level of expertise in this field.

I've already explained it many times, also pointing to Microsoft's own DX11 documentation.
Search for my posts, I'm not going to bother explaining it again.
Short version: the whole point of tessellation is to generate detail on the GPU, rather than storing it in memory. Getting the stress off the memory is one of the main points of tessellation in the first place!
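That trade-off can be sketched with some back-of-the-envelope arithmetic (all counts and the 32-byte vertex layout are illustrative assumptions, not figures from the game):

```python
# Rough sketch of why tessellation reduces memory footprint and bandwidth:
# a coarse control mesh is stored in VRAM and amplified on the GPU, instead
# of the full-detail mesh being stored and streamed every frame.
# All numbers below are illustrative assumptions.

BYTES_PER_VERTEX = 32          # e.g. position + normal + UV, a common layout

def mesh_bytes(vertex_count):
    return vertex_count * BYTES_PER_VERTEX

coarse_vertices = 10_000       # control cage kept in video memory
amplification   = 64           # extra vertices generated per control vertex on-GPU

stored_full   = mesh_bytes(coarse_vertices * amplification)  # pre-tessellated mesh
stored_coarse = mesh_bytes(coarse_vertices)                  # tessellated on the GPU

print(f"Pre-tessellated mesh in VRAM:      {stored_full / 1e6:.2f} MB")
print(f"Coarse mesh + on-GPU tessellation: {stored_coarse / 1e6:.2f} MB")
```

Under these assumptions the stored data shrinks by the amplification factor, which is exactly the "get the stress off the memory" argument.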

Still, there are about 25 Million AMD cards that won't perform well on this (more like 10 million when you get rid of the low end stuff)

Well, boohoo then.
AMD should just have made better cards then. And if people aren't satisfied with the performance of their AMD cards, perhaps they should just buy nVidia.
Why is everyone trying to find the best possible solution for AMD?
Who cares about AMD anyway? This is about competition and technical progress. nVidia is competing with superior technology. If AMD wants to do something about it, they should improve their tessellator.

Really, no offense, but are you people all n00bs or something? Game developers have NEVER waited for technology to trickle down into the mainstream before using it.
It's all about cutting-edge features. We've seen it numerous times... for example with 32-bit code, when the majority still had 16-bit machines. Or 3D accelerator support, when the first Voodoos were only just on the market... or soundcard support, when hardly anyone even knew what a soundcard was in the first place. Or a more recent example... how about DX11 games? Hardly anyone had Windows Vista/7 and a DX11 card, yet AMD made sure that a lot of titles supported DX11 shortly after their launch of the 5000-series (which somehow was not a bad thing, even though nVidia did not have DX11 hardware until months later?)
 
Yes... nVidia has superior technology... I have no idea how you can defend such a claim looking at the die size difference between the two competitors.

Certainly you could say nVidia has different tech, but superior: hell no.

If anything, looking at die size, AMD has the superior tech, especially if the metric you compare them on is gaming. If AMD added components to equal nVidia's die size for Fermi, it would probably beat it in most if not all gaming benchmarks (except the obvious one or two).
Coupled with this, you have to count in the fact that Cypress was 7-8-month-old tech when Fermi finally launched.
 
Yes... nVidia has superior technology... I have no idea how you can defend such a claim looking at the die size difference between the two competitors.

Certainly you could say nVidia has different tech, but superior: hell no.

If anything, looking at die size, AMD has the superior tech, especially if the metric you compare them on is gaming. If AMD added components to equal nVidia's die size for Fermi, it would probably beat it in most if not all gaming benchmarks (except the obvious one or two).
Coupled with this, you have to count in the fact that Cypress was 7-8-month-old tech when Fermi finally launched.

Yea right... problem is, AMD's hardware doesn't scale with tessellation.
Look at the 5770 and 5870: 5870 has a much bigger die size, but its tessellator is exactly 0% faster. AMD could throw another few dozen mm² of die size at it, and still they wouldn't be any faster at tessellation.
Get a clue. There's more to designing a GPU than just throwing more transistors at the problem.
And who cares that Cypress was 7-8 months old? Barts is about 7-8 months newer than Fermi, and still it's worse.
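The 5770-vs-5870 observation is essentially Amdahl's law with a fixed-function serial stage; a quick sketch (the 60/40 workload split is an illustrative assumption, not measured data):

```python
# Amdahl-style sketch of the point above: if a single fixed-function
# tessellator is the stage that doesn't scale, adding more shader hardware
# barely moves a tessellation-heavy frame. The fractions are illustrative.

def speedup(parallel_fraction, hw_scale):
    """Overall speedup when only `parallel_fraction` of the work scales
    with `hw_scale`x more hardware; the rest is a fixed serial stage."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / hw_scale)

# Suppose 60% of the frame is shading that scales with die size, and 40%
# is spent in the (unscaled) tessellation stage:
print(speedup(0.60, 2.0))   # double the shaders: ~1.43x, well under 2x
print(speedup(0.60, 1e9))   # unlimited shaders: capped at 1/0.4 = 2.5x
```

This is why "just throw more transistors at it" doesn't help if the tessellator itself is the bottleneck.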
 
Scali, this is where I will disagree. You were pretty civil for a while, but now it seems you just don't like AMD. Why do you care if AMD gets better performance? If you don't care about them, that shouldn't affect you. nVidia is also competing with far larger silicon (500+ mm² to ~250 mm² for Barts), so one could argue AMD's is better because of its efficiency.

Also, AMD-backed DX11 games give good performance on both cards and show no evidence of AMD-specific optimizations. How is more games supporting a new standard bad? That is what we want AMD and nVidia to do: back games without locking out features or drastically reducing performance on competitors' cards.

Also, your whole boohoo paragraph does not relate at all to what I said. Ubisoft wants profits but their game runs slowly on the majority of newer hardware. People wait until new cards come out and buy the game later at a reduced price. Why would Ubisoft want this? They want profits but it seems they are either acting stupidly or there is something else at work here because they are just giving up Day 1 sales at no loss to themselves.


Barts also doesn't compete with Fermi (GF100) on price, power, or chip size. And if AMD wanted a 500 mm² chip, I would bet it would beat all the nVidia chips. Even a 6870X2 beats all the nVidia cards right now.
 
Let's look at it another way. While Nvidia labored long and hard over Fermi, AMD jumped in with a tessellator that was "good enough" because they were the only DX11 game in town for six months which allowed them to capture the DX11 market. Now that Nvidia has their superior tessellator, AMD is on their 2nd generation with a faster and cheaper GPU. And they improved their tessellator significantly ... so DX11 H.A.W.X. 2 will be "good enough" for the guys running their $169 HD 6850 at 1920x1200 at 8xAA, at 70 FPS average, no less.

AMD made a design decision as Nvidia did. AMD is counting on the assumption that a superior tessellator won't make a difference in practically playing a PC game until their next-gen NI GPU is out on the 28 nm process. If you want to play Metro 2033 smoothly on a GTX 480, just turn off tessellation 😛

Ubi has not ignored AMD; they have ensured that their new game plays great on AMD HW with everything maxed out. The Nvidia cards are perhaps overkill 🙂

 
If it's true that they didn't implement the optimisations that would benefit both card manufacturers, but took money to do it nVidia's way, then I hope it bites them in the arse (and that they have a hard time selling their game, because 85%+ of the market uses AMD cards).

Also, why would Ubisoft want to take advantage of the nVidia DX11 market when it's 6 to 9 times smaller than AMD's? Isn't that a little strange?

The game can be played in DX-10 also. DX-10 graphics market is much larger than DX-11.

Still, there are about 25 Million AMD cards that won't perform well on this (more like 10 million when you get rid of the low end stuff).

http://www.hardware.fr/medias/photos_news/00/29/IMG0029781.gif

10 million users will enjoy the game with fewer FPS than on NV. You people have to understand that the game is playable on AMD's 5000 and 6000 series with tessellation, and on the 4000 series in DX-10. It just runs faster on NV's GF400 series, that's all.
 
This thread has more hits/responses than the actual "reviews" thread. lol.

http://forums.anandtech.com/misc.php?do=whoposted&t=2114612

beating_a_dead_horse.jpg
 
But isn't one of the selling points of the game its DX11 features? And it still raises the question of why you would knowingly hamper 10 million people who might buy your game. It just doesn't make much sense to me unless there is some deal with nVidia.
 
Why do you care if AMD gets better performance?

The only way AMD can get better performance is by lowering image quality. I don't want games to have lower image quality just because one company can't design better hardware. This affects me, obviously.
What does not affect me is games that look as good as they can on nVidia and AMD hardware, even if that means that AMD hardware needs a slightly lower quality setting (everyone seemed fine with nVidia DX10 cards vs AMD DX11 cards as well, right?)

nVidia is also competing with far larger silicon (500+ mm² to ~250 mm² for Barts), so one could argue AMD's is better because of its efficiency.

I don't care about that. I only care about things I actually notice, like price and power consumption.
nVidia has a long history of selling larger dies than AMD, while maintaining competitive pricing and power consumption.

Also, AMD-backed DX11 games give good performance on both cards and show no evidence of AMD-specific optimizations.

Ironically enough, many of these DX11 games ended up running better on nVidia's hardware than AMD's.
There is also no evidence of nVidia-specific optimizations in HAWX 2 or Civ5. They just use the tessellation feature of DX11, which happens to work better on nVidia hardware, because nVidia has a far more advanced tessellator. That's not nVidia's fault, nor is it the developer's fault that they want to extract as much performance and IQ from the cards as they possibly can. That's what we all want, isn't it?

Also, your whole boohoo paragraph does not relate at all to what I said. Ubisoft wants profits but their game runs slowly on the majority of newer hardware.

The majority of gamers (over 90%) do not have DX11 hardware in the first place. That was my point.
If what you're saying is true, we shouldn't have DX11 games in the first place. But apparently we do, so your idea of how the gaming market works is apparently wrong.

Barts also doesn't compete with Fermi (GF100) on price, power, or chip size. And if AMD wanted a 500 mm² chip, I would bet it would beat all the nVidia chips. Even a 6870X2 beats all the nVidia cards right now.

Right, I see you have NO idea of what the PolyMorph engine is.
 
so DX11 H.A.W.X. 2 will be "good enough" for the guys running their $169 HD 6850 at 1920x1200 at 8xAA, at 70 FPS average, no less.

That's what I said.
The problem is that 70 fps is considerably lower than what nVidia cards are getting, and the AMD camp cannot accept that they're playing second fiddle when it comes to tessellation.

AMD made a design decision as Nvidia did. AMD is counting on the assumption that a superior tessellator won't make a difference in practically playing a PC game until their next gen NI GPU is out on the 28 nm process.

And obviously nVidia made the decision to try and get as much tessellation into as many games as they possibly can.
In a few years, we'll be able to tell whose strategy turned out to be the most successful.
Right now, all we see is AMD PR panicking because nVidia's strategy is starting to play out in actual games, and it doesn't make AMD look good.
 
Once again, how does an AMD driver which you don't have to use affect an nVidia or AMD user?

You honestly believe AMD is so inept that with double the transistor budget, they would still lose? We shall find out soon enough Scali.

Also, what does the PolyMorph engine have to do with CF 6870's beating the 480 GTX handily with a similar amount of transistors, price, and power consumption? Doesn't that mean AMD is superior?

Also, your hate of AMD is getting kind of bothersome. You keep saying that AMD is not to be trusted with their performance boost when we have no clue. We can see, when they release the driver, who's right and who's wrong.


I don't think AMD PR is panicking, and AMD's top cards aren't even out yet. If you want to refer to everyone who doesn't believe nVidia is superior in every way as the AMD camp, that's fine. But even in tessellation-heavy games, AMD cards are usually just as playable as nVidia cards.
 
What are you talking about?

H.A.W.X. 2 runs great in DX 11 with tessellation on at 1920x1200 - and with 8xAA - 71 FPS with a HD 6850 - a $169 card. That is plenty good enough, unless I am reading the chart wrong.

It's going to be a while before Scali gets his tessellation fantasy realized in PC gaming. 😛

 
Once again, how does an AMD driver which you don't have to use affect an nVidia or AMD user?

No, we're talking about AMD wanting to influence game developers to replace their tessellation code with AMD's 'optimized' version (whatever that means).
Drivers are a different matter.

You honestly believe AMD is so inept that with double the transistor budget, they would still lose? We shall find out soon enough Scali.

As I say, unless they have something like the PolyMorph engine (which I have seen no evidence of, so far), then yes, they're still going to lose. 5770 vs 5870 clearly proves my point.

Also, your hate of AMD is getting kind of bothersome.

I was about to say the same of the personal attacks and mudslinging.
I don't hate AMD. Heck, it's a company, not a person. Why would I have any kind of emotional hangup such as hatred, about any company?
 
I didn't really read the benchmarks. Everyone made such a big stink that I thought AMD was getting like 20 fps or lower when nVidia had 60+. In that case, while nVidia has an advantage, does it really matter? When the first game where nVidia is playable and AMD cards aren't comes out, then we can talk.

Still didn't comment on the CF 6870 example. The 5 series is last gen now. If you just put 2 Barts on one die (think the old Intel quad and dual cores), that would beat all nVidia cards.

And it really seems like you do have an emotional hangup on how they are not to be trusted. If you had no emotional hangups, it would be pretty logical to think AMD is not stupid enough to lie like this. Besides, you win anyway: Ubisoft is not adding the code. nVidia obviously influences game developers with their own code, so I don't see why AMD helping (like you say they should, instead of whining at nVidia) is bad now. Double standards?
 
It's going to be a while before Scali gets his tessellation fantasy realized in PC gaming. 😛

Perhaps, but you have to start somewhere.
I think it's pretty obvious that tessellation is here to stay though. Future games and future hardware will only focus more on tessellation performance, as it's the obvious way forward for both performance (reduce memory footprint and bandwidth requirements for geometry, and reduce workload for various aspects of animation and shading) and image quality (higher polycount, advanced LOD scaling, better antialiasing).
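The LOD-scaling point can be sketched as a per-patch tessellation factor computed from camera distance (a minimal illustration; the distances and linear falloff are arbitrary assumptions, though the 1-64 clamp matches D3D11's maximum tessellation factor):

```python
# Sketch of distance-based tessellation LOD: nearby patches get a high
# tessellation factor (dense triangles), distant patches stay cheap.
# The near/far distances and the linear falloff are arbitrary assumptions;
# 64 is the D3D11 upper limit for a hull-shader tessellation factor.

def tess_factor(distance, near=10.0, far=500.0, max_factor=64.0):
    """Per-patch tessellation factor, fading from max at `near` to 1 at `far`."""
    t = (distance - near) / (far - near)
    t = min(max(t, 0.0), 1.0)        # clamp to [0, 1]
    return max(1.0, max_factor * (1.0 - t))

for d in (5.0, 100.0, 1000.0):
    print(f"distance {d:>6}: factor {tess_factor(d):.1f}")
```

In a real engine this computation lives in the hull shader's patch-constant function, but the idea is the same: geometry detail scales with the viewer instead of being fixed in the mesh.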
 
You honestly believe AMD is so inept that with double the transistor budget, they would still lose?

Also what does the PolyMorph engine have anything to do with CF 6870's beating the 480 GTX handily with a similar amount of transistors, price, and power consumption? Doesn't that mean AMD is superior?

My friend, we are talking about TESSELLATION performance in this thread, not DX-9/10. Yes, in other games AMD's 5000 and 6000 series are much more efficient than the GF400 series, but in DX-11 and tessellation, AMD's architecture is not as efficient as NV's architecture is, plain and simple.
 
Agreed. You are talking about the future. AMD agrees. But for the here and now, they have brought an inexpensive GPU with an improved tessellator which is good enough for current and near-future games. It was an AMD design choice.

AMD plays H.A.W.X. 2 at good frame rates with tessellation on, even with their midrange GPUs - the 5700 class. I'd say Cayman or Antilles will have no trouble with DX 11 games for the next couple of years, even though Nvidia may technically have superior tessellation. Long before tessellation becomes an issue for gamers, NI will be introduced.



 