Second Thoughts on Getting a 7870


Madpacket

Platinum Member
Nov 15, 2005
2,068
326
126
Yeah, after the Crysis 2 tessellation debacle I'm somewhat hesitant about future CryEngine games, and some random MMO that looks poorly optimized in that graph doesn't inspire confidence. You make some good points about the OP keeping the card for 3 years, since the 670 will last longer; if he can afford it, by all means go with it. I was just making the point that a 7850 is fast enough for gaming at 1080P with lots of eye candy.

Heck, I'm currently running 5760x1200 on my overclocked 7850 and it handles most games (including BF3) rather well; I just have to lay off traditional AA and stick with FXAA.

As far as future game engines go, I would guess the new Unreal Engine will likely be one of the most important ones for both teams to optimize for, considering how heavily it'll be used by console developers.
 

Gordon Freemen

Golden Member
May 24, 2012
1,068
0
0
Ya, it is overhyped when people say the HD7850 can reach the high-end cards. People make it sound like it can be overclocked to surpass a GTX580, but the GTX580 has 20% overclocking headroom too, which makes it as fast as a stock 7970. Actually, in BF3 a GTX580 even beats an HD7950. An overclocked 7850 cannot touch an HD7970 on air cooling and barely comes close to a 7950, so not even an overclocked 7850 can beat an overclocked 580. The HD7850 is great for $250 since NV has no alternative at that price, but it will still lose to an overclocked 7950/GTX670 by a lot. That's why those cards cost $400.

This is not the same situation as the HD6950, which literally unlocked and overclocked to match a 6970. The HD7850 overclocks 30%, but so does the 7950, and a GTX670 @ 1250MHz will still be 30-40% faster. Actually, the GTX580 is 30% faster at stock. It's simply not possible to make the HD7850 perform like the higher-end cards since they also have overclocking headroom. It's a good card, but it's overhyped from that perspective. In other words, people make it sound like the HD7850 is the next GeForce4 Ti 4200, X800GTO2 or 9500 Pro. It's good, but not that good. Those cards could get within 10% of an overclocked Ti 4600, X800XT and 9700 Pro.



Good choice. Twin Frozr cooler is sweet.
At stock the 7850 has the performance of a GTX 480, enough said.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
In one post you decry consolification and in the next you make it sound like we won't be limited by cross platform development.

Perhaps I didn't clarify the time frame then.

I believe both of those statements are true. Right now PC gaming is held back by current consoles, but I imagine that by 2014 games will be more demanding, because when new consoles launch we'll have new games targeting them. At the same time, the sequels to the Batman, Dirt and Crysis games will run much faster on a GTX670 even if they are console ports, because they will have a DX11 layer (i.e., tessellation).

If you are OK disabling 4xMSAA in deferred game engines or dialling down tessellation, then the visuals of modern games are still no better than Crysis 1 (which is why I am saying PC games are getting held back). Without DOF and tessellation, the HD7850 is perfectly fine, but it is no faster than a 6970/570 at stock. Since the OP intends to keep his card for 3 years, the HD7850 won't cut it. It is probably true that the GTX670 will be outdated in 3 years as well, but given the large performance delta in the latest games, it seems the HD7850 will be a slideshow by then. Personally I'd rather upgrade more frequently, which is why the HD7850 is not a bad card, but even in today's games the HD7850 is much slower than a GTX670. In fact, at stock speeds the HD7850 even struggles to convincingly outperform a GTX570. Only with overclocking does it become a great card.

I honestly think you are giving the 7850 way too much credit. Only with overclocking does it become a great card. At stock speeds it's basically a 6950.

At stock the 7850 has the performance of a GTX 480, enough said.

No, it's still slower. Computerbase has it slower than 6950 by 1% while TPU has it 1% faster than 6950. HD7850 = HD6950 and GTX480 is faster than both of those.
http://www.techpowerup.com/reviews/MSI/HD_7870_HAWK/28.html

Besides, the GTX480 was on sale on Newegg for $175-225 last fall, and then many times for $200-250. So the fact that the HD7850 is nearly as fast for $250 isn't an accomplishment from a performance standpoint. The only impressive part is its power consumption, which is a function of the 28nm node shrink and the removal of AMD's forte, double-precision performance.

It's also interesting how so many people claim that GCN is a compute architecture but HD7850 and 7870 have their double precision compute performance completely neutered.

Double Precision
HD6970 = 676 GFlops
HD6950 = 563 GFlops
HD5870 = 544 GFlops
GTX480 = 168 GFlops
HD7870 = 160 GFlops
HD7850 = 110 GFlops
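
For reference, those figures follow straight from shader count, clock speed and each chip's double-precision rate. Here is a minimal sketch of that arithmetic in Python; the shader counts, clocks and DP ratios are the commonly published ones, so treat it as an illustration rather than vendor-verified math:

Code:
# Theoretical DP throughput = shaders x clock x 2 FLOPs (FMA) x DP rate.
# Specs are the commonly cited ones for each card; the DP rate is the fraction of
# single-precision throughput the consumer part is allowed to run at.
cards = [
    # (name, shaders, clock in GHz, DP rate)
    ("HD6970", 1536, 0.880, 1 / 4),
    ("HD6950", 1408, 0.800, 1 / 4),
    ("HD5870", 1600, 0.850, 1 / 5),
    ("GTX480",  480, 1.401, 1 / 8),   # shader (hot) clock; DP capped on GeForce parts
    ("HD7870", 1280, 1.000, 1 / 16),
    ("HD7850", 1024, 0.860, 1 / 16),
]
for name, shaders, clock_ghz, dp_rate in cards:
    sp_gflops = shaders * clock_ghz * 2      # single precision, FMA = 2 FLOPs per cycle
    print(f"{name}: {sp_gflops * dp_rate:.0f} DP GFlops")

That reproduces the figures above to within rounding.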


Ya, like I said, HD7800 series is overhyped. If AMD shrunk HD6970 to 28nm and doubled the tessellation engines, it would have mopped the floor with HD7870.

Heck, I'm currently running 5760x1200 on my overclocked 7850 and it handles most games (including BF3) rather well; I just have to lay off traditional AA and stick with FXAA.

Agreed. Without 4xMSAA and all the DX11 features maxed, HD7850/6950 2GB/6970 are still decent mid-range cards.
 

Gordon Freemen

Golden Member
May 24, 2012
1,068
0
0
It's because I believe both of those statements are true. Right now PC gaming is held back by current consoles, but I imagine that by 2014 games will be more demanding, because when new consoles launch we'll have new games targeting them. At the same time, the sequels to the Batman, Dirt and Crysis games will run much faster on a GTX670 even if they are console ports, if you want AA + tessellation.

If you are OK disabling 4xMSAA in deferred game engines or dialling down tessellation, then the visuals of modern games are still no better than Crysis 1. In that sense, PC games are held back by consoles and the HD7850 will be just fine. But since the OP intends to keep his card for 3 years, the HD7850 won't cut it. It is probably true that the GTX670 will be outdated in 3 years as well. Personally I'd rather upgrade more frequently, which is why the HD7850 is not a bad card, but even in today's games the HD7850 is much slower than a GTX670. In fact, at stock speeds the HD7850 even struggles to convincingly outperform a GTX570. Only with overclocking does it become a great card.
Crysis 1 looked better than Crysis 2, and the fact is CryEngine 3 is just not quite as good as the purely PC-only CryEngine 2 because, like you say, consoles are holding the PC back. To really max out any and all games a dual-card solution is in order, and even then there are sticklers like Metro 2033 that just never run well because of optimization issues, and I blame the consoles for that.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
I believe you have repeatedly stated that it's better to continually upgrade midrange video cards rather than buy and hold the current champ. You said this in the context of how foolish it is to try to futureproof, because what seem like large performance differences between, say, GTX 570 and 580 will shrink to irrelevance over time as they both go obsolete one after the other. I agree with that.

There is a raging debate over in CPU right now about Intel and one thing that has been expressed is how there are no killer apps and that cloud computing may be a threat to Intel's client side sales, even if they make money on servers.

Where is the killer app for tessellation?

There isn't one.

And there will not be one if trends continue: game devs in a moribund economy feel pressure to make mass-market games that run on more than just high-end rigs. Even Crytek succumbed to that pressure. So with current-gen consoles stuck on DX9, you can expect all games from now through 2013 to be written primarily for DX9. Any DX10+ features are just gravy and will not materially change gameplay, and probably won't materially improve image quality either (see Metro 2033 for an example of a game with settings that barely moved image quality but killed framerates, including unnecessary and near-imperceptible-in-real-time tessellation).

After 2013, next gen consoles may start showing up, and they are supposedly all running AMD GPUs or derivations thereof, with multiple rumors saying HD6670 or the 7-series equivalent thereof. Any guesses as to how gamedevs will respond?

Yes, that is correct: they will either not use tessellation or will use it sparingly. Because they will continue to write games for the lowest reasonable common denominator and maybe tack on a few token extra effects to placate enthusiasts.

So not only is there no killer app for tessellation today, there appears to be no room for heavily tessellated monsters in the vast majority of games in the future, either.

This is why I rate GPU Boost, Adaptive VSYNC, and single-GPU multi-monitor as delivering more benefits than better handling of massive tessellation. You can use GPU Boost today in all games. Ditto AVSYNC. And even many games can be played on multi-monitor. But tessellation that materially improves game experiences today above and beyond what you can get without tessellation? Rare. Maybe even nonexistent. Even if tessellation would help a little, would MASSIVE tessellation really deliver that much more? Probably not. Diminishing marginal returns.

The GTX 670 is superior to the HD 7850 not for reasons of tessellation, but for reasons of raw speed, AVSYNC, GPU Boost, single-GPU multi-monitor, and even CUDA/PhysX. Yes, I think even PhysX matters more than massive tessellation right now, because at least you can get fog effects and such in a few games right now that add more to game experience than I've seen tessellation add, at least so far.

But it also costs 60% more, so it's in a different price bracket than the 7850.

I think it's fair to include overclocking when evaluating the 7850, btw, like it would be fair to include OC headroom when evaluating the GTX 460. Sometimes it makes less sense to do so, but with the 7850 it's a no-brainer to hit 1.05GHz with no overvolting and no change of fan profiles or anything. That's a 22% overclock, and it's brain-dead simple to drag the CCC slider to 1.05GHz. With a little extra voltage you can easily hit higher speeds. (As I recall, you linked to X-bit labs' writeup of how a heavily OC'd 460 could actually beat a 5870 in some games, once. Yet you shrug at the 7850's well-documented, even-greater-than-GTX-460 OC headroom?)

Back to the OP: if he holds onto the GTX 670 for 3 years it will be obsolete before the end of that timeframe, tessellation advantage or no. The 7850 will go obsolete even faster, but it would cost $150 less. That's significant. You could buy 3/4 of an i5-3570K with that, or a nice fast SSD, etc.

If OP were under tighter fiscal constraints, maybe he could wait for the GTX 660 to show up before deciding, but that isn't going to happen for a while. In any case, we talked him out of getting an overpriced 7870 so good job, AT Forums! :)


Gordon Freemen

Golden Member
May 24, 2012
1,068
0
0
I believe you have repeatedly stated that it's better to continually upgrade midrange video cards rather than buy and hold the current champ. You said this in the context of how foolish it is to try to futureproof, because what seem like large performance differences between, say, GTX 570 and 580 will shrink to irrelevance over time as they both go obsolete one after the other. I agree with that.

There is a raging debate over in CPU right now about Intel and one thing that has been expressed is how there are no killer apps and that cloud computing may be a threat to Intel's client side sales, even if they make money on servers.

Where is the killer app for tessellation?

There isn't one.

And there will not be one if trends continue: game devs in a moribund economy feel pressure to make mass-market games that run on more than just high-end rigs. Even Crytek succumbed to that pressure. So with current-gen consoles stuck on DX9, you can expect all games from now through 2013 to be written primarily for DX9. Any DX10+ features are just gravy and will not materially change gameplay, and probably won't materially improve image quality either (see Metro 2033 for an example of a game with settings that barely moved image quality but killed framerates, including unnecessary and near-imperceptible-in-real-time tessellation).

After 2013, next gen consoles may start showing up, and they are supposedly all running AMD GPUs or derivations thereof, with multiple rumors saying HD6670 or the 7-series equivalent thereof. Any guesses as to how gamedevs will respond?

Yes, that is correct: they will either not use tessellation or will use it sparingly. Because they will continue to write games for the lowest reasonable common denominator and maybe tack on a few token extra effects to placate enthusiasts.

So not only is there no killer app for tessellation today, there appears to be no room for heavily tessellated monsters in the vast majority of games in the future, either.

This is why I rate GPU Boost, Adaptive VSYNC, and single-GPU multi-monitor as delivering more benefits than better handling of massive tessellation. You can use GPU Boost today in all games. Ditto AVSYNC. And even many games can be played on multi-monitor. But tessellation that materially improves game experiences today above and beyond what you can get without tessellation? Forget about it.

The GTX 670 is superior to the HD 7850, not for reasons of tessellation, but rather for reasons of raw speed, AVSYNC, GPU Boost, single-GPU multi-monitor, and even CUDA/PhysX. Yes, I think even PhysX matters more than massive tessellation right now, because at least you can get fog effects and such in a few games right now that add more to the game experience than I've seen tessellation add, at least so far.

But it also costs 60% more, so it's in a different price bracket than the 7850.

I think it's fair to include overclocking when evaluating the 7850 btw, like it would be fair to include oc headroom when evaluating the gtx 460. Sometimes it makes less sense to do so, but in the 7850 it's a no-brainer to hit 1.05GHz with no overvolting and no change of fan profiles or anything.

Back to the OP: if he holds onto the GTX 670 for 3 years it will be obsolete by the end of that timeframe, tessellation advantage or no. The 7850 will be even more obsolete, but it would cost $150 less. That's significant. You could buy 3/4 of an i5-3570K with that, or a nice fast SSD, etc. A 22% overclock is brain-dead simple, and with a little extra voltage you can easily hit higher speeds.

If OP were under tighter fiscal constraints, maybe he could wait for the GTX 660 to show up before deciding, but that isn't going to happen for a while. In any case, we talked him out of getting an overpriced 7870 so good job, AT Forums! :)
Agreed, +1. And I will add that the Nintendo Wii U is apparently running the Radeon HD 4850 512MB GPU, or a variant of it, FYI.
 

iCyborg

Golden Member
Aug 8, 2008
1,324
51
91
It's also interesting how so many people claim that GCN is a compute architecture but HD7850 and 7870 have their double precision compute performance completely neutered.

Double Precision
HD6970 = 676 GFlops
HD6950 = 563 GFlops
HD5870 = 544 GFlops
GTX480 = 168 GFlops
HD7870 = 160 GFlops
HD7850 = 110 GFlops
First of all, you're only comparing the 7800 series here and generalizing to all of GCN, even though FP64 performance is reduced for market-segmentation reasons. NVIDIA artificially limits FP64 like this as well.
Second of all, compute is not all about FP64. If you look at the compute benches from the same link, you see 7850 handily beating 6970.

Ya, like I said, HD7800 series is overhyped. If AMD shrunk HD6970 to 28nm and doubled the tessellation engines, it would have mopped the floor with HD7870.
Perhaps in games, and only at mid-range; they'd still need Tahiti for the high end if they want to catch up in compute. And developing a new architecture on a new node plus doing a VLIW4 shrink takes resources.
Anyway, you've given graphs showing how 5870 beats 4890 by a bigger margin than at launch, so 6970 vs 7870 may be different in 1-2 years...
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Crysis 1 looked better than Crysis 2, and the fact is CryEngine 3 is just not quite as good as the purely PC-only CryEngine 2 because, like you say, consoles are holding the PC back. To really max out any and all games a dual-card solution is in order, and even then there are sticklers like Metro 2033 that just never run well because of optimization issues, and I blame the consoles for that.

I agree with everything that you said. :)

I believe you have repeatedly stated that it's better to continually upgrade midrange video cards rather than buy and hold the current champ.

Yup, definitely. Most times I'd recommend getting an HD7850 for $250 and overclocking it for 1080P, saving $150, rolling that $150 over, and upgrading in 2 years. Since the OP stated he didn't feel like upgrading for at least 3 years until his next build, I didn't give that advice.

You said this in the context of how foolish it is to try to futureproof, because what seem like large performance differences between, say, GTX 570 and 580 will shrink to irrelevance over time as they both go obsolete one after the other. I agree with that.

Ya, $100 from a GTX670 to a 680 isn't going to make the 680 last longer. But the HD7850 is significantly slower for $150 less. In that case, I think the $150 extra outlay for the 670 is actually not awful. HD7850 vs. GTX670 (~60% delta at stock, at least a 30-40% delta with overclocking) is not at all like GTX570 vs. 580 (15% delta).
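
Just to put rough numbers on that, here is the price-per-performance arithmetic using the prices and deltas being thrown around in this thread (illustrative figures, not benchmark data):

Code:
# Dollars per unit of relative performance, HD7850 = 1.0x baseline.
# Prices and performance ratios are the ones argued in this thread, not measurements.
cards = {
    "HD7850": (250, 1.00),
    "GTX670": (400, 1.55),   # the "~50-60% faster" claim
    "GTX680": (500, 1.65),   # assumes roughly $100 and ~10% on top of the 670
}
for name, (price_usd, rel_perf) in cards.items():
    print(f"{name}: ${price_usd / rel_perf:.0f} per 7850-equivalent unit of performance")

Both faster cards cost more per unit of performance, which is normal; the argument is just that the 670's jump is big enough to matter over 3 years while the extra $100 for the 680 isn't.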

Where is the killer app for tessellation?

There doesn't need to be one. In this case GTX670 is on average ~ 50-60% faster than a 7850. The performance is there immediately without a killer tessellation app. All I am saying is if more games have tessellation next year, GTX670 will only extend its lead.

Even Crytek succumbed to that pressure. So with current gen consoles stuck on DX9, you can expect all games from now through 2013 to be written primarily for DX9.

All DICE games will be DX11 from now on, and Crytek has already stated Crysis 3 is built with DX11 from the ground up. They are planning on adding dynamic tessellation.

Dirt games = all in DX11.
Civilization V expansion = DX11.
Medal of Honor: Warfighter on the Frostbite 2.0 engine = DX11.
F1 2012 = DX11

I think what you are saying is the DX11 performance hit isn't worth the small improvements in visuals? I'll agree with that, but it won't make the next Battlefield or Crysis games run faster on the 7850, which is the entire reasoning for getting a 670 for someone who plans to use the GPU for 3 years.

Borderlands 2 is going to have PhysX and cloth-simulation effects. I am not saying it's groundbreaking, but these little things separate the high-end GPUs from mid-range. Otherwise most people would still be using an HD6950/GTX570 for 1080P.

Any DX10+ features are just gravy and will not materially change gameplay and probably won't materially improve image quality (see Metro 2033 of an example of a game that had settings that barely moved image quality but killed framerates, including unnecessary and near-imperceptible-in-real-time tessellation).

I am not disagreeing with that per se. However, now you are just presenting an argument for why we don't even need to upgrade from the last generation of GPUs. So do you recommend the HD7850 over the 670 for the OP, then?

Didn't you get an HD7970 over an HD7870? That's about $150 extra, and the performance delta between the HD7970 and 7870 is less than it is between the GTX670 and HD7850. However, you don't think the GTX670 is worth another $150 for 50-60% more performance in the OP's case, given his desire to keep the card for 3 years?

After 2013, next gen consoles may start showing up, and they are supposedly all running AMD GPUs or derivations thereof, with multiple rumors saying HD6670 or the 7-series equivalent thereof. Any guesses as to how gamedevs will respond?

GTX670 is at least 50% faster now. Chances are if games get more advanced (i.e., more tessellation), GTX670 will continue to have a 50%+ lead over the 7850.

Yes, that is correct: they will either not use tessellation or will use it sparingly.

I think this is where we are having a misunderstanding. Even without tessellation, the GTX670 is much faster than the 7850 in games such as Skyrim, Shogun 2 and BF3. If games get more demanding in terms of tessellation, the gap will only widen. So worst case, the GTX670 will be 40-60% faster than the HD7850, and if games get more tessellation-heavy, it could be 70-100% faster.

BF3.png

Shogun.png


So not only is there no killer app for tessellation today, there appears to be no room for heavily tessellated monsters in the vast majority of games in the future, either.

In that case, why not get a $200 HD6950 2GB, turn down all the DX11 features and play in DX9/DX10 with no AA? I am not getting how this helps the OP, who intends to keep the card for 3 years. I already stated most games today barely look better than Crysis 1 without HBAO, tessellation and depth of field, if at all.

But tessellation that materially improves game experiences today above and beyond what you can get without tessellation? Rare. Maybe even nonexistent.

Ok, so if tessellation is not important, then many people on our forum wouldn't have upgraded at all from an HD5850 OC / HD5870 for 1080P. If tessellation is the way forward, you are giving very dangerous advice, since the HD5870 is unusable in today's games because of it. I heard exactly what you are saying now 2 years ago when Fermi came out. People said tessellation was a gimmick and the GTX470/480 would be worthless for games that use it.

46428.png


The GTX 670 is superior to the HD 7850, not for reasons of tessellation,

That's a huge part of it. Look at the GTX670 vs. the HD5870/HD6950/6970/7870/7950 in games that use tessellation and it becomes evident it's a factor. If you turn down tessellation, most modern cards are not that much faster at 1080P than, say, an HD6970/HD5870 in a lot of modern games such as Crysis 2, Batman AC and Civilization V. In fact, all those games are 100% playable on last-gen cards if you lay off tessellation.

I think it's fair to include overclocking when evaluating the 7850 btw, like it would be fair to include oc headroom when evaluating the gtx 460. Sometimes it makes less sense to do so, but in the 7850 it's a no-brainer to hit 1.05GHz with no overvolting and no change of fan profiles or anything.

Yes, and I already included HD7850 OCed results. At 1050MHz, it's barely faster than a GTX570:

bf3-fps-oc.png


Even if you consider an HD7850 @ 1300MHz, that's still only as fast as an HD7950. A GTX670 OC would still smoke it by 30-40%. You are making a case for the HD7850 and saving $150, but a GTX670 OC is as fast as your card overclocked, all for $400! So if you think the GTX670 is poor value at $400, the HD7970 is a horrendous value at $450-500.
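
For what it's worth, the back-of-the-envelope scaling behind claims like "7850 @ 1300MHz is roughly a 7950" looks something like this; the stock FPS and the scaling efficiency are made-up placeholders, since games rarely scale 1:1 with core clock:

Code:
# Crude overclock scaling estimate. Performance rarely scales linearly with core
# clock, so apply an efficiency factor. All numbers here are illustrative assumptions.
stock_clock_mhz = 860
oc_clock_mhz = 1300
scaling_efficiency = 0.8      # assume ~80% of the clock gain shows up as FPS
stock_fps = 40.0              # placeholder stock result in some game

clock_gain = oc_clock_mhz / stock_clock_mhz - 1            # ~51% more clock
estimated_fps = stock_fps * (1 + clock_gain * scaling_efficiency)
print(f"Clock gain: {clock_gain:.0%}, estimated FPS: {estimated_fps:.1f}")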



Look, I realize that you have an HD7970 but I don't think you realize how fast a GTX670 is and you are overhyping how fast HD7850 is at the same time.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
I'm going to try to keep my response relatively brief.

The reason I recommended a 7850 is that the OP asked about a 7870. If he'd be happy with that level of performance then a 7850 + OC would be a natural recommendation to make and would save $100. I didn't know his budget would go up to $400, at which point a 670 is a not-awful bargain in the sense that he wouldn't drop horribly in price/perf. But it'd still be a drop. Personally I would just get the 7850, pocket the $150, and then re-upgrade in a year or two and sell the 7850.

I got the 7970 for some bizarre reasons that don't apply to most people, including the need for compute and also the mini-DisplayPort adapter that comes with most 7970 boxes, since I sold my DP->VGA adapter (I'd had enough of analog) and needed something to replace it in order to regain Eyefinity ability. I also get a significant discount for now, lessening the price delta between all video cards. If I did not have those weird factors, I would not get ANY of the cards discussed. I would instead wait patiently for the GTX 660 and see how that performs, first. Unless NV really castrates the GTX 660, a single GTX 660 might be enough for most games I'm likely to play, at 1080p. And I might be able to play some older games on Medium at 5760x1080 (I play mostly Source games these days).

What I'm saying about tessellation is that low to moderate tessellation may be beneficial, but extreme tessellation a) won't be coded for by most developers due to reasons already discussed, and b) so far has not produced much of a "wow factor" beyond technical demos. It's actually so bad that you could even go DX9 for most games today and not miss much.

This may change in the future, but I think you are way overestimating the speed of that change. Just because a game dev claims to code for DX11 doesn't mean they will use massive tessellation or whatever. Again, for reasons already discussed.

A 670 is a better card than a 7850 for reasons unrelated to tessellation.

Let me put it this way: let's say that a 670 and 7850 were the same price, same performance, same wattage, same OC headroom, etc., and the 7850 even magically had all the NV exclusives like PhysX, the only difference being that the 670 were somewhat better at handling extreme tessellation. How much extra money would you be willing to pay for the 670? I would pay maybe $5. Any more than that and I would rather just keep the money, use less-extreme tessellation, not miss much if anything, and have more money left over for my next GPU upgrade.

You may value it more than $5 and that's fine... and in the case where someone is hellbent on NOT upgrading for 3+ years maybe it's worth a little extra. But we both know that's a silly way to do things and that continual upgrading is better in the long run.

Yes, I do in fact know how fast a 670 is. Overclock vs. overclock, at all but the highest resolutions, it is tied or almost tied with the 7970. At higher resolutions the 7970 pulls away a little. A 7850 isn't as bad as you are making it out to be once it's overclocked. A heavily OC'd 7850 is pretty close to a stock GTX 580, which is the last-gen high-end GPU and no slouch. You can find games in which it really suffers, but in others it does better. Your hypothesis is that the delta between a 7850 and 670 will grow over the next three years, but given consolification's pervasiveness, I'm not holding my breath on that.

What we know today is that it costs 60% more than a 7850 and is not 60% faster, especially once both are overclocked.

We also know that the 680 was supposed to be the midrange part, and in many people's eyes, now is a bad time to buy allegedly high-end parts because they don't move the price/perf needle as much as expected. Thus it's curious to do the whole "buy high end and hold for three years" strategy at this particular moment in time.


RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
If you look at the compute benches from the same link, you see 7850 handily beating 6970.

Ya, I know. In OpenCL, the HD7850 is faster. I was replying in reference to HD7850 vs. 480. AMD took full advantage of the 28nm node shrink and chopped off DP compute performance, so they got a cool-running, small die in the 7850. So it's not really surprising it can go up against a 480 and consume a lot less power, since the chip is far less complex and uses a more advanced process node.

Anyway, you've given graphs showing how 5870 beats 4890 by a bigger margin than at launch, so 6970 vs 7870 may be different in 1-2 years...

Ya, I know in regard to HD7870 vs. 6970: the 6970 is toast when next-gen games arrive. But in regard to the 7850 vs. 670, which are the OP's two choices, I think it becomes a lot more difficult to ignore the performance delta if the OP intends to keep the card for 3 years.
 

iCyborg

Golden Member
Aug 8, 2008
1,324
51
91
Ya, I know. In OpenCL, the HD7850 is faster. I was replying in reference to HD7850 vs. 480. AMD took full advantage of the 28nm node shrink and chopped off DP compute performance, so they got a cool-running, small die in the 7850. So it's not really surprising it can go up against a 480 and consume a lot less power, since the chip is far less complex and uses a more advanced process node.



Ya, I know in regard to HD7870 vs. 6970: the 6970 is toast when next-gen games arrive. But in regard to the 7850 vs. 670, which are the OP's two choices, I think it becomes a lot more difficult to ignore the performance delta if the OP intends to keep the card for 3 years.
With Barts vs. Cayman, DP was chopped off completely.
It depends how much he plays. If he's willing to OC and is more of a casual gamer, then he can save $150. Though with a $350 budget, I agree that the 670 is the way to go unless that $350 is a really hard limit.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I'm going to try to keep my response relatively brief.

You could remove the graphs from my post to make yours a bit shorter! :D

The reason why I recommended a 7850 was because OP asked about a 7870.

He said his budget was up to $350. That's how the discussion of $400 670 started.

Let me put it this way to you: let's say that a 670 and 7850 were the same price, same performance, same wattage, OC headroom, etc., and the 7850 even magically had all the NV exclusives like PhysX, the only difference being that the 670 were somewhat better at handling extreme tessellation. How much extra money would you be willing to pay for the 670? I would pay maybe $5-10.

See that's what you are missing: the differences in architectures and what's happening in modern games today.

I am going to address all of these points below.

(1) Tessellation

For example, Crysis 2 on Ultra + DX11 adds tessellation automatically. Why do you think the HD6900 series and HD7800 series get hammered so much? You can't have the GTX670 and HD7850 perform the same in a "hypothetical scenario" as you mentioned, since there are 2 fundamental differences in the architectures that will ensure Kepler is faster: tessellation and FP16 textures, both part of modern DX11 games.

Those 2 things are a large part of the reason why the GTX670 smokes the 7850 by such a large delta in modern games that use tessellation and FP16 textures (there are other reasons too, such as driver optimizations). If you need more evidence, tessellation and FP16 are huge reasons why the 670 smokes the 7970 in some DX11 games (just as the HD7970's superior bandwidth allows it to lay waste to the 670 in memory-bandwidth-limited situations and with AA at high resolutions, where memory bandwidth is a factor).

You keep dismissing tessellation as a non-factor, but it's part of DX11 games and is partly what makes the GTX600 series so fast in modern games that use it!

Here is the evidence:

How do you explain the GTX670 being 39% faster and the GTX680 being 51% faster than the GTX580 in Crysis 2? It sure isn't related to memory bandwidth or pixel performance, of which the GTX580 has plenty vs. the 670/680. It also cannot be explained by texture performance, since that wouldn't explain why the GTX670 is walloping the HD7970, which has gobs of texture performance.

gigabyte_gtx670w_crysis21920.jpg


The same if we revisit an older game such as Lost Planet 2
gigabyte_gtx670w_lp21920.jpg


So that's Tessellation covered.

(2) FP16 textures (64-bit textures)

I also noted another key advantage of the Kepler architecture: FP16 texture performance. You know which games use FP16 textures? Dirt games, for example, based on the EGO engine:

gigabyte_gtx670w_dirt31920.jpg


Again in all 3 of these cases, HD7850 is horrendously outclassed by the 670 because of 2 things that Kepler architecture excels at:

1) Tessellation performance
2) FP16 next generation texture performance.

Now before you call it a fluke, I'll even prove it to you using older cards.
Look at the specs of GTX570 vs. GTX480.
- GTX480 has more memory bandwidth, more pixel & shader performance, more VRAM, and GTX570 has a tiny texture fill-rate advantage.

Now can you explain to me how in the world a GTX570 can beat a GTX480 by a whopping 5 FPS in Dirt 2 at 2560x1600? This should not happen under any circumstances based on their specs.

Dirt2_03.png


Do you know why? Because of the FP16 enhancements of GF110 over GF100. GF110's texture units filter FP16 at full speed (4 texels per clock) vs. half speed (2 per clock) on GF100; it says so right in the GF110 architecture breakdown. Dirt games use the EGO engine, which uses FP16 textures. Dirt 2 also has tessellation, which GF110 handles better than GF100.
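
To show what that spec difference means in raw numbers, here is a rough FP16 filtering-rate calculation. The TMU counts and clocks are the published ones for these two cards, and the half-rate vs. full-rate FP16 behaviour is the point being illustrated, so take it as a sketch rather than a measured result:

Code:
# Peak FP16 texture filtering rate = TMUs x core clock x FP16 rate multiplier.
# GF100 (GTX480) filters FP16 at half speed; GF110 (GTX570) filters it at full speed.
cards = [
    # (name, TMUs, core clock in GHz, FP16 rate multiplier)
    ("GTX480 (GF100)", 60, 0.700, 0.5),
    ("GTX570 (GF110)", 60, 0.732, 1.0),
]
for name, tmus, clock_ghz, fp16_rate in cards:
    print(f"{name}: {tmus * clock_ghz * fp16_rate:.1f} GTexels/s FP16 filtering")

So despite the 480's paper advantages elsewhere, the 570 ends up with roughly double the peak FP16 filtering rate, which is the kind of gap an FP16-heavy engine like EGO can expose.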

GF110 also has improved tessellation performance over GF100. "NVIDIA has improved the efficiency of the Z-cull units in their raster engine, allowing them to retire additional pixels that were not caught in the previous iteration of their Z-cull unit. The Z-cull unit primarily serves to improve their tessellation performance by allowing NVIDIA to better reject pixels on small triangles." ~ Source

Ok so let's revisit tessellation with older cards in Metro 2033:

Metro_02.png


It should be impossible for GTX570 to beat GTX480 by that much. The answer? Improved Tessellation in GF110.

(3) Deferred MSAA (Frostbite 2.0).

Ok, so what about BF3? I think there is an explanation for that too. AMD's Cayman and Cypress took a larger performance penalty in deferred-MSAA game engines than the Fermi architecture did. This was investigated and shown by Bit-Tech using Battlefield 3. Architecturally, I haven't been able to find an explanation, but it could just be that AMD stopped optimizing for MSAA as much due to MLAA; I don't know for sure. It looks like this hasn't changed much with GCN vs. Kepler, which is why Kepler wins against the 7970 in BF3.

In summary, it is my view that Kepler architecture has all 3 facets that are necessary for next generation games covered:

1. Tessellation
2. FP16 textures
3. Deferred MSAA

All 3 of these are going to be trends for next-generation games. This is why it's very likely that AMD is working on improving at least points #1 and #2 with GCN 2.0 / Enhanced, because in its current state GCN will fall apart rather quickly in next-gen games.

Therefore, if I were betting, I'd say the HD7850's performance will get much worse a lot sooner, since it lags in all 3 of those areas. No amount of overclocking will save the 7850. Whether or not that extra performance is worth $150 depends on the person and his/her budget and upgrade frequency.

I guess it comes down to when you think next-generation games will have more tessellation and higher-resolution FP16 textures. I think it's already happening, which is why I think the performance delta between the 7850 and 670 will only grow larger.

HD7850 is already falling apart in Dirt Showdown (EGO engine), while HD7950 can't even outperform the 580. Notice GTX570 again outperforming the 480?

dirt%20s%201920.png


In conclusion,

People on our forum are often quick to jump to the notion of "NV-biased games". However, if you dissect the architecture and look under the hood of the technology, it's not as simple as "NV paid more $ to make this game run faster". It sure appears to me that NV made Kepler architecture a lot more advanced for games than GCN is in its current form.

Theoretical tests even show that the Kepler architecture does better in tessellation and FP16 textures, and in my opinion that's a large part of the reason why it's so fast in modern games despite its 256-bit memory bus.

tessmark-x32.gif

unigine.gif

b3d-filter-fp16.gif


Just my 2 cents. Feel free to present an opposing case.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
There is no question Kepler is the more advanced arch even if software may take years to catch up to it. You happen to think game devs will suddenly place priority on PC gaming. I'm much less optimistic and think they will continue to focus on consoles and slap a few extra effects on the PC version and call it a day, no matter what their PR people claim about DX11.

We were talking about tessellation, but if you want to bring MSAA into it: HWC is down for maintenance, but they usually have charts with and without MSAA. Check out the 4x MSAA tests HardOCP occasionally does too. It's not as big of a problem as you are making it out to be; it is not the main reason the 680 wins in BF3. Tessellation is still not a real factor, and I wouldn't pay much extra for it. Even if AMD is brute-forcing its way (OC vs. OC it's very close to the GTX 680, see HardOCP's apples-to-apples for instance), it's surprisingly effective, even if it's costing the company a fortune to compete that way.

For the third time: I know the OP said $350, but look at it the other way: he apparently felt 7870-level performance was going to be good enough, so that opens it up to the 7850 as well and saves $150 for a more worthwhile upgrade than the current crop of 28nm GPUs. So you can look at it both ways.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
You happen to think game devs will suddenly place priority on PC gaming. I'm much less optimistic and think they will continue to focus on consoles and slap a few extra effects on the PC version and call it a day, no matter what their PR people claim about DX11.

It's not about making PC games a priority. They don't have to make Crysis-style leaps every month. But it's already happening; see all the Dirt games, Crysis, Batman, etc., and GTX670 vs. 7850 today. Dirt Showdown, as I predicted earlier this month, will continue to show Kepler in better form than GCN.

4x MSAA tests that they occasionally do too. It's not as big of a problem as you are making it out to be; it is not the main reason why 680 wins in BF3.

It may not be the main reason but it's for sure one of the reasons. NV cards handle 4x MSAA in deferred game engines better.

Here is GTX580 losing to HD7870 OC with FXAA:

44673.png


and now GTX580 is winning when you turn on 4xMSAA.

44672.png


FXAA - GTX580 is 13% faster than HD7870
MSAA - GTX580 is 19.3% faster than HD7870

Nothing changed except 4xMSAA. AMD's GCN architecture takes roughly a 6-percentage-point larger hit from MSAA than Fermi, never mind Kepler.
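
To spell out the arithmetic behind that number, here is the calculation with placeholder frame rates chosen to roughly reproduce the quoted deltas (the 13% and 19.3% figures are the ones quoted above; the absolute FPS values are made up):

Code:
# How much extra does 4xMSAA cost the HD7870 relative to the GTX580?
# Hypothetical frame rates that roughly reproduce the quoted percentage gaps.
gtx580_fxaa, hd7870_fxaa = 56.5, 50.0     # 580 about 13% faster with FXAA
gtx580_msaa, hd7870_msaa = 47.7, 40.0     # 580 about 19.3% faster with 4xMSAA

gap_fxaa = gtx580_fxaa / hd7870_fxaa - 1                  # ~0.13
gap_msaa = gtx580_msaa / hd7870_msaa - 1                  # ~0.19
points_wider = (gap_msaa - gap_fxaa) * 100                # ~6 percentage points
extra_relative_hit = (1 + gap_msaa) / (1 + gap_fxaa) - 1  # ~5.5% larger relative loss
print(f"Gap widens by {points_wider:.1f} points; extra relative hit {extra_relative_hit:.1%}")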

Tess still not a real factor. I wouldn't pay much extra for it.

Look at Batman AC, Crysis 2, Civ5. Are you seriously arguing that tessellation is NOT a factor?

Civ V relies on compute shaders for geometry processing (i.e., tessellation). The Cayman architecture is weak in this specific area. There is no fairy dust; it's a deficiency in tessellation performance.
44632.png


How is HD7850 38% faster than HD6950?
How is GTX570 faster than HD6970 by 40%?

Fairy dust?

Exact same thing that clobbers GTX200 series in Civ5 and why GTX460 can beat HD5870:

1920.png


The fact that you keep dismissing tessellation makes me seriously question if you even play modern PC games. It's a factor whether you want to believe it or not.

blacklighttessellation.jpg

br%201920.png


Tessellation is why Fermi will pummel Cayman and Cypress in DX11 games that use it and why HD7850 will suffer the exact same fate against the 670 (and it already does but you ignored the massive performance delta between 670 and 7850 in Batman AC, Crysis 2, Lost Planet 2, etc.).

If you don't care to use tessellation in games, that's another story. Ignoring it is not helping HD5870 users, for example. It also does little to strengthen your viewpoint that the GTX670 is not worth spending extra money on over the HD7850 when it's already smoking it by 50-60%.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
MSAA - GTX580 is 19.3% faster than HD7870

Tessellation is why Fermi will pummel Cayman and Cypress in DX11 and why HD7850 will suffer the exact same fate against the 670 (and it already does but you ignored the massive performance delta between 670 and 7850 in Batman AC, Crysis, Lost Planet 2, etc.).

If you don't care to use tessellation in games, that's another story. Ignoring it is not helping HD5870 users for example.

I prefer skinny GPUs, thanks.

You keep saying there is no point in upgrading from 69xx if you don't want better tessellation.

First of all, there are power/noise/heat and VRAM differences, which is what makes it more worthwhile; that's why I got a 28nm GPU and a 22nm CPU even though the last-gen stuff was quite good.

Secondly, yes, there isn't a compelling reason for people with 40nm GPUs to upgrade right now. It has little to do with tessellation and much to do with current price levels.

And yes, if tessellation is hurting your fps too much, turn it off. What is it really adding? It doesn't add anything most of the time. It's just a box to check so games can claim they used DX11. Tell me with a straight face that you could even tell if tessellation were on or off in-game in Metro 2033: http://www.overclock.net/t/690441/pcgh-metro-2033-direct-x11-comparison-screenshots

Look at how gamedevs have implemented tessellation so far in other games: http://www.xbitlabs.com/articles/graphics/display/hardware-tesselation_5.html

Instead of posting tons of graphs of benches (seriously, using Civ V at framerates over 60fps, as if Civ V were some shooter...?), how about finding before-and-after screenshots of when tessellation actually mattered in a popular game?

Do you honestly in your heart of hearts believe that we will see tessellation used in a game-changing way anytime before the next-gen consoles come out?

But fine, you are insistent that tessellation matters even if I can barely tell the difference. Even if we take your highly sensitive eyes to be standard, guess what? AMD can do tessellation just fine when its geometry logic is clocked high enough. I highly doubt tessellation is the limiting factor.

Let's look at AT's very own tessellation analysis, shall we? I'll use the GTX 670 review since that includes the top four GPUs: http://www.anandtech.com/show/5818/nvidia-geforce-gtx-670-review-feat-evga/16

If you want to see the 7850 it's here, but at stock clocks of course its tess power sucks. Clock it up to 1GHz and it will tessellate as well as a 7870. Even at its pitiful stock clock of 860MHz, though, the 7850 is still out-tessellating the GTX 580@stock.

46454.png


46455.png


Also see: http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review/12 which says:

"Of course both of these benchmarks are synthetic and real world performance can (and will) differ, but it does prove that AMD’s improvements in tessellation efficiency really do matter. Even though the GTX 580 can push up to 8 triangles/clock, it looks like AMD can achieve similar-to-better tessellation performance in many situations with their Southern Islands geometry pipeline at only 2 triangles/clock.

Though with that said, we’re still waiting to see the “killer app” for tessellation in order to see just how much tessellation is actually necessary. Current games (even BF3) are DX10 games with tessellation added as an extra instead of being a fundamental part of the rendering pipeline. There are a wide range of games from BF3 to HAWX 2 using tessellation to greatly different degrees and none of them really answer the question of how much tessellation is actually necessary. Both AMD and NVIDIA have made tessellation performance a big part of their marketing pushes, so there’s a serious question over whether games will be able to utilize that much geometry performance, or if AMD and NVIDIA are in another synthetic numbers war."

Note that the 7870 had better tess performance than the 7950, at least at stock, due to higher clocks, yet LOST in Unigine anyway. What this means is that you can have INFINITE tessellation power, but if that's not the bottleneck, then that's not the bottleneck, and apparently in Unigine, tessellation is not the bottleneck. What IS the bottleneck, then? I don't know, but I've heard some people complain about AMD not increasing the number of ROPs.

If you believe AT's analysis was wrong and that AMD's GPUs are so horrible at tessellation that it is the bottleneck, what are you expecting, exactly? Games that will bring down a 7970 due to tessellation but leave 680 standing? It might happen eventually, but I think that future is too far to worry about and that by the time that happens, both cards would be obsolete. If you disagree, then let's agree to disagree about the timeframe.

By the way, here are oc vs oc results for the top four GPUs today. Note that BF3 and Deus Ex have tessellation. Note also how the 7970 performs vs the 680 (basically a tie in BF3 and a win in Deus Ex): http://hardocp.com/article/2012/05/14/geforce_680_670_vs_radeon_7970_7950_gaming_perf/3

BTW, John Carmack's opinion of tessellation (note that he thinks even Nvidia's tessellation ability isn't quite there yet): "Tessellation is one of those things that bolting it on after the fact is not going to do anything for anybody, really. It’s a feature that you go up and look at, specifically to look at the feature you saw on the bullet point rather than something that impacts the game experience. But if you take it into account from your very early design, and this means how you create the models, how you process the data, how you decimate to your final distribution form, and where you filter things, all of these very early decisions (which we definitely did not on this generation) I think tessellation has some value now. I think it’s interesting that there is a no-man’s land, and we are right now in polygon density levels at a no-man’s land for tessellation because tessellation is at it’s best when doing a RenderMan like thing going down to micro-polygon levels. Current generation graphics hardware really kind of falls apart at the tiny levels because everything is built around dealing with quads of texels so you can get derivatives for you texture mapping on there. You always deal with four pixels, and it gets worse when you turn on multi-sample anti-aliasing (AA) where in many cases if you do tessellate down to micro-polygon sizes, the fragment processor may be operating at less than 10% of its peak efficiency. When people do tessellation right now, what it gets you is smoother things that approach curves. You can go ahead and have the curve of a skull, or the curve of a sphere. Tessellation will do a great job of that right now. It does not do a good job at the level of detail that we currently capture with normal maps. You know, the tiny little bumps in pores and dimples in pebbles. Tessellation is not very good at doing that right now because that is a pixel level, fragment level, amount of detail, and while you can crank them up (although current tessellation is kind of a pain to use because of the fixed buffer sizes on the input and output [hardware]) it is a significant amount of effort to set an engine up to do that down to an arbitrary level of detail. Current hardware is not really quite fast enough to do that down to the micro-polygon level."

http://www.pcper.com/reviews/Editor...-Graphics-Ray-Tracing-Voxels-and-more/Transcr

If I'm interpreting Carmack right, he's saying that the really impactful level of tessellation he desires is beyond even Kepler right now.

Anyway, we've been over the economics of game development in this forum ad infinitum so you know that developers feel pressured to code to the lowest reasonable common denominator: consoles. Until consoles can tessellate, don't hold your breath on tessellation being anything more than a tacked on feature that doesn't do much, as Carmack noted.

Better MSAA performance, better multi-monitor support, a-vsync, etc. at least impact games in real ways today. Hardware-accelerated physics and tessellation not so much, because consoles don't support them. Hardware is useless without software.

NV knows this, hence TWIMTBP and such, but even they can only do so much against the anvil of DX9 consoles weighing down graphics progress in the game development industry. So far NV has paid some devs to use PhysX for stuff like fog or papers (why so many papers all over the place? To show off PhysX, not because it necessarily makes sense) and to over-tessellate flat objects in Crysis 2 and stuff like that. Big deal.

But like I said, feel free to post with/without screenshots of games that use tessellation. Hopefully at least one game out there will actually use it right and not just tack it on afterwards which doesn't do much. So far, though, I think all the games I'm aware of that use tessellation do it in the lame way that Carmack was talking about (bolted on).
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Whether or not tessellation is worth the huge performance penalty isn't even a part of this thread; you started going in that direction. What's next? The DX11 codepath itself often adds a huge performance penalty over DX9/10 even without tessellation, so what do you propose, that we play games in DX9 now?

If we are going down that slippery slope, soft shadows, 4xMSAA, Ultra vs. High visual settings in many games often don't add much value and yet have a huge performance hit. Next thing you know we might as well play on consoles.

Catwoman_Vine_TessON.jpg

Tess-off-normal-high.jpg

batman_tesselation.jpg


This is NV's internal guidance for Batman AC based on AA, PhysX and DX11/tessellation features:

GPU-Chart1.jpg


Basically you just presented a case that people who are getting a GTX670/680/7970 are totally wasting their money, since a $200 GTX560 Ti is fast enough to play the game with features turned down.

I find it interesting that you often defend the price premium for HD7970 over GTX670 but you clearly aren't seeing how GTX670 is worth $150 over HD7850 despite offering 50-60% more performance.

When most people buy a GPU, they load up the game, let the game load optimized defaults and/or manually push all the settings to the max to save time. If you are going to make the case that AA, PhysX, DX11, tessellation, etc. are all not worth turning on since most of those things add marginal benefit, then most of us would be perfectly fine gaming on a $150 HD6870.

Here is an animated comparison:

Tessellation in Batman AC
BACTessellationComparison1.gif


Tessellation in Crysis 2
PNPeH.gif

Skknu.gif

crysis22011062516405805.jpg


Notice the walls are not flat:
vzNib.jpg
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
You also said that worse performance with 4xMSAA is not that big of a deal.

91_MSAA_Off.jpg

Those jaggies on the buildings and the poles don't bother you?

93_MSAA_4x.jpg
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
RS, sorry for editing my post so extensively, but I added extra info on tessellation. As you can see, I don't think the tessellation engine is what is bottlenecking AMD cards in certain games, judging by the synthetic tessellation tests where even a stock-clocked 860MHz 7850 beats a 580GTX and isn't THAT far behind a 670GTX. Clock it up to 1050MHz and it will perform better.

And I only started talking so much about tessellation because you kept repeating over and over how Southern Islands architecture was way behind in tessellation, when a) that doesn't appear to be true, and b) imho it doesn't even add much so frankly I would turn it off myself, even if others would leave it on. Just imho though.

In summary, it is my view that Kepler architecture has all 3 facets that are necessary for next generation games covered:

1. Tessellation
2. FP16 textures
3. Deferred MSAA

That is what you wrote, among other things. I'll give you number 3, I don't know enough to comment about number 2 right now, and see my previous post regarding your number 1.

I don't think I said MSAA doesn't matter. If I did, sorry, but I am pretty darn sure I didn't. I did say that I didn't think the performance difference was a big deal and that 4x MSAA was not the main reason why (stock) 7970 loses to (stock) 680. Repeat: I did not say that MSAA was not a big deal; I was saying that I didn't think the %performance hit was as massive as you were making it out to be, relative to Kepler, and I wish the HWC site were up as they do that sort of MSAA on/off test all the time. I've actually defended MSAA against FXAA on this forum, as I prefer MSAA.

I find it interesting that you often defend the price premium for the HD7970 over the GTX670, but you clearly aren't seeing how the GTX670 is worth $150 over the HD7850 when it offers 50-60% more performance.

Whoa now, I think we all know that price/perf drops as you climb up the price ladder. OP apparently felt comfortable enough with a 7870 level of perf to even float the idea out there, which is why I suggested an OC'd 7850 instead, since that will be reasonably close to an OC'd 7870. That's the way I looked at it. The way you and others looked at it was this: he has $350, so why not spend $50 more and get a 670 that will easily beat a 7870? I think both are valid ways of looking at it, and I have repeated this line of reasoning at least three and probably FOUR times already in this thread! Do you seriously need me to repeat it again?

Obviously if his budget is flexible enough to stretch to $400 then sure, go for the GTX 670, especially if he really wants to buy-and-hold. But I don't think the way I looked at the problem was wrong. In fact, I encourage you to re-read the first post in this thread. If anything, my response answered the call of the question far more directly than your guys' upselling him to a GTX 670, so I don't know why you're jumping down my throat over my recommendation that OP get the 7850 and OC it. Just because he has the budget to buy a faster card doesn't mean it is necessarily the best use of funds. I thought you and I agreed that continually upgrading midrange was best, hence my suggestion.

Look, if you want to criticize SI architecture be my guest. I will be the first to say it's less efficient, at least for gaming, and maybe even for HPC (not sure yet, let's see GK110 first). But to rip SI for having worse tessellation when that does not appear to be the case, I don't know what to say except that you must be smarter than me, and Ryan. Maybe PM Ryan to tell him how SI has worse tessellation performance than Kepler and have him write an article about it.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
RS, sorry for editing my post so extensively but I added extra info on tessellation. As you can see, I don't think the tessellation engine is what is bottlenecking AMD cards in certain games, judging by the synthetic tessellation tests where even a stock-clocked 860MHz 7850 beats a GTX580 and isn't THAT far behind a GTX670. Clock it up to 1050MHz and it will perform better.

Why are you comparing HD7850 to 1st generation Fermi? Regardless, GTX580 keeps up with HD7950 so it would demolish a 7850 in pretty much all modern DX11 games with tessellation, barring Anno 2070.

And I only started talking so much about tessellation because you kept repeating over and over how Southern Islands architecture was way behind in tessellation, when a) that doesn't appear to be true, and b) imho it doesn't even add much so frankly I would turn it off myself, even if others would leave it on. Just imho though.

It is true. Southern Islands is a full generation behind Kepler in tessellation performance (not to mention GK104 is a mid-range chip in the Kepler family and it already has 2x the tessellation performance of the HD7970). It's reflected in basically all modern games that use tessellation, except in situations where some other factor is the more limiting component for the GTX670/680 series, such as memory bandwidth, pixel fill-rate or AA performance. Why do you think the GTX670/680 are winning in more recent games outside of AvP and Metro 2033? It's not magic. Moving forward, the GTX670 will slaughter an HD7850 in most games that use tessellation and FP16 textures (so all the Dirt games, Crysis games, Batman games, etc.). And the more next-gen games use tessellation, the wider that gap will become.

You also said Tessellation is barely noticeable. Check out this thread with Crysis 2 screenshots with tessellation.

I still think you are missing the entire premise of this thread: HD7850 vs. 670. At this point you have deviated SO far from that original premise. Your entire view was that it wasn't worth spending $150 over the 7850 for the 670. I have shown countless benchmarks and tests why that's simply not true for someone who intends to keep the GPU for 3 years. I have also shown that in future games all signs point to the GTX670 destroying the HD7850, in part because the HD7850 lacks current-generation FP16 and tessellation performance and takes a larger performance hit with 4xMSAA in deferred game engines such as Frostbite 2.0.

Furthermore, on average out of the box GTX670 = HD7970. So really until AMD has a competitor for $400, or drops HD7870 far below $350, it's a moot point. If you think GTX670 is poor value, then present your view and let the OP decide.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
I really do not like your attitude where you tell me I'm missing the point and that I'm wrong even when I show you evidence that tessellation is not the limiting factor, and point out that the OP never even mentioned the GTX 670. (And if you bothered to look at AT's benches you'd see that I was comparing the 7850 to the GTX580 because of their chart layout; the two cards aren't listed in the same chart, but if you compare across both charts you can see that the 7850 isn't THAT far behind a GTX 670, and that's at stock 7850 speeds, which are pretty much universally acknowledged as being ridiculously low given the available headroom. Or are you going to start arguing THAT, too?)

http://www.anandtech.com/show/5625/...-7850-review-rounding-out-southern-islands/16

http://www.anandtech.com/show/5818/nvidia-geforce-gtx-670-review-feat-evga/16

Detail Tessellation Sample - Max (frames per second, higher is better)

7970: 2220
680: 2112
670: 2035
7870: 2010
7950: 1990
7850: 1831
580: 1793
570: 1432

Note that the two GPUs with the biggest OC headrooms are the 7970 and 7850. So oc vs. oc the 7850 can probably get awfully close to a 7870, and by extension, a 670. You made it sound like it was a cataclysmic gulf between the two of them or something.
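To make the "isn't THAT far behind" point concrete, here's a quick sketch that turns the scores above into relative gaps. The numbers are just the ones quoted from the two AnandTech charts, nothing more:

```python
# Relative gaps implied by the Detail Tessellation Sample scores listed above
scores = {
    "7970": 2220, "680": 2112, "670": 2035, "7870": 2010,
    "7950": 1990, "7850": 1831, "580": 1793, "570": 1432,
}

baseline = scores["7850"]
for gpu, fps in scores.items():
    print(f"{gpu:>4}: {fps / baseline - 1:+6.1%} vs. a stock 7850")
```

That puts the stock 670 about 11% ahead of a stock 7850 in this particular synthetic test, before any overclocking on either side.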

I'm missing the point of this thread? Re-read OP's first post. Re-read the post I made prior to this one, where I tacked on a response to your continual insistence that I am somehow wrong for suggesting OP buy a 7850 and overclock it (and then sell and re-buy; for crying out loud, I thought we both agreed that continually upgrading midrange was best in the long run). Note that NOWHERE in OP's post did he mention the GTX 670. He specifically asked for advice on whether to buy a 7850 or 7870. You guys saw the $350 and brought up the GTX 670, but come on, just because someone has a large budget doesn't mean they should spend every last penny of it, and then some ($50 more in this case). I recommended a more conservative approach.

Video cards depreciate very quickly, but it's not linear. The highest-end cards depreciate fastest most of the time, whereas the midrange parts have a somewhat less steep fall. Hence the continually-upgrade-midrange strategy that I *thought* you and I were on the same page about. OC vs. OC, the GTX 670 is not 60% faster than the 7850, even though it costs 60% more. OP only later added that he would keep it for 3 years, but given how wishy-washy he was in the first post I don't know how firm that is, and I still think the continual-midrange path is most effective in the long run. I also don't think a GTX 670 is going to futureproof him for *that* much longer than an HD7850, and a case can be made for either one.

What seems to infuriate you is that the 7850 gets a lot of hype from a few vocal people. But read what I wrote. It's hardly hype; I'm just saying that it wins by default in its price bracket. Since when is saying a card wins by default "overhype"? You are attacking the wrong person here. I don't think it's THAT good, but I don't think it's a bad card for the price, either. The closest recent analogy to it is the GTX 460 1GB, since both rely a lot on overclocking to unlock their potential and were priced at $220/$250 at launch.

As for tessellation: okay I get it. You are obviously smarter than Ryan and me. You should write to Ryan demanding that he re-run his benchmarks until GTX 680 "smashes" 7970 in his tessellation benchmark. :eyeroll:

Why are you even bringing 7970s into the equation? Of course it is worse value, as are virtually all the highest-end GPUs of their time. Are you just saying that because I bought a 7970? Dude, you KNOW I bought it in part for non-gaming reasons. Also, I wanted the DP adapter, and the price I paid was okay. If you count the value of the DP cable as $20, then I paid something like $385 for my 7970. Sure, I could have gotten a GTX 670 for even less (about $370), but frankly, if I only needed a card for gaming, I wouldn't get ANY of those crappy values (and by that I don't mean literally crappy, just disappointing given the pricing we've seen for previous recent launches). I'd probably just wait for the GTX 660 to see if it would be good enough, or at least whether it would drop prices on similarly priced cards.

Why are you comparing HD7850 to 1st generation Fermi? Regardless, GTX580 keeps up with HD7950 so it would demolish a 7850 in pretty much all modern DX11 games with tessellation, barring Anno 2070.

It is 100% true. You are not seeing it. I can't help you. Southern Islands is a full generation behind Kepler in tessellation performance. It's reflected in basically all modern games that use tessellation, except in situations where some other factor is the more limiting component for the GTX670/680 series, such as memory bandwidth, pixel fill-rate or AA performance. Moving forward, the GTX670/680 will beat the HD7950/7970 in 95% of games that use tessellation. And the more they use tessellation, the wider the gap will become.

You also said Tessellation is barely noticeable. Check out this thread with Crysis 2 screenshots with tessellation.

I still think you are missing the entire premise of this thread: HD7850 vs. 670. At this point you have deviated SO far from that original premise. Your entire view was that it wasn't worth spending $150 over the 7850 for the 670. I have shown countless benchmarks and tests why that's simply not true. I have also shown that in future games all signs point to the GTX670 destroying the HD7850, in part because the HD7850 lacks proper FP16 and tessellation performance. The GTX670 is the better card in every way and it's worth every penny over the 7850 for today's and tomorrow's games, since they will use more advanced textures and even more tessellation.

I am not interested in discussing the tessellation performance of the HD7870, HD7950 or HD7970 at this point. It's simply NOWHERE near GTX670/680 levels. If you don't want to accept that, I'll just move on. Regardless, none of those cards have anything to do with OP's choices.

Furthermore, in almost all modern DX11 games, GTX670 beats 7950. So really until AMD has a competitor for $400, it's a moot point.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
You guys saw the $350 and brought up the GTX 670, but come on, just because someone has a large budget doesn't mean they should spend every last penny of it, and then some ($50 more in this case).

So you think it's not worth spending $50 to step up from an HD7870 to a GTX670, and yet you bought an HD7970 over the 670 even though it's only 5-10% faster in games? You realize the GTX670 is 25-30% faster than the HD7870 for just $50. :D

Your conservative approach didn't consider that OP stated he intended to keep the machine for 3 years without upgrading.

Also, like I said you are way over-estimating the performance of the HD7850. It seems to be a trend on our forum.

Stop looking at 60% more performance vs. 60% higher price for a second and consider what you are recommending.

BF3 1080P
7850 = 36.2 fps (completely unplayable)
GTX670 = 61 fps (very good)

Crysis 2 1080P
7850 = 28.6 fps (completely unplayable)
GTX670 = 48 fps (not great but 20 fps faster)

Dirt 3 1080P
7850 = 50.9 fps (racing games are much better with 60 fps avg)
GTX670 = 89 fps (perfect)

TrackMania 1080P
7850 = 50 fps (again too slow for a racing game)
GTX670 = 96 fps (nearly 2x faster)

Batman AC 1080P
7850 = 31.9 fps (console level!! can't use PhysX)
GTX670 = 53 fps

Dragon Age 2 1080P
7850 = 25.4 (completely unplayable)
GTX670 = 48.3

Hard Reset 1080P
7850 = 51.6 (not good enough for a fast paced FPS)
670 = 88.4

You also said you play Source games.
[Benchmark chart]


etc. etc.

It's not just $150 extra for 50-60% more performance. In this specific instance, it's night and day when we start looking at actual frame rates. HD7850 is just 1% faster than HD6950 and needs a 30% overclock to reach GTX580. It is good for the $, but nowhere near GTX670.
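To put those frame rates together, here is a quick sketch that averages the numbers listed above and divides by the $250/$400 prices used in this thread. It is only arithmetic on these specific titles, not a full benchmark suite:

```python
# Rough averages and cost-per-frame from the 1080p numbers quoted above.
# The $250 / $400 figures are the street prices being used in this thread.
fps_1080p = {               # game: (HD 7850, GTX 670)
    "BF3":          (36.2, 61.0),
    "Crysis 2":     (28.6, 48.0),
    "Dirt 3":       (50.9, 89.0),
    "TrackMania":   (50.0, 96.0),
    "Batman AC":    (31.9, 53.0),
    "Dragon Age 2": (25.4, 48.3),
    "Hard Reset":   (51.6, 88.4),
}

avg_7850 = sum(a for a, _ in fps_1080p.values()) / len(fps_1080p)
avg_670  = sum(b for _, b in fps_1080p.values()) / len(fps_1080p)

print(f"HD 7850: {avg_7850:.1f} fps average -> ${250 / avg_7850:.2f} per frame")
print(f"GTX 670: {avg_670:.1f} fps average -> ${400 / avg_670:.2f} per frame")
print(f"GTX 670 advantage across these titles: {avg_670 / avg_7850 - 1:.0%}")
```

In these particular games the 670 actually comes out ahead even on a straight dollars-per-frame basis, on top of being the difference between playable and unplayable in several of them.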
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
RS, enough. Seriously. ENOUGH. Re-read OP's first post. Then read my posts. Tell me where in this thread I told him to get a 7850 over a 670. You can't. Why? Because I only commented on the "overhyped" remark about the 7850. In his case of buy-and-hold, which he only talked about later on, he's probably better off with the GTX 670 if he had to get something right now. I only noted the price/perf comparison with the GTX670 in the context of the overhype; I wasn't saying he should surely go for the 7850 instead.

This is like the third time I've told you now: I got the 7970 not for games, though faster performance in games is a nice side effect. At this point I am this close to permanently ignoring you for trolling me. I've had to repeat the same things like 3-4 times to you on various topics ranging from tessellation to why I bought what I bought to correcting you on what OP's first post actually said when you so condescendingly told me that the premise of this thread is 7850 vs 670. It turns out that he was more flexible than he said about budget, but in any case, show me where I recommended a 7850 over a 670 for OP in particular. Because I didn't.

Both of us write freakin' novel-length posts but I don't think you are even reading mine, and it's getting annoying. You haven't even commented at all about Ryan's tess benchmarks!
 

Zanovar

Diamond Member
Jan 21, 2011
3,446
232
106
Wow, if there ever were two posters who could have made a couple of points in a couple of lines. Sheesh.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Wow, if there ever were two posters who could have made a couple of points in a couple of lines. Sheesh.

This is what happens when you have two fast typists who edit their posts a lot, so that they end up missing what each other wrote. However, I've repeated myself so many times on so many topics that at this point I'm starting to think he's messing with me; he hasn't even commented on AnandTech's own tessellation benchmarks, for instance. I am going to bed. I've had enough of this nonsense.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Both of us write freakin' novel-length posts but I don't think you are even reading mine, and it's getting annoying. You haven't even commented at all about Ryan's tess benchmarks!

I read your posts. Price/performance for the 7850 is a good metric against cards such as the 560 Ti 448 and 570, but in practice, for keeping a GPU for 3 years, it starts to become irrelevant. In many games today an HD7850 is too slow at max settings, and specifically it is vastly slower than a 670 when looking at actual frame rates.

Also, your claim that tessellation doesn't add any real value to games is debatable; I addressed that in my post with pictures. Further, Ryan's theoretical benchmarks are just that: a sample size of one, and they contradict all the other reviews on the Internet. I provided at least 4-5 reviews and benchmarks of real-world games where tessellation was used and where the GTX670 consistently beats not only the 7850 but even the 7970. Anyway, let's move on. OP bought the 670.

If you want, we can revisit this discussion later to see who was right:

My view is that GTX670 will continue to beat an HD7850 by 50-60% in next generation games/sequels, including future Dirt, Batman, Crysis, BF3 expansions, etc.