5870 staying power

lavaheadache

Diamond Member
Jan 28, 2005
6,893
14
81
Been couch-bound for a couple of days, so I got around to toying with my HTPC.

It's got the following specs.

i3
Vertex 3 SSD
5870
Win 7

I installed Far Cry 3 for kicks, fully expecting it to choke. Nope. The game is completely playable @ 1080p, max quality settings, no AA. Frame rates are mostly in the mid-30s and 40s.

Certainly one could argue that those frame rates aren't ideal, but then again, that is subjective.

If I were to seriously play on the machine I would probably drop the settings a tick or two, but damn... a circa-2009 card playing a 2012 top-end game with minimal trouble.
 

Zanovar

Diamond Member
Jan 21, 2011
3,446
232
106
I'm sure you could drop a few more settings and hold 60 fps (I'm guessing here), and that 5870 would be playing fine. I'm all over the place with this game :S, from 0 SSAO to false PostFxQuality; I just can't get this game running to my liking :S.
 

Red Hawk

Diamond Member
Jan 1, 2011
3,266
169
106
Really?

[Benchmark chart: High quality, 1920 resolution]

[Benchmark chart: 1920 resolution]


Both benchmarks use 4x AA, true, but still. The game chops the 7870/660 down to 20 or 30 FPS, and the 5870 is slower than those cards.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
Cypress was a good architecture. Its biggest weakness in modern games is its smaller tessellation engine, but tessellation levels can easily be dialed back in the drivers.
 

lavaheadache

Diamond Member
Jan 28, 2005
6,893
14
81
Really?


Both benchmarks use 4x AA, true, but still. The game chops the 7870/660 down to 20 or 30 FPS, and the 5870 is slower than those cards.

4x AA is a tough pill to swallow, especially in this game.

I'm sure if I tried 4x AA it would be another matter entirely.
 

lavaheadache

Diamond Member
Jan 28, 2005
6,893
14
81
I'm going to pop one of my Eyefinity 6 2 GB cards in there and see if there is a notable difference between the 1 GB card and a 2 GB one.
 
Feb 19, 2009
10,457
10
76
MSAA cuts performance hugely in these recent games. I was running BF3 with no MSAA on high/ultra on my OC'd 5850, getting a constant 60 fps in MP, which was good. But with MSAA on, it gets knocked down to around 35 fps.

You don't need the latest hardware to enjoy the latest games; a few FX settings that don't improve visuals much are often the difference between great fps and a slideshow.
 

Zanovar

Diamond Member
Jan 21, 2011
3,446
232
106
MSAA cuts performance hugely in these recent games. I was running BF3 with no MSAA on high/ultra on my OC'd 5850, getting a constant 60 fps in MP, which was good. But with MSAA on, it gets knocked down to around 35 fps.

You don't need the latest hardware to enjoy the latest games; a few FX settings that don't improve visuals much are often the difference between great fps and a slideshow.

But sir! The purists demand it the way it was intended. *laughs*
 

ShadowOfMyself

Diamond Member
Jun 22, 2006
4,227
2
0
MSAA tanks cards compared to no AA. Irrelevant charts.

Yup, learned this early on.

I don't really understand what's going on, since AA used to have such a small performance hit not long ago, but recently it's like we're back to GeForce 4 AA performance.

This goes for most game benchmarks nowadays, though... Of course, the point is to compare different cards at equal settings, so they serve their purpose, but one should never take reviews as an example of how well a card will run a certain game.

Generally, just lowering shadows and turning off AO makes everything playable on any midrange card, with almost no visual difference (certainly not noticeable while running around shooting at stuff).
 

blckgrffn

Diamond Member
May 1, 2003
9,686
4,345
136
www.teamjuchems.com
My 3870 taught me to enjoy games with no AA, and I expect my 5870 to last a long time because of it ;)

I wish I had spent the extra $30 (at the time) on a 2 GB card; then I'd feel even better about it. My 5870 turns two years old in January and was well worth the $200, IMHO.

Thanks for posting about FC3. I'm excited to try it when it goes on a big Steam sale, and it's good to know I should be able to enjoy the eye candy :)
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
The game is completely playable @ 1080p, max quality settings, no AA. Frame rates are mostly in the mid-30s and 40s... damn... a circa-2009 card playing a 2012 top-end game with minimal trouble.

Yup, that sounds about right. Very impressive for a 2009 GPU. Only 3 fps slower than a GTX 580 with no AA!

[Benchmark chart: FC3, 1920 resolution]


I also think two things are making the HD 5870 look better than would normally have been the case:

1) Most games are still made with consoles in mind. Even FC3, while pretty, is not really a next-generation game in terms of graphics (just my opinion). That means even if you crank everything to the max on a GTX 680/7970, FC3 won't look much better than it does on an HD 5870, as most games now look very similar between High and Ultra. That makes it a bit harder to see the value of newer $500 cards.

2) I don't view the GTX 500/HD 6900 series as a real new generation in performance. They are more of a refresh in my eyes. That means since September 2009, when the HD 5870 launched, we have really only gone through one major new generation (GTX 600/HD 7900), but it's been about 3 years and 3 months. That means the pace of innovation/performance improvements has slowed.

Just to give you a point of reference, we went from the 9800 GTX to the GTX 580 in just 2.5 years:
March 31, 2008 = 9800 GTX launched
November 9, 2010 = GTX 580 launched

The GTS 250 is actually slightly faster than the 9800 GTX, but you can see the GTX 580 is 143% faster than the GTS 250 with AA/AF:
http://www.computerbase.de/artikel/grafikkarten/2011/bericht-grafikkarten-evolution/3/

The HD 7970 GHz Edition is about 80% faster than the 5870 with AA/AF:
http://www.computerbase.de/artikel/grafikkarten/2012/test-amd-radeon-hd-7970-ghz-edition/4/

So we have a 143% increase from NV in 2.5 years vs. an 80% increase in 3.25 years when comparing the HD 5870 to where we are today. Games aren't looking much better than Metro 2033/Crysis/Warhead did, and GPU speed has increased at a slower pace. The end result is that the HD 5870 still looks great.
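
To put those two spans on the same footing, here's a quick back-of-the-envelope Python sketch (my own illustration; the only inputs are the launch dates and percentages quoted above) that annualizes each jump as compound growth:

```python
# Annualize the two cumulative speedups quoted above so they can be
# compared directly: (1 + total_gain) ** (1 / years) - 1 is the
# equivalent compound yearly growth rate.

def annualized(total_gain, years):
    """Convert a cumulative speedup into a compound yearly growth rate."""
    return (1 + total_gain) ** (1 / years) - 1

nv_rate = annualized(1.43, 2.5)    # GTS 250 -> GTX 580: +143% in ~2.5 years
amd_rate = annualized(0.80, 3.25)  # HD 5870 -> 7970 GHz: +80% in ~3.25 years

print(f"GTS 250 -> GTX 580:     {nv_rate:.1%} per year")   # ~42.6%/year
print(f"HD 5870 -> 7970 GHz Ed: {amd_rate:.1%} per year")  # ~19.8%/year
```

Roughly 43% per year in the earlier window versus roughly 20% per year since the 5870, which is exactly the slowdown I'm describing.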
 

bunnyfubbles

Lifer
Sep 3, 2001
12,248
3
0
I really don't like the term "playable", especially how people throw it around as if it's a good thing when it really isn't.

It's like how all the light beer commercials essentially describe the product as "drinkable", or how someone might describe a relatively unattractive person as "doable".

As much as I love the 5800 series, and the 5850 I still have and admire in a backup rig of mine (easily one of the best PC-related purchases I've made in the past decade, especially given the cooling mod I pulled off that lets me easily hit 1+ GHz), I'm not going to delude myself into thinking it can do things it really can't. Whenever I'm playing on my 5850 rig, the settings are going down.
 

Zanovar

Diamond Member
Jan 21, 2011
3,446
232
106
I really don't like the term "playable", especially how people throw it around as if it's a good thing when it really isn't.

It's like how all the light beer commercials essentially describe the product as "drinkable", or how someone might describe a relatively unattractive person as "doable".

As much as I love the 5800 series, and the 5850 I still have and admire in a backup rig of mine (easily one of the best PC-related purchases I've made in the past decade, especially given the cooling mod I pulled off that lets me easily hit 1+ GHz), I'm not going to delude myself into thinking it can do things it really can't. Whenever I'm playing on my 5850 rig, the settings are going down.

So what's playable to you, whether it's good or not? *grins*
 

lavaheadache

Diamond Member
Jan 28, 2005
6,893
14
81
Yup, that sounds about right. Very impressive for a 2009 GPU. Only 3 fps slower than a GTX 580 with no AA!

I also think two things are making the HD 5870 look better than would normally have been the case:

1) Most games are still made with consoles in mind. Even FC3, while pretty, is not really a next-generation game in terms of graphics (just my opinion). That means even if you crank everything to the max on a GTX 680/7970, FC3 won't look much better than it does on an HD 5870, as most games now look very similar between High and Ultra. That makes it a bit harder to see the value of newer $500 cards.

2) I don't view the GTX 500/HD 6900 series as a real new generation in performance. They are more of a refresh in my eyes. That means since September 2009, when the HD 5870 launched, we have really only gone through one major new generation (GTX 600/HD 7900), but it's been about 3 years and 3 months. That means the pace of innovation/performance improvements has slowed.

Just to give you a point of reference, we went from the 9800 GTX to the GTX 580 in just 2.5 years:
March 31, 2008 = 9800 GTX launched
November 9, 2010 = GTX 580 launched

The GTS 250 is actually slightly faster than the 9800 GTX, but you can see the GTX 580 is 143% faster than the GTS 250 with AA/AF:
http://www.computerbase.de/artikel/grafikkarten/2011/bericht-grafikkarten-evolution/3/

The HD 7970 GHz Edition is about 80% faster than the 5870 with AA/AF:
http://www.computerbase.de/artikel/grafikkarten/2012/test-amd-radeon-hd-7970-ghz-edition/4/

So we have a 143% increase from NV in 2.5 years vs. an 80% increase in 3.25 years when comparing the HD 5870 to where we are today. Games aren't looking much better than Metro 2033/Crysis/Warhead did, and GPU speed has increased at a slower pace. The end result is that the HD 5870 still looks great.


I think your reasoning has to be fine-tuned a little bit. You mention the 9800 GTX and GTS 250, but if you want to be truly accurate, referring to the 8800 GTX (G80) or 8800 GTS 512 (G92) instead of the GTS 250 would be better. The GTS 250 is, after all, based on G92, which is basically a die-shrunk and refined G80. See how fast that 2.5-year span just got longer. :)
 

DominionSeraph

Diamond Member
Jul 22, 2009
8,386
32
91
I really don't like the term "playable", especially how people throw it around as if it's a good thing when it really isn't.

It's like how all the light beer commercials essentially describe the product as "drinkable", or how someone might describe a relatively unattractive person as "doable".

Says a person using LCDs. LOL.
 

BoFox

Senior member
May 10, 2008
689
0
0
Yup, that sounds about right. Very impressive for a 2009 GPU. Only 3 fps slower than a GTX 580 with no AA!

[Benchmark chart: FC3, 1920 resolution]

I also think two things are making the HD 5870 look better than would normally have been the case:

1) Most games are still made with consoles in mind. Even FC3, while pretty, is not really a next-generation game in terms of graphics (just my opinion). That means even if you crank everything to the max on a GTX 680/7970, FC3 won't look much better than it does on an HD 5870, as most games now look very similar between High and Ultra. That makes it a bit harder to see the value of newer $500 cards.

2) I don't view the GTX 500/HD 6900 series as a real new generation in performance. They are more of a refresh in my eyes. That means since September 2009, when the HD 5870 launched, we have really only gone through one major new generation (GTX 600/HD 7900), but it's been about 3 years and 3 months. That means the pace of innovation/performance improvements has slowed.

Just to give you a point of reference, we went from the 9800 GTX to the GTX 580 in just 2.5 years:
March 31, 2008 = 9800 GTX launched
November 9, 2010 = GTX 580 launched

The GTS 250 is actually slightly faster than the 9800 GTX, but you can see the GTX 580 is 143% faster than the GTS 250 with AA/AF:
http://www.computerbase.de/artikel/grafikkarten/2011/bericht-grafikkarten-evolution/3/

The HD 7970 GHz Edition is about 80% faster than the 5870 with AA/AF:
http://www.computerbase.de/artikel/grafikkarten/2012/test-amd-radeon-hd-7970-ghz-edition/4/

So we have a 143% increase from NV in 2.5 years vs. an 80% increase in 3.25 years when comparing the HD 5870 to where we are today. Games aren't looking much better than Metro 2033/Crysis/Warhead did, and GPU speed has increased at a slower pace. The end result is that the HD 5870 still looks great.
Good read! :thumbsup:

The HD 5870 was awesome, yeah, for a 3.25-year-old card, considering that there's not yet a single-GPU card that's 2x as fast OVERALL.

But I do wish that AMD had also put that 512-bit bus on the 5870, like they did with the 2900 XT a few years earlier. In that link
http://www.computerbase.de/artikel/grafikkarten/2011/bericht-grafikkarten-evolution/3/
(although it tests only 7 games)
the HD 5870 beats the HD 4890 by ONLY 39-44%, despite the HD 5870 having pretty much 2x the GPU muscle (apart from the memory clock/bus): identical core clock, 2x the TMUs, 2x the shaders, 2x the ROPs, plus roughly a 2% improvement from Evergreen architecture optimizations. Even by the article's publication later in 2011, with newer, more shader-heavy games (though still DX9-only for comparison purposes), the 5870's 1600 SPs did not appear to shine much at all against the 4890's 800-SP part. If the 5870 had sported a 512-bit bus, it would easily have been a full 15% faster than it was.

The GTX 480, then being only 15% faster overall, would have been a complete LOSS due to being insanely inefficient; perhaps NV would have delayed GF100 entirely until GF110 (GTX 580) was ready.

Why, oh why, didn't AMD do it aggressively enough? They were extremely aggressive with the X1900 XTX and the HD 2900 XT. I guess they just couldn't cope with the defeat very well when Nvidia was even more aggressive with its godly 8800 GTX and 8800 Ultra. The backpedaling to multiple, tiny GPUs (HD 3870 X2) proved unsuccessful. They wanted to do a quad-3870 card, but it was never quite feasible, with the drivers not always scaling ideally in games, plus insane power consumption. The 4870 should have been done in place of the 3870 (about 7 months earlier), in order to completely tip the scales before Nvidia even got to its G92 architecture. It might even have forced NV to go ahead with its G90 architecture (I think I know what the specs were) rather than taking the luxury of moving on to the slightly delayed GT200 arch.
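
For a rough sense of what that wider bus would have meant on paper, here's a minimal sketch (my own arithmetic, assuming the 5870's reference 1200 MHz GDDR5 memory, i.e. 4.8 Gbps effective per pin):

```python
# Peak memory bandwidth for the stock HD 5870's 256-bit bus versus the
# hypothetical 512-bit version discussed above. Assumes the reference
# 1200 MHz GDDR5 clock, i.e. 4.8 Gbps effective per pin.

def bandwidth_gb_s(bus_width_bits, gbps_per_pin):
    """Peak bandwidth in GB/s: pin count x per-pin rate / 8 bits per byte."""
    return bus_width_bits * gbps_per_pin / 8

stock = bandwidth_gb_s(256, 4.8)         # 153.6 GB/s (actual HD 5870)
hypothetical = bandwidth_gb_s(512, 4.8)  # 307.2 GB/s

print(f"256-bit: {stock:.1f} GB/s")
print(f"512-bit: {hypothetical:.1f} GB/s")
```

Double the bus, double the peak bandwidth, which is arguably the headroom the doubled shaders/TMUs/ROPs needed to stretch their legs.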

Sometimes, in a fight, aggressiveness is the main ingredient.

Losing with the HD 2900 XT did get them down, but they should have kept at it with the same fierce strategy. They were working on a 640-SP R700 (rather than the 800-SP part) for the 4870 months earlier, right when the 3870 launched, but they should have unleashed it instead of waiting all the way until summer 2008. Then they could have done a yet bigger version of it, perhaps with more than 800 SPs and even 32 ROPs, to give the oversized GTX 280 a really, really good fight until Nvidia was finally ready with its late 55nm GT200b.

True, the 5870 was finally their 'win'. It was great, alright. Nvidia's continued aggressiveness caused them to stumble early, while AMD kicked them in the sweet ribs with the 5970.

The 5970 was what they originally wanted to do with their 3870X2. Sheer dominance with grace and ease in the majority of popular modern games.

However, I would still have liked to see them do a 512-bit version of the HD 5870, even if not until after the GTX 480 came out, just to have a single-GPU card keep Nvidia down and humble.

Then AMD would have been able to carry that dominant energy over into Cayman by also designing it as at least a 512-bit card. A 512-bit 6970 would have been neck and neck with the GTX 580.

Also, it would have been nice to see a souped-up version of Tahiti sporting 48 ROPs, without the limited crossbar-access shortcuts to the 384-bit memory, for more efficient bandwidth utilization (and greater theoretical peak pixel fill rate). Ever seen a 7870 actually beat a 7950 in a few games, despite the 7950 having greater GFLOPS capability and a 384-bit bus? Hopefully 48 ROPs is what AMD will be doing with their Sea Islands HD 8870. It's just strange seeing the 960 VLIW5-SP HD 6850 with the same 32 ROPs as the much bigger, vastly more muscular 7970.

AMD actually said that Barts XT with 32 ROPs and 1120 SPs was 2% faster overall than a prototype with 16 ROPs but 1280 SPs (the 16 additional ROPs being worth yet 2% more than adding 160 shaders). That was one of the most insightful things an engineer has ever shared, apart from refusing to ever explain WHY VLIW5 Barts performed so well against the older Evergreen in virtually all games, spec-wise behaving as if it were a VLIW4-optimized architecture like its bigger Cayman brother. The engineers just refused to explain the front-end magic in that one.
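
That ROPs-versus-shaders trade is easy to sanity-check on paper. A minimal sketch (my own arithmetic, assuming Barts XT's 900 MHz reference clock and 2 FLOPs per SP per cycle for a multiply-add):

```python
# Paper throughput for the Barts XT trade-off described above: the shipped
# 32-ROP/1120-SP part versus the 16-ROP/1280-SP prototype.
# Assumes the 900 MHz reference core clock; each VLIW5 SP does
# 2 FLOPs per cycle (one multiply-add).

CLOCK_GHZ = 0.9

def pixel_fill_gpix(rops):
    """Peak pixel fill rate in Gpixels/s."""
    return rops * CLOCK_GHZ

def shader_gflops(sps):
    """Peak shader throughput in GFLOPS (2 FLOPs per SP per cycle)."""
    return sps * 2 * CLOCK_GHZ

print(f"Shipped (32 ROPs, 1120 SPs):   {pixel_fill_gpix(32):.1f} Gpix/s, "
      f"{shader_gflops(1120):.0f} GFLOPS")
print(f"Prototype (16 ROPs, 1280 SPs): {pixel_fill_gpix(16):.1f} Gpix/s, "
      f"{shader_gflops(1280):.0f} GFLOPS")
```

Halving the ROPs halves the peak fill rate, while the extra 160 SPs only buy about 14% more shader throughput, so it's believable that the 32-ROP configuration came out ahead overall.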

Here's to the future! The HD 8870, probably beating Big Kepler to market, finally giving us more than 2x the single-GPU performance of the 5870! I don't expect Sea Islands to be faster than BigK, but I will never know for sure until..!! What if there's just some sick magic in it that lets a 410 mm^2 GPU beat a 550 mm^2 goliath during this difficult 28nm round?!?

Heck, AMD could easily have done it with a beefed-up RV790 XT"X" that had 960 SPs and 32 ROPs, slinging a rock right between the eyes of the 576 mm^2 GTX 280 goliath, the biggest GPU ever made! Why, oh why, only redesign RV770 into RV790, with millions more transistors but nothing to show except a 100 MHz increase?!? Sometimes all a knockout takes is a 100% fully powered punch, not one thrown with 90% of the energy.
 

lamedude

Golden Member
Jan 14, 2011
1,230
68
91
I don't really understand what's going on, since AA used to have such a small performance hit not long ago, but recently it's like we're back to GeForce 4 AA performance.
IIRC, in deferred-rendered games the geometry data has to be stored in a separate G-buffer to do MSAA (it usually gets thrown away, which is why driver-forced AA doesn't work). I guess moving that extra data around slows things down. AMD's Forward+ rendering provides the best of both worlds, so hopefully developers ditch deferred rendering when they move on to next-gen consoles.
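
To get a feel for how much extra data that is, here's a rough sketch (my own illustration; the layout is a hypothetical but typical G-buffer of four 32-bit render targets plus a 32-bit depth buffer) of how MSAA multiplies G-buffer storage:

```python
# Illustrative G-buffer footprint at 1920x1080, with and without 4x MSAA.
# Hypothetical but typical deferred layout: four 32-bit (RGBA8) render
# targets plus a 32-bit depth buffer; MSAA stores every sample per pixel.

WIDTH, HEIGHT = 1920, 1080
BYTES_PER_PIXEL = 4 * 4 + 4  # four 4-byte targets + 4-byte depth = 20 bytes

def gbuffer_mib(samples):
    """Total G-buffer size in MiB for a given MSAA sample count."""
    return WIDTH * HEIGHT * BYTES_PER_PIXEL * samples / 2**20

print(f"No AA:   {gbuffer_mib(1):.0f} MiB")  # ~40 MiB
print(f"4x MSAA: {gbuffer_mib(4):.0f} MiB")  # ~158 MiB
```

Roughly 160 MiB of render targets on a 1 GB card before textures and geometry even enter the picture, plus the bandwidth to write and re-read every sample in the lighting pass, which fits the big MSAA hits people are seeing.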
 

Red Hawk

Diamond Member
Jan 1, 2011
3,266
169
106
Yup, that sounds about right. Very impressive for a 2009 GPU. Only 3 fps slower than a GTX 580 with no AA!

I also think two things are making the HD 5870 look better than would normally have been the case:

1) Most games are still made with consoles in mind. Even FC3, while pretty, is not really a next-generation game in terms of graphics (just my opinion). That means even if you crank everything to the max on a GTX 680/7970, FC3 won't look much better than it does on an HD 5870, as most games now look very similar between High and Ultra. That makes it a bit harder to see the value of newer $500 cards.

2) I don't view the GTX 500/HD 6900 series as a real new generation in performance. They are more of a refresh in my eyes. That means since September 2009, when the HD 5870 launched, we have really only gone through one major new generation (GTX 600/HD 7900), but it's been about 3 years and 3 months. That means the pace of innovation/performance improvements has slowed.

1. Fair enough. But by its very nature as an open-world game, Far Cry 3 can still push GPUs fairly hard by extending draw distances. So you've got more geometry, textures, and lighting being drawn; you've got higher-resolution base textures; and you've got several MSAA passes on all of that. In the end you're left with visuals well above and beyond anything a console is capable of.

You can look at the effects implemented on AMD's game blog, and you can get a frame-for-frame comparison between the PC and console versions in Eurogamer's Digital Foundry column. A couple of things stand out to me: the game actually has a native implementation of transparency supersampling AA. This isn't full-blown supersampling as implemented in games like Sleeping Dogs or The Witcher 2; rather, it selectively applies a supersample filter to 2D "transparent" textures, such as grass, leaves, or chain-link fences. This option has been available for a while as an override in AMD's and Nvidia's graphics control panels, but this is the first time I've seen it natively implemented in a game. Also, for such a demanding DX11 game, it's interesting that there is no implementation (or at least no mention) of tessellation. Even with the tessellation improvements Cayman and GCN brought about, is AMD still not confident enough with tessellation to push it in their Gaming Evolved titles? (Barring the occasional character-smoothing implementations like Deus Ex: Human Revolution and the more recent Hitman: Absolution, we've yet to see a GE game implement environmental tessellation to the extent that TWIMTBP games like Crysis 2 or Batman: Arkham City do.)

2. Hmm... the move from VLIW5 to VLIW4 in Cayman, along with the move to a dual graphics engine front end, felt more like a generational leap than a refresh. But you're right, the performance numbers really don't back it up as being any more than a refresh (unless you're looking at the Arkham City Extreme benchmark. o_O Evergreen sucks at Arkham City, for whatever reason). I remember hearing that AMD wanted to make the 6000 series on 28 nm but couldn't. This is what kept AMD from truly replacing Juniper (5700 series) with Barts (6800 series): they couldn't sell Barts on 40 nm as cheaply as they would have on 28 nm. I'm guessing this also kept Cayman from achieving the clock speeds they were hoping for, with only a measly 30 MHz advantage over Cypress. Had Cayman been on 28 nm, I'm sure we would have seen much more of an advantage over Cypress than we ended up with. But then, the fault for this really falls to TSMC, not AMD. And I'm sure Nvidia was hit with the same issues by the delay in going to 28 nm.

Also, it would have been nice to see a souped-up version of Tahiti sporting 48 ROPs, without the limited crossbar-access shortcuts to the 384-bit memory, for more efficient bandwidth utilization (and greater theoretical peak pixel fill rate). Ever seen a 7870 actually beat a 7950 in a few games, despite the 7950 having greater GFLOPS capability and a 384-bit bus? Hopefully 48 ROPs is what AMD will be doing with their Sea Islands HD 8870. It's just strange seeing the 960 VLIW5-SP HD 6850 with the same 32 ROPs as the much bigger, vastly more muscular 7970.

AMD actually said that Barts XT with 32 ROPs and 1120 SPs was 2% faster overall than a prototype with 16 ROPs but 1280 SPs (the 16 additional ROPs being worth yet 2% more than adding 160 shaders). That was one of the most insightful things an engineer has ever shared, apart from refusing to ever explain WHY VLIW5 Barts performed so well against the older Evergreen in virtually all games, spec-wise behaving as if it were a VLIW4-optimized architecture like its bigger Cayman brother. The engineers just refused to explain the front-end magic in that one.

That actually hits on an interesting quirk with Tahiti. It has the same number of ROPs, and also the same number of front-end graphics engines, as both its flagship predecessor Cayman and its "little brother" Pitcairn. It isn't really a step forward in those areas, and indeed in certain situations the stock 7950 will lose to the higher-clocked 7870, since graphics engine performance (which includes tessellation) is directly correlated with clock speed and the 7870 runs at a higher clock. It's all the more interesting since, back when they redesigned the graphics engine into dual parts, a specific plus for the design was that it would be as scalable as AMD wanted going forward. Ryan Smith said this in Anandtech's 6970/6950 review: "Furthermore AMD believes they have an edge on NVIDIA when it comes to design - AMD can scale the number of [g]raphics engines at will, whereas NVIDIA has to work within the logical confines of their GPC/SM/SP ratios. This tidbit would seem to be particularly important for future products, when AMD looks to scale beyond 2 graphics engines." So AMD definitely could have put more graphics engines into Tahiti, but why didn't they? I guess the GCN architecture was already taking up so much die space with the compute-focused bits and generating so much heat that AMD couldn't afford to fit another graphics engine on there.
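
The clock dependence is easy to see on paper. A minimal sketch (my own arithmetic, assuming the 1000 MHz and 800 MHz reference clocks, 32 ROPs on both chips, and a 2-triangle-per-clock dual front end):

```python
# Peak fill and setup rates for stock Pitcairn (7870) vs. Tahiti Pro (7950),
# which share 32 ROPs and a dual graphics engine (assumed ~2 triangles per
# clock total). Only the core clock differs, so these ceilings scale
# purely with clock.

ROPS = 32
TRIS_PER_CLOCK = 2  # assumption: one triangle per graphics engine per clock

cards = {"HD 7870": 1.000, "HD 7950": 0.800}  # reference core clocks, GHz

for name, clock_ghz in cards.items():
    fill_gpix = ROPS * clock_ghz             # peak pixel fill, Gpixels/s
    setup_gtri = TRIS_PER_CLOCK * clock_ghz  # peak triangle setup, Gtris/s
    print(f"{name}: {fill_gpix:.1f} Gpix/s fill, {setup_gtri:.1f} Gtris/s setup")
```

Same ROPs, same front end, so fill and setup rates scale purely with clock, and the 7870's 25% clock advantage carries straight through whenever those are the bottleneck.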

Oh well. I'm with you in hoping that AMD beefs up the 8970, though I don't really think a 512-bit memory bus is necessary.
 

bunnyfubbles

Lifer
Sep 3, 2001
12,248
3
0
So what's playable to you, whether it's good or not? *grins*

I prefer to keep my minimum frame rate above 60 fps, with an average above 90. I've been playing quite a bit of PlanetSide 2 recently, and it will dip into the 30s and 40s due to the sheer number of players, and I almost can't stand it. Luckily that's a minimum-frame-rate scenario and nowhere near the averages I get, which are closer to 60-70. Yes, it's "playable" at those most taxing moments, but it's nowhere near as enjoyable as when it's 80+ fps in areas of sparser population/action. Unfortunately it's simply something that has to be lived with, as turning all the settings down hardly does anything; the game is simply too unoptimized.

Says a person using LCDs. LOL.

Damn, I try to keep up on hardware but I didn't realize there was a relevant technology currently available as a viable alternative to LCD.

Surely you can't be talking about CRT, a dead technology. All my CRTs have degraded into junk and would be too costly to bother repairing and maintaining, and it's certainly not worth playing the crapshoot of buying used/refurbished models. That also doesn't address the fact that they're still terrible for any basic text work (i.e., just about all modern web browsing) and are a huge PITA to deal with due to their sheer size and weight (especially the good ones), which is a major detraction since I'm often moving around and want to take my monitors with me. My 120 Hz monitors might be lowly LCDs, but they're far better and faster than the average LCD, to the point where they're actually tolerable.

Sure, the moment there is a viable technology (OLED?) that can save us from LCD and can be had in a 24-30" size for around $1000, I'll be all over it. Maybe you know something I don't? Is there a plasma TV out there with zero input latency that can do more than 60 Hz over HDMI without a hack (or even with one, as long as it's confirmed to work...)?
 

DominionSeraph

Diamond Member
Jul 22, 2009
8,386
32
91
Surely you can't be talking about CRT, a dead technology. All my CRTs have degraded into junk and would be too costly to bother repairing and maintaining, and it's certainly not worth playing the crapshoot of buying used/refurbished models. That also doesn't address the fact that they're still terrible for any basic text work (i.e., just about all modern web browsing) and are a huge PITA to deal with due to their sheer size and weight (especially the good ones), which is a major detraction since I'm often moving around and want to take my monitors with me. My 120 Hz monitors might be lowly LCDs, but they're far better and faster than the average LCD, to the point where they're actually tolerable.

Hey, make all the excuses you want that your LCD gives you "playable" black levels and color accuracy. I really don't like the term "playable", especially how people throw it around as if it's a good thing when it really isn't. It's like how all the light beer commercials essentially describe the product as "drinkable", or how someone might describe a relatively unattractive person as "doable".
 

bunnyfubbles

Lifer
Sep 3, 2001
12,248
3
0
Hey, make all the excuses you want that your LCD gives you "playable" black levels and color accuracy. I really don't like the term "playable", especially how people throw it around as if it's a good thing when it really isn't. It's like how all the light beer commercials essentially describe the product as "drinkable", or how someone might describe a relatively unattractive person as "doable".

Oh, I thought you were talking about LCD in terms of motion clarity, which is the only area where your comment could possibly have had any real relevance to this topic.

But I guess you're just a casual gamer who spouts off about image quality without a care about motion clarity, likely because you're incapable of perceiving and appreciating it.
 