1440p and 1600p are, for all intents and purposes, very close: there is only an 11% difference in pixel count between the two resolutions. The difference between 1440p and 1080p is 78%. Talking about a "middle ground" between 1080p and 1600p makes no sense here because 1080p is nowhere near 1440p. For all intents and purposes, 1600p benchmarks are sufficient, and you failed to talk about them.
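If you want to sanity-check the pixel math, here's a quick Python sketch (nothing here comes from a benchmark, it's just the resolution arithmetic):

```python
# Pixel counts for the three resolutions under discussion
px_1080p = 1920 * 1080   # 2,073,600 pixels
px_1440p = 2560 * 1440   # 3,686,400 pixels
px_1600p = 2560 * 1600   # 4,096,000 pixels

print(f"1600p over 1440p: {px_1600p / px_1440p - 1:.1%}")  # ~11.1%
print(f"1440p over 1080p: {px_1440p / px_1080p - 1:.1%}")  # ~77.8%
```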
I've been talking about them the entire time. The 7950 non-boost is about 8% faster than the stock 580 at 1600p, and closer to 5%, I believe, at 1080p; I just went in between.
You ignored all the other benches I linked, too. How convenient.
They came from the same questionable source. You find it strange that I don't trust a review which claims a 925MHz 7950 is only 1% faster than an 800MHz 7950, but at the same time you want me to use it to discredit my performance stance, and also use it to validate 7950 OC performance?
Give me a break.
That's the whole point of upgrading: to use higher quality settings too. Using your logic, I might as well be gaming on a GTX 285, since I can turn everything down and game at 1280x1024. Games where 1.28GB of VRAM is not enough are the reason people upgrade to 2-3GB cards.
I agree; I'm just offering the opinion that the upgrade will be quite minor, since the difference in IQ will be minor. A lot of people still run old cards; do you find this surprising? Believe it or not, you're in the minority here.
Which games show a problem with 1.28GB of VRAM? BioShock? I think we already addressed that. Do you have any more, or are you saying that upgrading for a texture resolution setting in BioShock is worth it over waiting for 50% or greater performance over the 7950, at the same price and with better perf/W, on 20nm?
Nope, your math doesn't add up. It's going to be 50%.
That's fine. 40, 50, 60, I don't really care; it doesn't change my perspective.
Just an FYI: I played BioShock Infinite on a 7950, 7950 CF, a 9800 GT, and an HD 4600. The 7950 CF didn't make the game better; it didn't affect the story, it didn't change the dynamics of the game, and it didn't change the gameplay.
The GTX 560 Ti 448 can barely reach GTX 580 speeds, and that's in old games where 1.28GB of VRAM is not a bottleneck. A GTX 760 OC is 50% faster than a GTX 580.
A 7950 OC would get around 89 FPS there. Now drop the garbage 4xAA and inject SMAA instead, overclock the 560 Ti 448 past 580 performance, and the 448 will give similar performance with SMAA to what the 7950 OC provides with 4xAA. The average 448 OC runs past 900MHz; it's easily faster than the 580 at that point.
A stock 760 is 36% faster than a GTX 580 in BI. It's impossible for a 760 OC to be only 40% faster than a 560 Ti 448 OC.
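Rough math behind that claim. The 36% figure is from the benchmarks being discussed; treating a 900MHz+ 448 OC as roughly stock-580 level and giving the 760 ~10% OC headroom are my own assumptions:

```python
# Relative performance, stock GTX 580 = 1.00 (illustrative numbers)
gtx580 = 1.00
gtx560ti_448_oc = 1.00           # assumption: a 900MHz+ 448 OC ~= stock 580
gtx760_stock = 1.36              # stock 760 is 36% faster than the 580 in BI
gtx760_oc = gtx760_stock * 1.10  # assumption: ~10% OC headroom on the 760

print(f"760 OC over 448 OC: {gtx760_oc / gtx560ti_448_oc - 1:.0%}")  # ~50%
```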
Considering BioShock uses over 2.2GB of VRAM at 1080p, how much of that performance difference is caused by texture thrashing?
What you need to make your case is to show how much IQ is lost versus the performance gained on smaller-buffer cards. It's an outlier case, which I already addressed by saying the IQ difference would be minimal.
With that logic, there is never any point in upgrading; next thing you know, I am gaming at 1024x768 with 0xAA on Low. He can now turn down settings a bit and get 60 FPS on the 760 in BI. On the 560 Ti 448, you are going to be turning down a lot of settings to hit 60 FPS in BI.
Most people haven't upgraded past 9800 GT-level performance, and that was offered 7 years ago. Really, turning down a few settings on a last-gen card at 1440p becomes 768 with no AA on Low? I guess given how ridiculous your argument is, you'd have to back it up with equally ridiculous statements. You haven't shown whether it's the dynamic lighting, the DOF, or texture thrashing that's causing the poor performance on the last-gen cards, but I imagine that by dropping said DOF and textures down, you would lose minimal IQ (actually gain it by losing the DOF) and gain a considerable amount of performance on your last-gen cards.
Yup, so since you are not happy with the 50% more performance that the 760 provides, you are suggesting he should have waited one more generation to get 50-70% over the 760, because surely 50% or 70% faster than the 560 Ti 448 is not enough for you. OK then, let's revisit in XX months to see how long it takes for a $250 GPU on 20nm to be 50-70% faster than a GTX 760.
Could be a while if next gen is as bad or worse than this gen.
Using your logic, almost everyone this generation wasted money, since there were very few chances to get 50% more performance for a $130 net outlay after reselling your old 40nm GPU. That means if you think the OP is wasting money, then everyone else on our forums threw money into the toilet over the last 18 months. 🙄
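To put a number on what that outlay math looks like (the prices below are illustrative placeholders, not figures from the thread):

```python
# Hypothetical upgrade-cost math; prices are assumptions for illustration
new_card_price = 250.0   # e.g. a current ~$250 GPU
old_card_resale = 120.0  # assumed resale value of the old 40nm card
net_outlay = new_card_price - old_card_resale

perf_gain = 0.50  # the ~50% uplift being argued over
print(f"${net_outlay:.0f} net outlay for {perf_gain:.0%} more performance")
```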
We're all wasting cash here; there is no monetary return on our investment. It doesn't matter if you're buying a Titan or a 630, it's a sunk cost from the start if your purpose is gaming.