WaitingForNehalem
More testing with both Windows 7 and Windows 8: http://techreport.com/review/24022/does-the-radeon-hd-7950-stumble-in-windows-8
Thread title needs to change. 7950 vs 660Ti, not 7970.
Apparently NV has some sort of smoothness advantage for single-GPU, not just multi-GPU, at least in Skyrim but likely in other games as well. This isn't terribly surprising to me. I'd like to see a retest with Cat 12.8 to check whether it helps with the smoothness, since I suspect Cat 12.11 was rushed out, or maybe AMD even knowingly traded off frame-time smoothness for a few more frames per second, knowing that most people only look at fps.
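To make that fps-vs-smoothness tradeoff concrete, here's a minimal sketch with invented frame times (not TR's data): the trace with occasional 50 ms spikes actually posts the *higher* average fps, which is exactly what an fps-only chart rewards and a 99th-percentile frame time catches.

```python
import numpy as np

# Made-up frame-time traces in milliseconds (NOT TR's data).
smooth = np.full(100, 16.7)                  # every frame ~16.7 ms -> even pacing
spiky  = np.array([14.0] * 95 + [50.0] * 5)  # mostly fast frames, a few 50 ms spikes

for name, trace in (("smooth", smooth), ("spiky", spiky)):
    avg_fps = 1000.0 / trace.mean()          # avg fps is the reciprocal of avg frame time
    p99 = np.percentile(trace, 99)           # the smoothness number fps charts hide
    print(f"{name}: {avg_fps:.1f} avg fps, 99th-percentile frame time {p99:.1f} ms")
```

The spiky trace wins on average fps (~63 vs ~60) while stuttering badly, so a driver tuned for the fps column could look better in most reviews and worse in frame-time ones.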
This is unacceptable for this class of card. 50 ms? Those are some horrible stutters; no need for a video.
They should have tried lowering details, vsync, a manual framerate limit, and maybe a different platform/CPU.
hitman928 said: Here's what I don't understand, and why I wanted them to revisit the same scene with the new system:
Time spent beyond 16.7 ms ("Whiterun" scene):

| Card   | First review | Follow-up |
|--------|--------------|-----------|
| 660 Ti | 132 ms       | 30 ms     |
| 7950   | 152 ms       | 259 ms    |

Time spent beyond 50 ms in the first review ("Whiterun" scene): 0 ms for both cards.
Their conclusion from the data: "Interesting. There isn't much change from our older review"
...?
To be fair, the numbers they point at are the 99th percentile, but do they really not see the massive change in 7950 latency? That's roughly a 70% increase in time spent over 16.7 ms. Also, their percentile graph has the 7950 and 660 Ti basically swapping places. And they're the only review I've found so far (maybe there's another?) that shows a decrease in fps from Catalyst 12.7 to 12.11, despite using a card with a higher boost clock.
I'm not saying we should ignore these results; obviously even AMD is looking into them. But can anyone show me any other review of a single-card configuration that in any way corroborates what TechReport is getting? Until then I'll treat this the way I always treat a single review site showing something different from everywhere else I look: as an anomaly. If there's any other place showing the same results, I'd love to see it.
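For anyone eyeballing those numbers, my understanding of TR's "time spent beyond X ms" metric is that each frame contributes only its excess over the cutoff, which is why a handful of 50 ms frames can swing the total so much. A rough sketch of that reading:

```python
import numpy as np

def time_beyond(frame_times_ms, threshold_ms=16.7):
    """Sum only the portion of each frame time above the cutoff
    (my reading of TR's 'time spent beyond X ms' metric)."""
    ft = np.asarray(frame_times_ms, dtype=float)
    return float(np.maximum(ft - threshold_ms, 0.0).sum())

# Three frames at 20, 15, and 60 ms against a 16.7 ms cutoff:
print(time_beyond([20.0, 15.0, 60.0]))  # (20-16.7) + 0 + (60-16.7) = 46.6 ms
```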
@blastingcap
Did you advocate these kinds of tests? If not, what's your idea for improving them? Sorry, I didn't read the whole thread, so I'm not sure if you already posted that.
Oh, you mean when NVDA was found cheating on 3DMark? I would definitely have tried an fps limiter to see what happens; it may smooth things out. As for the different platform and CPU, I can only say that an i7-3820 should be plenty of CPU for either of these cards, but perhaps an overclocked 3570K on a Z77 board would show a different result, with its higher clock speed and PCIe 3.0 (minor difference though that may be)?
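For what it's worth, the idea behind the limiter suggestion is simple enough to sketch. This is a generic pacing loop, not how any particular limiter tool implements it, and `render_frame` is a hypothetical callback:

```python
import time

def run_capped(render_frame, cap_fps=60):
    """Generic frame limiter: after rendering, sleep off the rest of the
    frame budget so frames are delivered at an even pace instead of in
    bursts. render_frame is a hypothetical callback."""
    budget = 1.0 / cap_fps
    while True:
        start = time.perf_counter()
        render_frame()
        left = budget - (time.perf_counter() - start)
        if left > 0:
            time.sleep(left)  # trade spare fps for consistent frame delivery
```

The point is that capping below the card's peak rate converts headroom into even pacing, which is why limiters sometimes hide exactly the kind of spikes TR measured.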
I wouldn't rule that out; similar things have happened before. I remember a few instances where image quality suffered in order to pull ahead in a few benchmarks, around the time Quake 3 was popular.
I want to see more examples. Skyrim is just one game, and you can't draw a conclusion from a single title.
*snip*
Once again, where is this frames-per-second differential in TechReport's review? If the card is producing low frame rates, the frame times will be higher; there's still a direct relationship between them. How in the world did the GTX 660 Ti not only make up a nearly 40% deficit but end up beating the HD 7950 in Skyrim at 2560x1440 + 4xMSAA? I can't explain that, I really can't.
TR's review raises more questions:
1) HD 7970/GHz Edition cards exhibited no such issues in June.
2) The HD 7950 mysteriously gets creamed in Skyrim by the GTX 660 Ti at high resolution with AA, contrary to 95% of the reviews out there.
3) Older AMD cards like the HD 6970 often beat the GTX 570 in smoothness.
In other words, generalizing from this review of the HD 7950 vs. GTX 660 Ti beyond those two cards and the specific games tested isn't exactly confidence-inspiring until other professional reviewers have vetted the data.
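On the fps/frame-time relationship above: average fps is just the reciprocal of the average frame time, so a large sustained latency gap has to show up in the fps column too. A quick back-of-envelope with made-up numbers, not TR's data:

```python
# Hypothetical averages only: a ~40% fps gap implies a correspondingly
# large average frame-time gap, since fps = 1000 / avg_frame_time_ms.
fps_a, fps_b = 56.0, 40.0
print(1000.0 / fps_a)  # ~17.9 ms average frame time
print(1000.0 / fps_b)  # 25.0 ms average frame time
```

Spikes can hide inside a similar *average*, but a swing as big as the one in the follow-up numbers should leave a visible mark on fps as well, which is the puzzle here.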
But I do have a point: suppose your card is generating frames at x fps, and you capture that with a camera that can shoot at 2x fps. Now I'd be looking at the same frame twice; how would I distinguish between them?
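One workable answer, assuming a lossless capture: a rendered frame sampled twice produces bit-identical captures, so you can just drop exact repeats. (A real camera adds noise, which is why FCAT-style tools stamp each rendered frame with a colored overlay instead of comparing pixels.) Toy sketch:

```python
import numpy as np

def drop_repeats(captures):
    """Keep only captures that differ from the previous one; at 2x the
    render rate, a bit-identical repeat is the same rendered frame
    sampled twice."""
    kept = [captures[0]]
    for frame in captures[1:]:
        if not np.array_equal(frame, kept[-1]):
            kept.append(frame)
    return kept
```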
4xAA vs 8xAA maybe. 8xAA is very hard on bandwidth.
Also indoor vs outdoor - they can have quite different performance characteristics.
RS, where did you get that graph? What I see is this:
You can't compare different driver versions (especially betas) for stuttering or other issues; that's the whole point. Something may be sacrificed for a better FPS result. In the past that might have been flickering or disappearing textures; now it may be some other optimization. Which could come from either competing company, in theory.
