Yeah, it's all just me, Willie. I'm at war with myself. I've always been a self-conflicted individual, longing for a sense of smoothness in my gaming experiences. It isn't a focus group thing. It's my thing. :thumbsup:
This is something we will eventually look at, but as I've mentioned before, it's not something I want to do until I have the right tools (which I'm expecting sooner rather than later). Scott has done an amazing job with FRAPS, but that NVIDIA quote isn't wrong.
To really go at this I'd like to be able to time frame updates at a low level (i.e. the actual buffer swaps), but also keep track of a frame's time in the rendering pipeline. Ideally we need timestamps for both the buffer swap and the simulation itself, so we can check the rate at which frames are being displayed against the rate at which they're being generated. Both need to be consistent if you want to maximize smoothness. For example, if frames are being displayed every 30ms but the simulation is generating them at a far more variable rate (say 15ms and 45ms, alternating), then the result isn't going to be particularly smooth, just as it won't be if the buffer swaps themselves happen at uneven intervals.
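To make that example concrete, here's a minimal sketch (in Python, using made-up numbers rather than real driver data) of the two timestamp streams described above: an even 30ms display cadence can still hide a very uneven simulation cadence.

```python
# Hypothetical timestamp streams: frames displayed every 30 ms, but
# generated by the simulation at alternating 15 ms / 45 ms intervals.
from statistics import pstdev

FRAMES = 9

# When each frame's buffer swap occurs (perfectly even cadence).
display_ts = [30 * i for i in range(FRAMES)]

# When the simulation finished generating each frame
# (alternating 15 ms / 45 ms steps, averaging the same 30 ms).
sim_ts = []
t = 0
for i in range(FRAMES):
    sim_ts.append(t)
    t += 15 if i % 2 == 0 else 45

def intervals(ts):
    """Frame-to-frame deltas in ms."""
    return [b - a for a, b in zip(ts, ts[1:])]

disp_iv = intervals(display_ts)
sim_iv = intervals(sim_ts)

print("display intervals:", disp_iv)  # all 30 ms
print("sim intervals:    ", sim_iv)   # 15, 45, 15, 45, ...
print("display jitter: %.1f ms" % pstdev(disp_iv))  # 0.0 ms
print("sim jitter:     %.1f ms" % pstdev(sim_iv))   # 15.0 ms
```

Both streams average 30ms per frame, so a plain FPS counter would report them as identical; only the interval-level jitter exposes the mismatch.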
In the meantime, however, I'm personally satisfied with both AMD and NVIDIA in this regard; in my experience, neither of them is notably worse than the other when it comes to single-GPU configurations. At the same time, this community has a terrible habit of making mountains out of molehills, so if there's a "big war" brewing then I fear you guys might be taking this whole subject a bit too seriously. Spend less time looking at charts and more time playing video games; it's not like there's a shortage of good action games this year.
Regarding Skyrim, here are my results with FRAPS, on the 12.11 Beta 8 drivers:
Outside, I did Soul Trap a few mudcrabs along my path on both runs; the second run also had a mountain lion.
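For anyone who wants to crunch their own FRAPS numbers the same way, here's a minimal sketch of turning a frametimes log into interval statistics. It assumes the usual two-column "Frame, Time (ms)" layout with cumulative times; the filename is hypothetical.

```python
# Minimal sketch: convert a FRAPS "frametimes" log into per-frame
# intervals and summary statistics. Assumes cumulative times in ms
# in the second column; "skyrim frametimes.csv" is a placeholder name.
import csv

def frame_intervals(path):
    """Return the list of frame-to-frame times in ms."""
    times = []
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the "Frame, Time (ms)" header row
        for row in reader:
            times.append(float(row[1]))
    return [b - a for a, b in zip(times, times[1:])]

if __name__ == "__main__":
    iv = sorted(frame_intervals("skyrim frametimes.csv"))
    n = len(iv)
    print("frames: %d, avg: %.1f ms" % (n, sum(iv) / n))
    print("99th percentile frame time: %.1f ms" % iv[int(n * 0.99)])
```

The 99th-percentile figure is the kind of metric Tech Report's articles lean on: it captures the occasional long frame that an average FPS number smooths over.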
I think this helps confirm my suspicion that Tech Report's new numbers are more an issue with their new test bench (a Windows 8 machine), as your results match what they were getting before, and your run is more demanding.
It still makes sense to clear up the question of whether Radeon cards have more jitter, whether or not the question is driven by fanboy partisanship.

Keys, I like your posts and I'm happy you're part of these forums, NVIDIA focus group or not. I think you often get flak that's completely undeserved. But I can't help feeling this is an attempt to get Ryan to write an article that shows NVIDIA in a better light than the competition. Look at it from my perspective: a poster associated with NVIDIA is trying to get an article written that more or less says frame rates don't matter and NVIDIA's smoothness does, at a time when NVIDIA is producing lower frame rates at almost every price point. We all know how the internet works ("zomg! NVIDIA is smoother despite lower frame rates!"), and people will assume that applies to single cards, too.
Maybe I'm 100% wrong here, and it is an article I would read, but it seems odd to make this request when there really isn't much of a "war" over it.