What the hell happened in this thread? Haha.
Also, for the record, Hokies stated he works for Galaxy, which is an nVidia-only partner, so marketing thread? Sure, why not.
Why do people even get into it? After spending a month in the P&N section (goddamn election year) I've come to almost hate all of you flag waving jack-offs. Sure, I can ignore you all, but then this place would be empty since I'd have to ignore myself at some point (LOL.)
But on the topic, are there any WoW benches? I saw Blackened (AMD shill, remember guys?) posting about performance improvements for his 680s. Since WoW is on my plate, I only care about that (right now.)
KTHXBAI!
hardocp said: We've beaten this topic into the ground in the past, but it has been a while since we brought it up. There is a smoothness to SLI we just can't put in words. We know for a fact that NVIDIA uses an algorithm that smoothes "frametime" in SLI. We don't know what it's called, or even how it works, but we know it exists, and we know NVIDIA employs some special sauce when it comes to SLI. It is something that can only be felt, as you play a game, it is not something that shows up in a framerate over time graph. So what you see is AMD CFX winning in framerate, but not winning in frametime or overall game smoothness.
nVidia said: Improved Frame Rate Metering
Kepler introduces hardware based frame rate metering, a technology that helps to minimize stuttering. In SLI mode, two GPUs share the workload by operating on successive frames; one GPU works on the current frame while the other GPU works on the next frame. But because the workload of each frame is different, the two GPUs will complete their frames at different times. Sending the frames to the monitor at varying intervals can result in perceived stuttering.
The GeForce GTX 690 features a metering mechanism (similar to a traffic meter for a freeway entrance) to regulate the flow of frames. By monitoring and smoothing out any disparities in how frames are issued to the monitor, frame rates feel smoother and more consistent.
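For what it's worth, the traffic-meter idea in the quote above can be sketched in a few lines. This is purely an illustrative pacing policy I made up, not NVIDIA's actual (undisclosed) algorithm, and all the numbers are hypothetical: each finished frame is held until a fixed cadence slot comes around.

```python
# Hypothetical sketch of frame metering (frame pacing): in AFR SLI the two
# GPUs finish frames at uneven intervals, so the driver delays presentation
# until the next cadence slot. Not NVIDIA's real algorithm; all numbers
# below are illustrative.

def meter_frames(completion_times, target_interval):
    """Delay each frame so successive presents are at least
    target_interval apart (a simple pacing policy)."""
    presented = []
    next_slot = completion_times[0]
    for t in completion_times:
        present = max(t, next_slot)   # hold the frame if it finished early
        presented.append(present)
        next_slot = present + target_interval
    return presented

# Two AFR GPUs finishing frames at uneven times (ms): big gaps alternating
# with tiny ones -- the classic multi-GPU micro-stutter pattern.
raw = [0.0, 3.0, 16.0, 19.0, 32.0, 35.0]
paced = meter_frames(raw, target_interval=8.0)
gaps = [round(b - a, 1) for a, b in zip(paced, paced[1:])]
print(paced)  # every present pushed onto an even ~8 ms cadence
print(gaps)   # [8.0, 8.0, 8.0, 8.0, 8.0]
```

Note that the frame finished at 3 ms gets held until 8 ms, which is exactly the latency cost people complain about below: you buy even pacing by sitting on frames that arrived early.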
There's no such thing as a free lunch. Metering frames, essentially holding each frame back for a short time so they reach the display at even intervals, increases latency. Until AMD and Nvidia come up with something better than the current band-aid solutions for multi-GPU rendering, I'm steering clear of both of them.
This is kind of annoying. AMD releases their drivers claiming improvement, and we get reviews out the wazoo. Not one site that I can find has reviewed or confirmed the improvements for nVidia. I've only seen one (H) that used the new GeForce 310.33 drivers.
The main reason why AMD pushed so hard to have the drivers reviewed is simply that their drivers completely changed the current GPU landscape. Nvidia's new drivers are nowhere near as game-changing as the 12.11s are.
At 1920x1200 we're seeing a roughly 5% across-the-board performance improvement for both the 7970 and the 7950. Everything except Starcraft II sees at least a marginal improvement here, with Starcraft II being the lone exception due to the previous issues we've run into with the 1.5 patch. The 7770 also sees some gains here but they aren't quite as great as with AMD's other cards; the average gain is just 4% at 1680x1050, with gains in individual games being shallower on the 7770 than they are on other cards.
Interestingly even on the 7970 the largest gains are at 1920x1200 and not 2560x1600.
5% ? You are setting up drama that does not exist.
Nvidia's frame metering really is excellent. I have used Crossfire for years, and at the beginning of 2012 I got a pair of 7970s, only to find the stutter unbearable. Eventually I changed to 680s and the improvement was obvious.
I thought the frame metering was only part of the GTX 690? Unless I misunderstood what I read (likely.)
AMD is a joke with multi-cards right now. Glad I only use one!
5% ? You are setting up drama that does not exist.
Drama that doesn't exist? The 12.11s completely changed the performance landscape. Also, that 5% is from one reviewer with a specific set of benchmarks and IQ selections. Other reviewers saw 7-10% across the board depending on the specific AMD GPU. It may not seem like much, but when it puts you in a position to be faster than the competition at a specific price point, it is a huge deal.
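To put a toy number on the price-point argument: a card trailing slightly in raw FPS can flip both the outright and the perf-per-dollar comparison after a single-digit driver uplift. Every figure below is hypothetical, chosen only to illustrate the arithmetic, not taken from any review.

```python
# Toy illustration (all prices and FPS figures hypothetical) of why a
# "small" across-the-board driver gain matters at a fixed price point.

def perf_per_dollar(fps, price):
    return fps / price

card_a_fps, card_a_price = 58.0, 430.0   # hypothetical card, pre-driver
card_b_fps, card_b_price = 60.0, 450.0   # hypothetical competitor

card_a_fps *= 1.07  # apply a 7% across-the-board driver improvement

# Card A was slower; after the uplift it is faster outright AND wins
# on performance per dollar at its lower price.
print(card_a_fps > card_b_fps)
print(perf_per_dollar(card_a_fps, card_a_price) >
      perf_per_dollar(card_b_fps, card_b_price))
```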
Nvidia needs to lower prices to make their GPUs attractive again. I'm of the opinion that they needed to lower their prices on the GTX 680 even before these drivers, but that's mainly because I think paying a premium for 1-2 PhysX titles a year is ridiculous.
Nvidia GPUs are attractive enough without them having to lower prices. A driver update or two down the road and things will be back where they were, however small a difference that might be.
PhysXInfo.com: Over the last few years, the number of GPU PhysX games has actually been decreasing. There were five games in 2009, three in 2010, and so far only one in 2011. How can you explain that?
Tony Tamasi: It was a choice on our part. We had a large amount of resources we could otherwise dedicate to content, but we needed to advance the core technology. We needed to get PhysX 3 done, and we needed to get APEX done to the degree where it is usable by game developers. We had to put a lot of resources there, which meant that some of those resources weren’t directly working on games.
But in the long term, game developers can actually use PhysX and APEX, and make use of the GPU without significant amounts of effort, so that a year or two years from now more games will come out using GPU physics.
Rev Lebaredian: When we initially acquired Ageia, we made a big effort to move many games over to GPU PhysX. We learned a lot in that period of time: getting GPU physics into games, what are the problems, what works and what doesn’t. That gave us the opportunity to regroup, refocus, and figure out how to do it correctly.
We made a conscious decision. After we did a bunch of PhysX and APEX games in 2009 and early 2010, we said “Ok, we have learned enough, we need to sit down and focus on finishing APEX and changing it based on what we just learned, as well as PhysX 3”. Doing as many titles as we were doing before was just going to slow us down.
It made more sense to slow down the content pipeline and get the tools right, but that puts us in the position where, once those are complete, it is actually less work for us to get PhysX into games.
This slowdown has not been because of any problems. It is something that we have decided to do.
As an advocate of GPU physics, I have been disappointed by the number of titles and was hoping for 6-12 AAA titles a year. I was always curious what nVidia's view was on the lack of GPU physics content.
(...)
I have been hoping for a long time that physics would get more involved in gameplay instead of just being glorified bling.
But sadly, I usually get disappointed: when I see a title that's been focusing on GPU physics, the physics just becomes distracting and makes the game look unrealistic, which is the opposite of what I want physics to do in a game.
I think Mirror's Edge did a better job with PhysX than the new Batman games, for example, but it's in driving games that I like physics advancements best. Mostly you don't see it, but you feel it, and that is what I want: improved immersion and improved gameplay.
So let's hope (at least I do) that PhysX manages to push immersion and gameplay forward too. And before you beat me down, I do get that PhysX improves the environment in games; it's just that they mostly overdo it, so it becomes unrealistic and distracting.