
[VC]NVIDIA GeForce GTX 980, GTX 980 SLI, GTX 970, 3DMark performance

I am hearing $699 CAD for the 980. No idea about performance. My thought is the same as before: a little faster at 1080p, indistinguishable at 2560x, and slower at 4K. That cooler on the 980 is probably adding at least $50 to the price. Even with the negligible performance increase, packaging it up like a Titan will help sell it as a flagship.
 
Looks like Nvidia decided to keep the SLI bridges instead of implementing a PCIe-based access setup like AMD. I wonder if they didn't have enough time, or if they couldn't get it working properly.
What is better? A dedicated high-speed interface that's only used to transfer the final image, or a shared bus that's also used to feed the GPU with data?

The biggest benefit of not needing a bridge is that you don't need a bridge. :big grin:

Not saying it's broken, but let's face it: recent reviews at 4K show that AMD's new implementation works better for UHD. Nothing wrong with striving for improvements.
Fundamentally, there is no reason why a bridgeless configuration should perform better. There may be other reasons why GPUs without a bridge are doing better at it, though it may be very hard to prove one way or the other.
 
I was just going off of reviews and what HardOCP found in their review of the 295X2 vs 780 SLI.

The State of Smoothness and Frametimes

Before we dive into the questions above, there is one topic that has been burning on our minds since we've been testing a lot of SLI and CrossFire lately. We need to discuss where we are in terms of SLI and CrossFire smoothness and frametimes: that old issue of choppy or stuttering gameplay under SLI and CrossFire, which we have been addressing subjectively for many years now.

In the past it was commonplace to complain about stuttering or choppiness with any AMD CrossFire solution. Times have changed, and we actually find that the roles have now reversed, at least at 4K. AMD has introduced its frametime averaging technology, which is fully implemented on the AMD R9 290X. The R9 290X also introduced new CrossFire technology that does not require a bridge atop the video card and improves communication through the PCI-Express bus. These improvements have proven successful in reducing the awful stuttering AMD used to be known for with CrossFire.

In all of the gaming we have shown you today, in every single game, AMD CrossFire felt smoother to us than NVIDIA SLI. That's right, the tables have turned on this issue. In fact, we experienced many situations where there was choppiness or stuttering with the two ASUS STRIX cards in SLI. It was noticeable, and when we switched to AMD R9 290X CrossFire, it just felt smoother.

One example of this is in Crysis 3. When we ran 4X MSAA at 3840x2160 with "High" settings, we had a smooth experience with ASUS R9 290X CrossFire. However, with ASUS STRIX 780 6GB SLI we had a definitely stuttery, non-smooth experience, despite what the framerates read and despite having 6GB of VRAM. This is a case where the framerates looked playable, at times in the 40s of FPS, yet the actual gameplay felt choppy, as if it were under 30 FPS! This is exactly the type of phenomenon we used to experience with AMD CrossFire.

Another example of this happened in Far Cry 3, even running at just 2X MSAA with "Ultra" settings. Our experience was altogether smoother with AMD CrossFire. It was as if the numbers we were seeing in FRAPS weren't matching what we actually felt in game with SLI, and this is exactly the way it used to be with AMD CrossFire.

Yet another example is the actual frametime results we have from BF4. Since we are now running that game under Mantle on AMD cards, we have to use its built-in frametime recorder and derive the average framerate from that. This shows us the actual frametimes of the game. We often find that the AMD CrossFire frametimes are lower (better) than the NVIDIA SLI frametimes in this game. Granted, we are using Mantle on AMD cards, but we have seen that to be true.

Today we can confidently state that AMD CrossFire is going to give you a better gameplay experience than SLI at 4K.

source: http://www.hardocp.com/article/2014...r9_290x_4gb_overclocked_at_4k/11#.VBYrXEjbYy4
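As an aside, the frametime-to-framerate derivation the review describes is simple arithmetic. A quick sketch (the sample values below are made up for illustration, not from the review):

```python
# Deriving average framerate from a list of per-frame render times (ms),
# the way you would from BF4's built-in frametime recorder.
frametimes_ms = [16.7, 20.0, 25.0, 16.7, 21.6]  # hypothetical sample

avg_frametime = sum(frametimes_ms) / len(frametimes_ms)   # ~20.0 ms
avg_fps = 1000.0 / avg_frametime                          # ~50.0 FPS

# Two runs can share the same average FPS while one has far worse spikes,
# which is why frametime consistency matters more than the FPS counter.
print(f"{avg_fps:.1f} FPS")
```

This is also why the review's Crysis 3 anecdote ("40s of FPS yet feels under 30") is possible: the average hides the distribution of individual frametimes.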
 
Given the memory bus width, it's hard to imagine that GTX 980 is going to match or beat the existing GK110 cards at 4K. There are enough improvements in Maxwell's memory controller that it will probably hold its own in 1440p and 1600p, but for 4K and probably triple-monitor surround, Nvidia would have had to pull a real rabbit out of their hat for it to keep up, given the massive bandwidth requirements for pushing that many pixels.

Given that, I'm still not buying the $599+ claims. Hardcore gamers who spend that much on video cards are the kind of people who are likely to have multiple monitors and/or 4K. And they tend not to care all that much about power efficiency, at least if it means a sacrifice in performance. Unless GTX 980 can beat GTX 780 Ti in all resolutions - including 4K and 5760x1080 - then it is not going to be able to sell well above $499. Not when cheaper Radeon cards already have an edge at ultra-HD resolutions.

I still think that the GTX 960 is the card to look out for. Given Maxwell's efficiency and low cost of production, it could beat R9 280X in performance and absolutely crush it in terms of power consumption, at the same price point ($299).
 
What is better? A dedicated high-speed interface that's only used to transfer the final image, or a shared bus that's also used to feed the GPU with data?

The biggest benefit of not needing a bridge is that you don't need a bridge. :big grin:


Fundamentally, there is no reason why a bridgeless configuration should perform better. There may be other reasons why GPUs without a bridge are doing better at it, though it may be very hard to prove one way or the other.

So, which setup offers greater overall bandwidth? I'm assuming you have the specs.
 
Nvidia invested a lot of R&D in their special bus for Pascal (?), so I can see why they'd ride with the current SLI system for a bit longer. I think greater-than-1080p performance will be the most interesting thing for reviewers to examine with big(ger) Maxwell.
 
So, which setup offers greater overall bandwidth? I'm assuming you have the specs.
If the HW bridge of a slave SLI board feeds straight into the display controller of the master SLI board, the only BW required is what's needed to send over an image. This could be a simple back-pressured operation: the slave GPU indicates that data is available, and the master GPU indicates that a pixel should be popped from the interface. It doesn't have to be more complicated than that.
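That handshake could be sketched as a toy model (purely illustrative; the class and method names are hypothetical and don't reflect any actual SLI hardware):

```python
from collections import deque

# Toy model of a back-pressured pixel transfer: the slave signals "data
# available", the master pops pixels only as its display controller needs them.
class SlaveGPU:
    def __init__(self, frame):
        self.fifo = deque(frame)       # rendered pixels waiting to transfer

    def data_available(self):          # the "valid" side of the handshake
        return bool(self.fifo)

    def pop_pixel(self):               # master asserting "ready" pops one pixel
        return self.fifo.popleft()

class MasterGPU:
    def __init__(self):
        self.scanout = []

    def pull_frame(self, slave):
        # Back-pressure: nothing moves unless the master asks for it.
        while slave.data_available():
            self.scanout.append(slave.pop_pixel())

slave = SlaveGPU(frame=[0xFF0000, 0x00FF00, 0x0000FF])
master = MasterGPU()
master.pull_frame(slave)
print(len(master.scanout))  # → 3
```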

In the case of, say, 120Hz at 1080p at 24bpp, that'd be ~6Gbps.
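That figure checks out as back-of-the-envelope arithmetic (using only the numbers quoted above):

```python
# Bandwidth needed to ship a final rendered image between GPUs:
# 1080p frames at 24 bits per pixel, 120 frames per second.
width, height = 1920, 1080
bits_per_pixel = 24
refresh_hz = 120

bits_per_frame = width * height * bits_per_pixel   # 49,766,400 bits
gbps = bits_per_frame * refresh_hz / 1e9           # ≈ 5.97 Gbps

print(f"{gbps:.2f} Gbps")  # → 5.97 Gbps
```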

This kind of BW is also available on any decent PCIe interface.

The BW required should never be the problem in either solution, and it cannot explain the fact that current Radeon cards are better at multi-GPU than GeForce.

It is more plausible that some other factor is making CF better today.
 
AMD had bandwidth issues with CrossFire over the CrossFire bridges. Maybe SLI doesn't need as much?

As far as one card being a slave only: considering you can run monitors from both cards in SLI, I'm not sure it works the way you are saying.
 
AMD had bandwidth issues with CrossFire over the CrossFire bridges. Maybe SLI doesn't need as much?
There must be some kind of limit for SLI, of course. I have no idea what it is.

As far as one card being a slave only: considering you can run monitors from both cards in SLI, I'm not sure it works the way you are saying.
Maybe there are two interfaces, one back, one forth. Who knows...
 
I would agree with that review. 290s in CF seem to be smoother at higher resolutions. I had 780s before, and in more demanding titles like Crysis 3, where my FPS can still drop to around 30-ish, it feels less choppy.

Would you say the same applies to lower resolutions say 1080p but with AA maxed out?
 
Would you say the same applies to lower resolutions say 1080p but with AA maxed out?

Would depend on the AA method used. If you used SSAA, for instance, then yes, because all it does is multiply the internal resolution.

With more commonly used forms of AA such as MSAA and FXAA, I'm not sure. At 1080p, even with 8x MSAA, I'd say it would be hard to get an R9 290 CF or GTX 780 SLI setup to drop into the 30s, so testing for this would be difficult.
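For what it's worth, the reason SSAA acts as a straight resolution multiplier can be sketched with simple arithmetic (illustrative numbers only):

```python
# 4x SSAA renders the scene at 2x width and 2x height, then downsamples,
# so the GPU does the pixel work of the larger internal resolution.
out_w, out_h = 1920, 1080
internal_w, internal_h = out_w * 2, out_h * 2   # 4x SSAA

internal_pixels = internal_w * internal_h
output_pixels = out_w * out_h

print(internal_w, internal_h)              # → 3840 2160 (effectively 4K)
print(internal_pixels // output_pixels)    # → 4 (4x the pixel work)
```

Which is why 1080p with 4x SSAA stresses a multi-GPU setup much like native 4K does.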
 
I like what I'm seeing with the 2x6 pin connectors. The die size looks smaller too.

So we know it uses less power than a 780/Titan; the only question now is performance.
 
I'll most likely bite the bullet if it's at least 15% faster than a 780 Ti at 4K/Eyefinity resolutions for around $500, with multi-monitor support at least as good as AMD's. It's a tall order, but if not, it shouldn't be too much more of a wait for 20nm, and my 7970 is still holding up just fine.

Chances are I'll wait for AMD's 20nm chip because of a mixture of Nvidia price gouging (at least, more so than AMD does), not wanting to risk potentially even worse multi-monitor support (I'm not satisfied at all with Eyefinity, but better the devil you know), and TrueAudio. I don't care about Mantle or PhysX, and G-Sync vs. FreeSync seems like a wash, but true binaural 3D sound is something worth spending money on, especially for a game like Star Citizen where you absolutely need the ability to distinguish vertical sound sources.
 
Don't know if correct or not:
[attached image: price.jpg]

http://wccftech.com/nvidia-geforce-...-features-nvttm-cooler-backplate-updated-pcb/

Highly likely performance isn't as good as the GTX 780 Ti, though.
 
I like what I'm seeing with the 2x6 pin connectors. The die size looks smaller too.

So we know it uses less power than a 780/Titan; the only question now is performance.

It's between GK104 and GK110 in die size, right smack in the middle.

It's quite a big mid-range chip in that sense.
 
150W for the GTX 970 is what I expected, and it looks like a good card to buy.

180W for the GTX 980 is rather amazing if true. I had expected a bit more.
 
Who cares about power consumption if it's slow (even if it's not, in the "top performance" card)?

I'd take a 500W card if the performance is there. Probably even more; power is the last thing I look at, and I only care if it's extreme.

If you guys care so much about power consumption, why didn't you have an AMD card when the Fermi furnaces were around?

I hope this isn't another neutered NV card without any voltage control and with an artificial voltage limit, to make you pony up for the "980 Ti" or "1080p Ti" when it comes, by not allowing you to OC too much.
 
The market cared when Nvidia offered their first iterations of Fermi: they continued to lose share to AMD!

Performance/watt may not be the most important metric to everyone, but it's still important, imho!
 
The market cared when Nvidia offered their first iterations of Fermi: they continued to lose share to AMD!

Performance/watt may not be the most important metric to everyone, but it's still important, imho!

It's hypocritically funny that when Fermi 1.0 came out, it was blasted nonstop for its inferior perf/W and temperatures. People lambasted how a GTX 480 would heat an entire room after an hour or two of gaming. The complaints continued until the GTX 580 upped the performance, dropped the thermals, and slightly lowered power consumption.

Absolutely none of those same people complain about the exhaust temps or power consumption of the R9 290X and R9 290. People will forever spin and justify a particular brand or vendor no matter the situation.
 
I hope this isn't another neutered NV card without any voltage control and with an artificial voltage limit, to make you pony up for the "980 Ti" or "1080p Ti" when it comes, by not allowing you to OC too much.


It's very likely that Nvidia will continue with their Green Light program. If these cards are anything like the 750 Ti, I would expect them to hit 1300MHz or so with a small voltage adjustment. Keep in mind, with people like Skyn3t around, getting past the power and voltage limits implemented by Nvidia shouldn't take long. I speculate that 1400-1500MHz on custom air should be very doable on these cards...
 
It's hypocritically funny that when Fermi 1.0 came out, it was blasted nonstop for its inferior perf/W and temperatures. People lambasted how a GTX 480 would heat an entire room after an hour or two of gaming. The complaints continued until the GTX 580 upped the performance, dropped the thermals, and slightly lowered power consumption.

Absolutely none of those same people complain about the exhaust temps or power consumption of the R9 290X and R9 290. People will forever spin and justify a particular brand or vendor no matter the situation.

And on the other side of the coin, those that used the 470/480 sit here and pretend that a few watts suddenly mean the difference between buying a cheaper, faster card, now that their spons..., I mean brand, is slightly more efficient. :thumbsdown:

It doesn't matter the issue, the goalpost shifting sucks.

I care about performance and price, followed by the other criteria (noise etc.). Everyone has an opinion; it's just annoying when they keep "changing" theirs to follow the latest trend (and the nonstop attempts to hype it).

Anyways, hopefully the 980 has the performance we crave and a reasonable price. I'm on the fence about whether it might have some (compression) magic and be another 30% boost over the 780 Ti; on the other hand, it could be slower, especially at high resolution, since there are too many pixels to compress.
 
It's hypocritically funny that when Fermi 1.0 came out, it was blasted nonstop for its inferior perf/W and temperatures. People lambasted how a GTX 480 would heat an entire room after an hour or two of gaming. The complaints continued until the GTX 580 upped the performance, dropped the thermals, and slightly lowered power consumption.

Absolutely none of those same people complain about the exhaust temps or power consumption of the R9 290X and R9 290. People will forever spin and justify a particular brand or vendor no matter the situation.

And don't forget these fun videos from AMD. :awe:
 