
The future of multi GPU

Lastly - XDMA CF is near single-GPU fluidity/quality. Night and day compared to SLI and pre-XDMA CF.

Definitely agree with this, and it was a big surprise for me as I went from 7970 XF (terrible stuttering; effective frame rate had to be 25% or worse) > Titan SLI (much better than 7970 XF, but still not as smooth as a single card) > 290X XF (had to turn on GPU% overlays to make sure it wasn't running in single-card mode).
 
XDMA is boss. Nvidia needs to catch up on that real quick.

I thought Nvidia did with the 980s, as SLI looks single-card smooth (no SLI or pre-XDMA XF microstutter). However, I'm using G-Sync. I just did a test and turned G-Sync off (V-Sync also off), and sure enough my 100 FPS in Grid Autosport feels like 60 FPS or worse (just not as smooth as a single card would be). I'd have assumed V-Sync off would just add tearing, but it looks like G-Sync adds some frame-pacing help to SLI. I don't have 290X XF anymore, but I think XF XDMA = SLI with G-Sync, whereas XF XDMA >> SLI without G-Sync.
 
Which leads me to my next question. At present I have GTX 970 SLI, but with the VRAM, ROP and L2 cache fiasco I am debating returning them for a refund or credit if I'm able, and putting that money towards the fastest single-GPU card I can get to handle 1440p at max quality and 60 FPS in the latest titles.

As for the second question:

Unless you are planning on adding another 970 in the future and/or increasing your resolution beyond 2560x1440, chances are this spec change will not affect you.
 
I would definitely consider AMD at this point, but not for CrossFire. AMD's power draw and DX11 driver overhead are just too high, and CrossFire amplifies both.

Power consumption is a junk argument. A 7-17% difference is completely unnoticeable unless you have your machine hooked up to a UPS... If you want to save 30 watts, change a single light bulb to an LED for $20.

[Chart: Power_04.png - power consumption comparison]
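For scale, the dollar figure behind that 30-watt claim is easy to work out. A minimal sketch, where the 30 W delta comes from the post above but the hours per day and electricity price are illustrative assumptions, not figures from the thread:

```python
# Rough annual cost of a 30 W power-draw difference between two GPUs.
# hours_per_day and price_per_kwh are assumed values for illustration.
delta_watts = 30
hours_per_day = 4
price_per_kwh = 0.12  # USD per kilowatt-hour (assumed)

# Energy used per year by the extra 30 W, in kilowatt-hours.
kwh_per_year = delta_watts / 1000 * hours_per_day * 365
cost_per_year = kwh_per_year * price_per_kwh
print(f"{kwh_per_year:.1f} kWh/year ~ ${cost_per_year:.2f}/year")
```

At those assumed numbers the delta works out to roughly 44 kWh, or about five dollars a year, which is the scale of difference being argued over.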
 
Power consumption is a junk argument. A 7-17% difference is completely unnoticeable unless you have your machine hooked up to a UPS... If you want to save 30 watts, change a single light bulb to an LED for $20.

1) I'm on a UPS.

2) Tomb Raider is hardly GPU-intensive. You may as well have posted the BioShock Infinite power-draw graph instead. When you really stress the components, the gulf widens:

[Images: power-draw comparison charts under load]
 
As for the second question:

Unless you are planning on adding another 970 in the future and/or increasing your resolution beyond 2560x1440, chances are this spec change will not affect you.

Yeah, it doesn't really affect me. To be honest, my G1 GTX 970s have performed superbly, and if anything, performance has been increasing, not decreasing.

I don't care about 4K performance, as I won't be moving to 4K until Nvidia has their die shrink.

I think I will keep them for now, but as soon as I see the performance tests for GM200 and R9 300 parts, I'll make the decision to sell them and go single card again.

GM200 and R9 300 should both be fast enough to handle 1440p at very high settings, unlike the GM204 and R9 200 cards.
 
The arrival of VR (hopefully soon) should provide a huge boost for multi-GPU setups.

With the way the Oculus Rift renders frames (two half-frames that are then stitched together), it should be possible to get almost perfect scaling and zero stutter (at least no more stutter than a single GPU), while also having perfect compatibility (assuming the game supports VR in the first place, of course).
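The idea above can be sketched in a few lines: one GPU per eye, rendered in parallel, then stitched side by side. This is a toy model only; `render_eye` and the frame layout are hypothetical stand-ins, and a real VR runtime exposes this through its own SDK rather than anything like this code:

```python
# Sketch of per-eye multi-GPU rendering for VR: each "GPU" renders one
# eye's half-frame in parallel, and the halves are stitched side by side.
from concurrent.futures import ThreadPoolExecutor

def render_eye(gpu_id, eye, width=1080, height=1200):
    # Stand-in for a real render call; returns a dummy "framebuffer"
    # where every pixel records which GPU/eye produced it.
    return [[(gpu_id, eye)] * width for _ in range(height)]

def render_vr_frame():
    # Two workers stand in for two GPUs working simultaneously.
    with ThreadPoolExecutor(max_workers=2) as pool:
        left = pool.submit(render_eye, 0, "left")
        right = pool.submit(render_eye, 1, "right")
        # Stitch the two half-frames into one side-by-side frame.
        return [l + r for l, r in zip(left.result(), right.result())]

frame = render_vr_frame()
print(len(frame), len(frame[0]))  # 1200 rows, 2160 columns
```

Because each eye is a fully independent view, neither GPU ever waits on the other's frame, which is why scaling can be near-perfect compared to AFR.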
 
DSR definitely works, I use it. MFAA is not an option on SLI though.

My bad - DSR does work on SLI, but it does not work with SLI + a G-Sync monitor for some reason. With a G-Sync monitor connected, I never get the DSR option in the control panel if SLI is enabled.
 
Multi-GPU has been a dud for years. It's only relevant in the niche circle where the best cards are combined to run 4K resolutions that single cards can't manage. As far as I'm concerned, both vendors just keep up appearances with the tech so neither can get an upper hand.

But an even bigger dud is 3D. I remember the claims that Nvidia's glasses were going to transform the world, yet now we don't hear a peep out of either vendor about it. Even new games have multiple rendering issues with it.
 
I would far prefer SFR rendering, and then throw in FreeSync/G-Sync and that would be perfect. Unfortunately only Civ: BE uses it; hopefully Firaxis will use it in all their games going forward.

http://www.pcper.com/reviews/Graphi...mance-Maxwell-vs-Hawaii-DX11-vs-Mantle/2560x1

I had no idea Civ BE used SFR with Mantle. Looks like they got some nice results with it as well :thumbsup: Not as high a frame rate as AFR, but very nice frame times.

Perhaps SFR can be used with Unreal Engine 4 after all.
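The frame-rate-versus-frame-time trade-off in those results comes straight from how the two modes divide work. A toy model, with made-up millisecond numbers purely for illustration (not actual driver behaviour):

```python
# Toy model of why SFR can give steadier frame times than AFR.
# In AFR, whole frames alternate between GPUs, so any per-GPU timing
# difference shows up as alternating frame times (microstutter).
# In SFR, both GPUs contribute to every frame, so each frame costs
# roughly the slower GPU's half-frame time.
gpu_frame_ms = [14.0, 18.0]  # hypothetical per-GPU full-frame render times

def afr_frame_times(n_frames):
    # Frame i is rendered entirely by GPU (i % 2).
    return [gpu_frame_ms[i % 2] for i in range(n_frames)]

def sfr_frame_times(n_frames):
    # Each GPU renders half the frame; the frame completes when the
    # slower half finishes, so every frame costs max(half-frame times).
    return [max(t / 2 for t in gpu_frame_ms)] * n_frames

print(afr_frame_times(4))  # [14.0, 18.0, 14.0, 18.0] -> visible stutter
print(sfr_frame_times(4))  # [9.0, 9.0, 9.0, 9.0] -> steady pacing
```

AFR's average frame time here (16 ms) beats SFR's (9 ms per frame only because work is split), but AFR's alternating 14/18 ms pattern is exactly the microstutter the frame-time graphs pick up, while SFR delivers every frame on an even cadence.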
 
But an even bigger dud is 3D. I remember the claims that Nvidia's glasses were going to transform the world, yet now we don't hear a peep out of either vendor about it. Even new games have multiple rendering issues with it.

3D is likely going to take a backseat now that VR is the next big thing for PC gaming I'd wager.
 
I think that multi-GPU for AMD cards will be better due to the XDMA feature. Bear in mind that only the R9 290s (including the 295X2) have this feature, and so far it gives the best scaling for multi-GPU.

I could imagine that AMD cards will utilize more bandwidth on the PCIe lanes with upcoming cards, and PCIe 4.0 is already in development and will be introduced before long.

Of course the hardware is there. It's just that game developers need to step up to make use of such powerful hardware.
 
I think that multi-GPU for AMD cards will be better due to the XDMA feature. Bear in mind that only the R9 290s (including the 295X2) have this feature, and so far it gives the best scaling for multi-GPU.

I could imagine that AMD cards will utilize more bandwidth on the PCIe lanes with upcoming cards, and PCIe 4.0 is already in development and will be introduced before long.

Of course the hardware is there. It's just that game developers need to step up to make use of such powerful hardware.

Tonga too (R9 285).
 
I was an SLI/CF fiend for years until I started to wonder if the added latency was actually worth the frame-rate boost. For me, I decided it was not, not to mention the occasional hassle of unsupported games or junk support. Now, if I were gaming at 4K or something, the extra cards would be a no-brainer.
 
My bad - DSR does work on SLI, but it does not work with SLI + a G-Sync monitor for some reason. With a G-Sync monitor connected, I never get the DSR option in the control panel if SLI is enabled.

Well, that sucks. If I were going for G-Sync I'd go 1440p or higher anyway, personally. Still, I find some games (older titles) play well at 4K.

I was an SLI/CF fiend for years until I started to wonder if the added latency was actually worth the frame-rate boost. For me, I decided it was not, not to mention the occasional hassle of unsupported games or junk support. Now, if I were gaming at 4K or something, the extra cards would be a no-brainer.

I still find single cards to be not enough for me at 1440p. I suppose everyone has a different expectation of performance.
 
I've been telling myself "I'll go SLI/CF" since ~2004. I was going to go with 6600GT SLI, never bothered... X1900XT CF, didn't do that either.

It's just never been worth it when a single next-gen card typically beats or comes close to a dual-GPU last-gen setup. Never mind the issues/bugs and general dicking about that apparently come with dual GPU.
 
1) I'm on a UPS.

2) Tomb Raider is hardly GPU-intensive. You may as well have posted the BioShock Infinite power-draw graph instead. When you really stress the components, the gulf widens:

[Images: power-draw comparison charts under load]

Literally nowhere but [H] shows power-consumption deltas that big. Are they still using stock 290s running at low fan speeds with high stock voltages, hitting 90°C all the time? You can't even buy stock 290(X)s new anymore... Aftermarket ones run lower voltage, lower temperature, lower noise, lower power consumption...
 