
[ Guru 3d ] AMD Teases PCs with Radeon Fury X2 Dual FIJI GPUs


Stuka87

Diamond Member
Dec 10, 2010
5,481
1,269
136
I think you may be getting ahead of yourself.

Palmer Luckey uses a 970 on his system so he experiences what the majority of Oculus users will experience. I expect any medium-to-large VR game in the next year or two to be able to run on a 970 without dropping below 90 fps.

There may be some graphical settings you can bump up with a more powerful card, but dual GPU seems like complete overkill for at least the first few years of VR. Most won't even be using DX12, let alone LiquidVR.

Here's something I just found on AMD's VR page while trying to learn more..

"For in-depth technical details on LiquidVR, go to http://developer.amd.com/liquid-vr/ "
The VR games that can run on a single GPU without frame drops look like they came out in 2010. Single GPUs have trouble playing newer games maxed out at 1080p, much less on dual 1080p displays (which is basically what VR is; resolutions vary a bit between models).
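For scale, here's a rough pixel-throughput comparison, assuming the commonly cited first-gen panel spec of 1080x1200 per eye at 90Hz (the figures vary by headset, so treat this as a ballpark illustration):

```python
# Rough pixel-throughput comparison: a 1080p/60 monitor vs. an
# assumed first-gen VR headset (1080x1200 per eye at 90 Hz).
def pixels_per_second(width, height, refresh_hz):
    return width * height * refresh_hz

monitor = pixels_per_second(1920, 1080, 60)      # ~124M pixels/s
headset = 2 * pixels_per_second(1080, 1200, 90)  # ~233M pixels/s

print(f"VR needs roughly {headset / monitor:.1f}x the pixel rate of 1080p60")
```

And that's before accounting for the supersampled render targets headsets typically use, which push the real workload even higher.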
 

PPB

Golden Member
Jul 5, 2013
1,118
168
106
Guys, power in small form factors is not an issue. There are 1500W Platinum Flex-ATX PSUs out there, ready to have their interface modded for use in consumer environments. Any server PSU that does only 12V but can use a Pico PSU to convert the other voltages will do just fine. No need to go for the overpriced consumer SFF PSU solutions.

This applies to boutique builders at least, who can get an ODM offer and then the modded interface (or go for an OEM solution like the Gigampz ones), since they can order in high enough quantities to keep the BoM in check.
 

kami

Lifer
Oct 9, 1999
17,627
4
81
Dual GPUs are required for VR. No single card on the market can come anywhere close to running a VR setup well.

And AMD's current setup is really quite good. Easily the best out there.
Dual GPU isn't even supported yet according to Oculus because of latency issues: https://www.reddit.com/r/oculus/comments/3zjets/how_well_does_sli_play_with_oculus_many_sli/cymm7r7

They are working on a solution though.

Most VR games in development today for gen 1 are built around a 970/290. A better card will get you higher graphical settings. If you build your VR game to require $1k+ in video cards you won't be selling your game to anyone... VR is already a limited market and no one is going to limit themselves even more.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
639
126
Dual GPU isn't even supported yet according to Oculus because of latency issues: https://www.reddit.com/r/oculus/comments/3zjets/how_well_does_sli_play_with_oculus_many_sli/cymm7r7

They are working on a solution though.

Most VR games in development today for gen 1 are built around a 970/290. A better card will get you higher graphical settings. If you build your VR game to require $1k+ in video cards you won't be selling your game to anyone... VR is already a limited market and no one is going to limit themselves even more.
SLI is not the same as dual GPU. He says SLI isn't working with it. That does not at all mean that dual AMD GPU under LiquidVR isn't working...
 

pj-

Senior member
May 5, 2015
421
162
116
SLI is not the same as dual GPU. He says SLI isn't working with it. That does not at all mean that dual AMD GPU under LiquidVR isn't working...
The LiquidVR sdk was made available to the public 20 days ago. VR games are going to be a small niche of gaming. Games that properly support dual GPU through liquidvr or anything else will be a niche within that niche.

It would be nuts to me to buy a $1000 dual GPU card now in anticipation of a handful of graphically intense VR games 2+ years out. Especially when both AMD and nvidia are releasing new GPUs on a new process this year.
 
Last edited:

Headfoot

Diamond Member
Feb 28, 2008
4,444
639
126
VR is niche. I doubt that the games that bother going VR are going to ignore the VR APIs the GPU vendors have made for them if they're going to go through the trouble of making the game work for VR in the first place... I bet that the Oculus and Valve VR APIs will have vendor API support baked in by the time they launch, and if not at launch, at least within the first 6-9 months post-release.

My guess is that SteamVR will quickly incorporate LiquidVR and maybe GameWorks VR too, and lots of games will use SteamVR. Similar story for Oculus, though I'm not as certain on their timeline or adoption by game devs.
 
Last edited:

IEC

Elite Member
Super Moderator
Jun 10, 2004
14,015
3,815
136
It was no secret that this was sooner or later coming out. However, the real BUZZ from AMD seems centered upon the Polaris chip. Dual Fiji will probably be a niche chip.
Dual anything is a niche. Those of us running any sort of Crossfire/SLI/multicard config are in the tiny minority of users (and thus have to suffer through periods of poor support).
 
Feb 19, 2009
10,457
5
76
http://www.overclock3d.net/articles/gpu_displays/playstation_vr_to_cost_299/1

Sony has already deemed that a stable 60fps is the minimum required for VR gaming, but is pushing for 90fps to be the standard, as even a stable 60fps can cause users discomfort in VR.

Sony says that "going fast is not optional" in VR: a higher framerate and display refresh rate will counteract motion sickness and other side effects of VR use. In VR you will notice the disconnect between your movement in real life and your movement in-game, so lowering the delay between action and visual response is crucial, making a higher framerate a vital step when moving to VR.

These findings are very similar to those of Oculus and HTC/Valve, since both the Oculus Rift and the HTC Vive use displays which run at 90Hz, though with such a high refresh rate it can be argued that the PS4 will likely be unable to run many visually demanding games on VR headsets, especially given how few 1080p 60FPS games are currently available.
I bolded the relevant part: why high frame rates are so important. It relates to motion-to-photon latency, i.e. how fast does your visual field keep up with your head motion? If there's 20ms+ of latency, more than 30 minutes or so in VR will make many people want to puke.

The problem is that if a frame takes 16ms to render (60 fps), that alone eats up most of the 20ms latency budget. At 90 fps, frame time is still about 11ms.

It's a problem Valve's VR team has talked about a lot as well as every VR developer.

This is why LiquidVR and the capability to run compute and rendering in parallel are important, so that there's no bottleneck waiting for graphics rendering to complete before starting the next (positional update) frame.



The PS4 should be really good for the job as long as the game is not graphically demanding. If it sustains 90 fps, the 8 ACEs in the PS4 can handle async timewarp compute to reduce motion-to-photon latency, down to 10ms according to LiquidVR devs.
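The budget arithmetic above can be sketched directly. This is only illustrative, since real pipelines overlap tracking, rendering, and scanout rather than running them strictly in sequence:

```python
# Back-of-envelope motion-to-photon budget check: how much of a
# 20 ms target is left after raw render time, at 60 vs 90 fps.
LATENCY_BUDGET_MS = 20.0

def frame_time_ms(fps):
    return 1000.0 / fps

for fps in (60, 90):
    ft = frame_time_ms(fps)
    print(f"{fps} fps: {ft:.1f} ms frame time, "
          f"{LATENCY_BUDGET_MS - ft:.1f} ms of budget left")
```

At 60 fps only ~3ms remain for tracking, compositing, and scanout, which is why async timewarp (repositioning the rendered frame with the freshest head pose) is so valuable.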
 
Last edited:

pj-

Senior member
May 5, 2015
421
162
116
VR is niche. I doubt that the games that bother going VR are going to ignore the VR APIs the GPU providers have made for them if they're going to go through the trouble of making the game work for VR in the first place... I bet that the Oculus and Valve VR APIs will already have vendor API support baked in by the time it launches, and if not by the time it launches, at least within the first 6-9 months post release.

My guess is that SteamVR will quickly incorporate LiquidVR and maybe GameWorksVR too and lots of games will use SteamVR. Similar story for Oculus though not as certain on their timeline or adoption by game devs
I think you may be underestimating the complexity of implementing the dual-GPU aspect of LiquidVR in an actual game. I'm no expert, but it seems to me it needs to be implemented at a very low level of the engine, and keeping it efficient will require effort and thought at all levels. It's not just a checkbox you enable in some menu like HairWorks.

SLI/Crossfire systems are less than 2% of the gaming market; why would devs spend time on that when mid-range single GPUs are 90% of their potential customers?


SLI and Crossfire support is pretty crap even now; what on earth makes you think dual-GPU VR will be better?
 
Feb 19, 2009
10,457
5
76
Because it's not possible to achieve 90 fps at 1440p x 2 on a single GPU without major sacrifices in visuals.

I'm sure most VR games will be very simplistic to ensure a 970 or R9 290 is capable of 90 fps.

But "AAA" productions that aim at higher visual quality need multi-GPU to power that experience. One GPU per eye.
 

kami

Lifer
Oct 9, 1999
17,627
4
81
Because it's not possible to achieve 90 fps 1440p x 2 on a single GPU without major sacrifices in visuals.

I'm sure most VR games will be very simplistic to ensure a 970 and R290 is capable of 90 fps.

But for "AAA" production that aim at higher visual quality, you need multi-GPU to power that experience. One GPU per eye.
By the time there are HMDs with dual 1440p screens the GPU landscape will probably change quite a bit.

For the upcoming generation with Rift and Vive I think we'll see some decent stuff visually. Yeah, 2160x1200 at 90fps is demanding, but if they make smart choices on what to downgrade you can still get pretty good results. The Climb and EVE: Valkyrie are a couple of launch titles that come to mind. There are a lot of stylized, more simplistic-looking games too, but the VR experience is still great.
 

pj-

Senior member
May 5, 2015
421
162
116
Because it's not possible to achieve 90 fps 1440p x 2 on a single GPU without major sacrifices in visuals.

I'm sure most VR games will be very simplistic to ensure a 970 and R290 is capable of 90 fps.

But for "AAA" production that aim at higher visual quality, you need multi-GPU to power that experience. One GPU per eye.
You are exactly right, early VR games will be simplistic so they can run adequately on a 970/290. The benefits of multi-GPU in VR are very unproven at this point. If I was building a top-of-the-line VR system now, I'd get a Fury X or 980 Ti. Either of those will be perfectly fine for first-gen games on first-gen headsets. After a couple of years I could take off my Oculus and see what's going on in the real world. What if VR is a total flop and there's nothing but $10 indie games??
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
639
126
I think you may be underestimating the complexity of implementing the dual gpu aspect of liquid VR into an actual game. I'm no expert but it seems to me it needs to implemented at a very low level of the engine and keeping it efficient will require effort and thought at all levels. Its not just a checkbox you enable in some menu like hairworks.

Sli/crossfire systems are less than 2% of the gaming market, why would devs spend time on that when mid range single gpus are 90% of their potential customers?


Sli and crossfire support is pretty crap even now, what on earth makes you think dual gpu VR will be better?
Because VR has standard APIs, and those standard APIs will likely include the GPU vendors' VR-specific APIs to the extent technically possible... You think the SteamVR API is just going to ignore LiquidVR and GameWorks VR? Valve has already written a DX11-to-OpenGL wrapper. I guarantee you they will wrap and pass through (at least some) useful functions from the GPU APIs into the SteamVR API so that devs only have to learn the one. I can't say whether Oculus will do the same, but I suspect they will. The question in my mind is "when": whether it will be too little too late, or there on day 1.

I don't disagree that not a lot of devs will dive deep into those APIs, but I'm fairly certain they will be included in SteamVR, and any dev that uses SteamVR will use by default whatever portion of the lower-level APIs is incorporated into it.
 
Last edited:

Headfoot

Diamond Member
Feb 28, 2008
4,444
639
126
I don't disagree that not a lot of devs will dive deep into those APIs but im fairly certain they will be included in SteamVR and that any dev that uses SteamVR will use by default whatever portion of the lower level APIs are incorporated into it.
Looks like I'm right.
Quote from AMD_Robert during the recent AMA on Reddit:
AMD_Robert said:
Q: Having each eye handled by a separate D-GPU would be of greatest benefit in VR, so is there going to be an increased focus on CrossFire and LiquidVR support in preparation for big VR launches?
A: This is something we're intensely interested in. LiquidVR SDK has a feature called Affinity Multi-GPU, which can allocate one GPU to each eye as you suggest. Certainly a single high-end GPU and/or dynamic fidelity can make for a good VR experience, but there are gamers who want unequivocally the best experience, and LVR AMGPU accomplishes that. As a recent good sign of adoption, the SteamVR Performance Test uses our affinity multi-GPU tech to support mGPU VR for Radeon, whereas SLI configs are not supported.
Like I said, it's pretty obvious that the VR headset makers have a vested interest in using every feature the GPU makers can provide, incorporating them into their own APIs. Why GameWorks VR / SLI isn't in yet, I don't know, but the headset hasn't dropped yet and I have every expectation it will be in at launch.
 

pj-

Senior member
May 5, 2015
421
162
116
PC Graphics Engineering Manager @ Oculus said:


At the end of the thread he also states that liquid VR and gameworks VR do not fix this problem.

Seems like SLI/CF are going to be pretty useless for this first generation of VR. Well, useless with any GPUs that exist today.

I wonder if that latency is lower with dual GPU cards..
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,015
91
https://www.reddit.com/r/oculus/comments/3zjets/how_well_does_sli_play_with_oculus_many_sli/cymm7r7

At the end of the thread he also states that liquid VR and gameworks VR do not fix this problem.

Seems like SLI/CF are going to be pretty useless for this first generation of VR. Well, useless with any GPUs that exist today.

I wonder if that latency is lower with dual GPU cards..
AMD was saying that the SteamVR Performance Test uses LiquidVR, and that's why you get great support from CFX in it.

They also do one eye on each card.

https://youtu.be/cx370pvDTrw?list=PLx15eYqzJifdWZk4ZncBni1bQVJNM40lC&t=80
 
Last edited:

pj-

Senior member
May 5, 2015
421
162
116
AMD was saying that the Steam VR Test uses Liquid VR and thats why you get great support from CFX in it.

They also do 1 eye on each card.

https://youtu.be/cx370pvDTrw?list=PLx15eYqzJifdWZk4ZncBni1bQVJNM40lC&t=80
I understand, I was more responding to this:

Like I said, its pretty obvious that the VR headset makers have a vested interest in using every feature the GPU makers can provide, incorporating them into their own APIs. Why Gameworks VR / SLI isn't in yet, I don't know, but the headset hasnt dropped yet and I have every expectation it will be in at launch.
This is clearly not the case for Oculus, which seems like it will be the better-selling of the two upcoming PC HMDs.

I think the number of people with HTC Vive and a dual GPU setup will be such a small niche that even if developers support it, there will be very little incentive for them to increase fidelity to levels that will saturate both cards.

It's not surprising that 1 GPU per eye CF scales well, but what does it do to the latency? There's still only one cable going to the card so the inter-gpu frame copy described by the oculus guy occurs no matter how well it scales.

That said, I preordered the Vive over the Rift and would be happy if SLI/CF were widely supported and implemented well, even though I plan to remain a single-GPU fella.
 

Mahigan

Senior member
Aug 22, 2015
573
0
0
I understand, I was more responding to this:



This is clearly not the case for oculus, which seems like it will be the better selling of the two upcoming PC HMDs.

I think the number of people with HTC Vive and a dual GPU setup will be such a small niche that even if developers support it, there will be very little incentive for them to increase fidelity to levels that will saturate both cards.

It's not surprising that 1 GPU per eye CF scales well, but what does it do to the latency? There's still only one cable going to the card so the inter-gpu frame copy described by the oculus guy occurs no matter how well it scales.

That said, I preodered the vive over the rift and would be happy if SLI/CF were widely supported and implemented well, even though I plan to remain a single GPU fella.
It's not Crossfire, it's Affinity Multi-GPU. Each GPU is dedicated to one eye and renders the same frame. Meaning that the solution is not using AFR (Alternate Frame Rendering) because it is not rendering to a single display.

The net result is that there is no added latency with the Affinity Multi-GPU solution. You end up with better frame rates and a lower latency. The LiquidVR solution uses two separate driver threads (two separate CPU cores) for the process. Both GPUs are synchronized and then draw the same workload separately each using its own CPU core to feed it. One CPU core works with one LiquidVR driver thread to display to the right eye and a separate CPU core works with the other LiquidVR driver thread to display to the left eye. On top of this, Asynchronous compute + graphics is used on both GPUs in order to execute compute and graphics tasks in parallel on their respective GPU. The net result is a significant reduction in latency.
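A toy sketch of that arrangement, one driver thread per GPU with both rendering the same simulation frame for their own eye. The class and method names here are made up for illustration; this is not the real LiquidVR SDK interface:

```python
# Toy model of LiquidVR-style Affinity Multi-GPU: one driver thread
# per GPU, each rendering the SAME frame for its designated eye.
import threading

class Gpu:
    def __init__(self, name):
        self.name = name
        self.framebuffer = None

    def render_eye(self, frame_id, eye):
        # Affinity masking: this GPU renders only what its eye sees.
        self.framebuffer = (frame_id, eye)

def render_frame(frame_id, gpus):
    # One thread per GPU, modeling the two separate driver threads
    # (each fed by its own CPU core in the LiquidVR description).
    threads = [
        threading.Thread(target=gpu.render_eye, args=(frame_id, eye))
        for gpu, eye in zip(gpus, ("left", "right"))
    ]
    for t in threads:  # both submit in parallel
        t.start()
    for t in threads:  # synchronize before scanout
        t.join()
    return [gpu.framebuffer for gpu in gpus]

gpus = [Gpu("GPU0"), Gpu("GPU1")]
print(render_frame(0, gpus))  # both eyes carry the same frame id
```

The key contrast with AFR is visible in the output: both GPUs hold the same frame id, so neither eye lags the other by a frame.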

 
Last edited:

pj-

Senior member
May 5, 2015
421
162
116
It's not Crossfire, it's Affinity Multi-GPU. Each GPU is dedicated to one eye and renders the same frame. Meaning that the solution is not using AFR (Alternate Frame Rendering) because it is not rendering to a single display.

The net result is that there is no added latency with the Affinity Multi-GPU solution. You end up with better frame rates and a lower latency. The LiquidVR solution uses two separate driver threads (two separate CPU cores) for the process. Both GPUs are synchronized and then draw the same workload separately each using its own CPU core to feed it. One CPU core works with one LiquidVR driver thread to display to the right eye and a separate CPU core works with the other LiquidVR driver thread to display to the left eye. On top of this, Asynchronous compute + graphics is used on both GPUs in order to execute compute and graphics tasks in parallel on their respective GPU. The net result is a significant reduction in latency.

But there is still only one physical link from the computer to the headset. No matter how independently the GPUs operate, the secondary GPU still needs to send its half of the image to the primary GPU where it is combined into the full frame and sent out.

I am not saying it's a problem, because I don't know enough about it. I'm only relaying my understanding of the opinion of the PC Graphics Engineering Manager at Oculus, who says it adds latency even with LiquidVR and is therefore disabled when using an Oculus Rift.

Edit: In your image it appears to be the LVR.Transfer (GPU1->GPU0) step. No indication on what time scales are involved there..
 
Last edited:

Mahigan

Senior member
Aug 22, 2015
573
0
0
But there is still only one physical link from the computer to the headset. No matter how independently the GPUs operate, the secondary GPU still needs to send its half of the image to the primary GPU where it is combined into the full frame and sent out.

I am not saying it's a problem because I don't know enough about it. I'm only relaying my understanding of the opinion of the "PC Graphics Engineering Manager" at oculus who says it adds latency, even with LiquidVR and is therefore disabled when using an oculus rift.

Edit: In your image it appears to be the LVR.Transfer (GPU1->GPU0) step. No indication on what time scales are involved there..
That's the first step,

The image isn't combined; each GPU is assigned a render task based on what its designated eye can see. Then the tasks are rendered separately. No image combination is required; your eyes do the combining (just like how your eyes and brain operate in real life).

Close your right eye, then your left eye. You see two parts of an image. What combines these two parts into a single image? Your brain. That's the same concept behind LiquidVR's Affinity Multi-GPU.
 
Feb 19, 2009
10,457
5
76
No matter how independently the GPUs operate, the secondary GPU still needs to send its half of the image to the primary GPU where it is combined into the full frame and sent out.

Edit: In your image it appears to be the LVR.Transfer (GPU1->GPU0) step. No indication on what time scales are involved there..
In affinity mGPU, you dedicate one GPU to one visual output/eye. It doesn't have to send half of the image to the primary GPU, you are thinking in terms of AFR limitations or even SFR, where the final image is composited and sent out by the primary GPU.

This is why there's an API specific for this that bypasses traditional limitations of rendering and output to a monitor.

Good VR games made for long play sessions will have to use these APIs, else the constant ~35ms or greater motion-to-photon latency (Tom's Hardware did a test on this last December) will cause major nausea. Before people say it's not a problem: big players in the industry, including Valve, have talked about why sub-20ms motion-to-photon latency is important for reducing motion sickness. There are plenty of conference presentations on YouTube about this topic for VR.

Heck, even NVIDIA says it:

http://www.geforce.com/whats-new/articles/maxwell-architecture-gpus-the-only-choice-for-virtual-reality-gaming

The standard VR pipeline from input in (when you move your head) to photons out (when you see the action occur in-game) is about 57 milliseconds (ms). However, for a good VR experience, this latency should be under 20ms.
 
Last edited:

pj-

Senior member
May 5, 2015
421
162
116
I am confused. Are you guys talking about future headsets or the ones that will be coming out in the next couple months?

For Rift/Vive there is absolutely image combining. They have one video cable that is plugged into one GPU. If the headset is plugged into GPU 1, how is GPU 2's output getting to the display if not being sent over PCI/SLI bridge and combined with the output of GPU 1?
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,015
91
I am confused. Are you guys talking about future headsets or the ones that will be coming out in the next couple months?

For Rift/Vive there is absolutely image combining. They have one video cable that is plugged into one GPU. If the headset is plugged into GPU 1, how is GPU 2's output getting to the display if not being sent over PCI/SLI bridge and combined with the output of GPU 1?
Maybe that's what Fury X2 is all about, since it's a single card and they've delayed its launch to match those of the headsets.
 

Mahigan

Senior member
Aug 22, 2015
573
0
0
I am confused. Are you guys talking about future headsets or the ones that will be coming out in the next couple months?

For Rift/Vive there is absolutely image combining. They have one video cable that is plugged into one GPU. If the headset is plugged into GPU 1, how is GPU 2's output getting to the display if not being sent over PCI/SLI bridge and combined with the output of GPU 1?
Oh,

Now I understand your question better. Yes, the frame is sent to the primary GPU (using very low-latency XDMA) prior to it going to the headset, but the image isn't combined. Two separate streams of data are transferred over the headset's cable.

If the solution was straight up Crossfire/SLI, then you would need image combining because two full frames would be sent to the headset. With Affinity Multi-GPU both GPUs only render what their respective eye would see (Affinity GPU masking), so your brain does the combining.
 
Last edited:

pj-

Senior member
May 5, 2015
421
162
116
Oh,

Now I understand your question better. Yes, the frame is sent to the primary GPU (using extreme low latency XDMA) prior to it going to the headset but the image isn't combined. Two separate streams of data are transfered over the headsets cable.

If the solution was straight up Crossfire/SLI, then you would need image combining because two full frames would be sent to the headset. With Affinity Multi-GPU both GPUs only render what their respective eye would see (Affinity GPU masking), so your brain does the combining.
Your image seems to describe exactly what I'm saying. Each GPU renders one eye (i.e. 1080x1200, one half of a full frame), GPU2 sends its half to GPU1, the halves are combined, then the latest rotational data is polled from the headset, timewarp is applied, and the finished frame is sent to the headset.

I have seen nothing that indicates oculus/vive can operate how you are describing.
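That sequence can be written out as an ordered sketch. The step names are illustrative (loosely mirroring the LVR.Transfer step visible in the timing image discussed earlier), not a real driver API:

```python
# Ordered sketch of the per-frame path described in this post:
# render both eyes, copy GPU1's half to GPU0, poll the freshest
# head pose, timewarp, then scan out over the single HMD cable.
def present_frame(left_half, right_half, poll_head_pose):
    steps = [
        "render: GPU0 left eye, GPU1 right eye (same frame)",
        "LVR.Transfer: GPU1 -> GPU0 (copy right half)",
        "combine halves on the primary GPU",
        "poll latest rotation from headset",
        "timewarp with freshest pose",
        "scan out over the single HMD cable",
    ]
    frame = (left_half, right_half)  # combined on the primary GPU
    pose = poll_head_pose()          # polled as late as possible
    return frame, pose, steps

frame, pose, steps = present_frame("L", "R", lambda: "pose@t1")
print(steps)
```

Whether the GPU1-to-GPU0 copy adds meaningful latency is exactly the open question in this exchange; the sketch only fixes the ordering, not the timings.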
 
