
What does the second GPU of the XGI Volari card actually do?

It seems like it's actually running at the same speed as the other one, given the heat Tom's Hardware mentioned it gives off, but I still get the feeling that it's something with a small purpose just labeled as a GPU for marketing...

How exactly does it handle two GPUs efficiently?
 
It's not two different GPU's... it's just like having two CPU's... they're supposed to share the work, or one does one thing while the other does another thing.
 
It's a little different than the Voodoo2 SLI mode. In SLI, each card was rendering every other line of the frame (so each card was doing the work on half of each frame).

With the Volari, if I'm not mistaken, each GPU is rendering every other frame.

I'm not sure which method would be theoretically more efficient. Judging by the early benchmarks of the Volari though, it has potential if they can get the drivers working better.
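Not real driver code, just a toy Python sketch of the two splitting schemes described above: SLI hands each GPU every other scanline of the same frame, AFR hands each GPU every other whole frame. The "gpu0"/"gpu1" labels are invented for illustration.

```python
# Toy illustration of the two work-splitting schemes.

def sli_assignment(num_lines):
    """Scan Line Interleave: each GPU renders every other scanline
    of the same frame, so both work on every frame."""
    return {line: f"gpu{line % 2}" for line in range(num_lines)}

def afr_assignment(num_frames):
    """Alternate Frame Rendering: each GPU renders every other
    whole frame by itself."""
    return {frame: f"gpu{frame % 2}" for frame in range(num_frames)}

print(sli_assignment(4))  # {0: 'gpu0', 1: 'gpu1', 2: 'gpu0', 3: 'gpu1'}
print(afr_assignment(4))  # {0: 'gpu0', 1: 'gpu1', 2: 'gpu0', 3: 'gpu1'}
```

The assignment pattern looks identical, but the unit of work differs: with SLI both chips must finish before any frame can be shown, while with AFR each chip owns complete frames, which is where the pacing questions below come from.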
 
Originally posted by: PinwiZ
It's a little different than the Voodoo2 SLI mode. In SLI, each card was rendering every other line of the frame (so each card was doing the work on half of each frame).

With the Volari, if I'm not mistaken, each GPU is rendering every other frame.

I'm not sure which method would be theoretically more efficient. Judging by the early benchmarks of the Volari though, it has potential if they can get the drivers working better.

Yes I've heard it uses AFR also (alternate frame rendering).

I think SLI is a bit more efficient because it really doubles the processing power. AFR has some inefficiencies with synching the frames perfectly.

However it does still more or less double GPU power by adding a second one like this.
 
Jiffy is right about the AFR and trouble synching frames resulting in uneven frame rates.

SLI was Scan Line Interleave, each GPU rendering every other line, as Rage noted.

 
I would think (knowing a lot about programming and a bit about circuit design) that an SLI design would be more complex, since you'd have two GPUs writing to the same frame buffer in sync, which would require some tricky synchronization in *hardware*.

With AFR, if the GPUs are rendering at different rates you can just delay the updates from the faster one -- it shouldn't be much tougher than doing regular vsync.
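A rough Python model of that AFR pacing idea: two GPUs alternate frames, each starting its next frame as soon as it finishes its previous one, but frames are presented in order (the faster chip's output gets delayed). All the render times are invented numbers.

```python
def afr_present_times(render_times):
    """render_times[i] = how long frame i takes on its GPU.
    Even frames go to GPU 0, odd frames to GPU 1. Returns the time
    each frame is actually presented, enforcing in-order display."""
    gpu_free = [0.0, 0.0]   # when each GPU can start its next frame
    presented = []
    last_present = 0.0
    for i, t in enumerate(render_times):
        gpu = i % 2
        finish = gpu_free[gpu] + t
        gpu_free[gpu] = finish
        # A frame can't be shown before the one ahead of it.
        last_present = max(finish, last_present)
        presented.append(last_present)
    return presented

# One slow GPU and one fast GPU -> uneven gaps between frames:
print(afr_present_times([10, 20, 10, 20]))  # [10.0, 20.0, 20.0, 40.0]
```

The gaps between presentations come out as 10, 0, 20: two frames arrive back to back, then a long wait, which is the uneven frame delivery mentioned earlier in the thread.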
 
Doesn't Nvidia own the patents to dual-scalable GPU graphics cards, after they bought out the skeleton of 3dfx? Then again, I doubt they could win a court case, as dual processing has been around for quite a while.

Is XGI just another S3/VIA/Savage3D? Then again, the XGI dual card would blow away anything S3 had, and Intel as well.
 
Originally posted by: ReiAyanami
Doesn't Nvidia own the patents to dual-scalable GPU graphics cards, after they bought out the skeleton of 3dfx? Then again, I doubt they could win a court case, as dual processing has been around for quite a while.

Is XGI just another S3/VIA/Savage3D? Then again, the XGI dual card would blow away anything S3 had, and Intel as well.

I doubt it seeing as this is more like the technology ATI used (AFR) than that which 3dfx used (SLI).
 
As Lars' preview at THG indicated, the V8U uses AFR. I believe one disadvantage of AFR over SLI is a slight lag introduced by one GPU possibly waiting on another. Another interesting problem raised at B3D is with multi-pass shaders. Does one GPU finish its passes, stalling the other? If so, does this make AFR possibly much less efficient with newer, multi-pass-shader-heavy titles?
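The multi-pass worry above can be shown with a small self-contained Python example (all pass counts and timings invented): a frame that needs several shader passes takes the sum of its passes, so a frame that finished early on the other GPU still has to wait behind it for in-order display.

```python
# GPU 0 renders frames 0 and 2; GPU 1 renders frame 1,
# which runs a hypothetical three-pass shader effect.
gpu0_done = {0: 5, 2: 10}       # frame -> finish time on GPU 0
gpu1_done = {1: 5 + 5 + 5}      # frame 1: three passes of 5 each

finish = {**gpu0_done, **gpu1_done}
present = []
t = 0
for f in sorted(finish):
    t = max(t, finish[f])       # frames must be presented in order
    present.append(t)

print(present)  # [5, 15, 15] -> frame 2 was done at 10 but stalls to 15
```

Frame 2 sits idle for a third of its GPU's time in this toy case, which is the kind of efficiency loss being speculated about for shader-heavy titles.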
 
Originally posted by: Pete
As Lars' preview at THG indicated, the V8U uses AFR. I believe one disadvantage of AFR over SLI is a slight lag introduced by one GPU possibly waiting on another. Another interesting problem raised at B3D is with multi-pass shaders. Does one GPU finish its passes, stalling the other? If so, does this make AFR possibly much less efficient with newer, multi-pass-shader-heavy titles?

Good point. This is a big issue, and unless they spent a lot of time thinking this out we could be seeing less than encouraging results from the dual-GPU Volari.

However, in current games, with preliminary drivers, the results aren't too optimistic so far: below 9800 and 5900 scores. I'm eager to see single-GPU Volari results, but I don't think they will be pretty (remind anyone of the VSA-100? Requiring several video processing cores just to keep up with the competition).

Now, where is that 4 GPU Volari??? 😉
 
Originally posted by: jiffylube1024
Originally posted by: Pete
As Lars' preview at THG indicated, the V8U uses AFR. I believe one disadvantage of AFR over SLI is a slight lag introduced by one GPU possibly waiting on another. Another interesting problem raised at B3D is with multi-pass shaders. Does one GPU finish its passes, stalling the other? If so, does this make AFR possibly much less efficient with newer, multi-pass-shader-heavy titles?

Good point. This is a big issue, and unless they spent a lot of time thinking this out we could be seeing less than encouraging results from the dual-GPU Volari.

However, in current games, with preliminary drivers, the results aren't too optimistic so far: below 9800 and 5900 scores. I'm eager to see single-GPU Volari results, but I don't think they will be pretty (remind anyone of the VSA-100? Requiring several video processing cores just to keep up with the competition).

Now, where is that 4 GPU Volari??? 😉

I guess theoretically they could create a 4 GPU Volari, but that thing would have 4 fans and would very easily be longer than your average video card.
 
Originally posted by: AgaBooga


I guess theoretically they could create a 4 GPU Volari, but that thing would have 4 fans and would very easily be longer than your average video card.

Yes, it would definitely be cumbersome, and as Pete brought up - how would it deal with multi-pass shader routines, etc.? Now that we're into the age of pixel shaders, it adds another level of complexity to multi-GPU design.
 
I'm trying to figure out why there are TWO molex power connectors on it instead of one like on ATI and Nvidia cards.
 
Originally posted by: Creig
I'm trying to figure out why there are TWO molex power connectors on it instead of one like on ATI and Nvidia cards.

To provide each GPU with its own, individual, clean power supply.


Confused
 
Why not simply split the power on the board as soon as it comes in? How is having two molex connectors any better?
 
Originally posted by: Jeff7181
It's not two different GPU's... it's just like having two CPU's... they're supposed to share the work, or one does one thing while the other does another thing.

Like an SMP computer system, except this one performs really badly.
 