During the Lucid Demo presentation it was stated that ATI + nVidia configurations will NOT work. What Lucid promised is close to 100% scaling for SLI or CF.
Originally posted by: ItsAlive
This is probably the reasoning behind Nvidia disabling PhysX when ATI cards are present.
Originally posted by: Creig
Originally posted by: ItsAlive
This is probably the reasoning behind Nvidia disabling PhysX when ATI cards are present.
I'm wondering how long it will be before Nvidia releases a driver that will prevent their cards from working in SLI on this board like they did with ULi. After all, this is going to cut into their nf200 chip sales.
Of course, it will be done solely to protect their consumers and ensure that their SLI experience is up to Nvidia's standards, which obviously only Nvidia is qualified to provide.
Then again, MSI is an Nvidia partner. Actively disabling the main feature of one of your partners' motherboards could strain relations, to say the least.
I'd say Nvidia is in a bit of a pickle here. Allow this motherboard to continue employing its own version of SLI and risk other manufacturers doing the same, cutting into your NF200 sales. Disable SLI on the board and risk turning one of your partners into an ATI exclusive manufacturer.
Decisions, decisions....
Originally posted by: Idontcare
I lol'ed at that quip I bolded... it's funny because it is so ridiculously true...
No doubt NV marketing has already whipped up a fancy "certification" process (for a small fee of course) to ensure that mobo partners who integrate Lucid into their boards have all the proper wires connected to ensure peak SLI experience is delivered to their loyal customer base.
If Monster can do it with cables, what's wrong with NV borrowing a page out of their marketing book when it comes to selling the experience as value-add worth paying for? Those fees are funding the current GT400 and GT500 design teams; I'm happy someone is.
Originally posted by: Creig
Originally posted by: Idontcare
I lol'ed at that quip I bolded... it's funny because it is so ridiculously true...
No doubt NV marketing has already whipped up a fancy "certification" process (for a small fee of course) to ensure that mobo partners who integrate Lucid into their boards have all the proper wires connected to ensure peak SLI experience is delivered to their loyal customer base.
If Monster can do it with cables, what's wrong with NV borrowing a page out of their marketing book when it comes to selling the experience as value-add worth paying for? Those fees are funding the current GT400 and GT500 design teams; I'm happy someone is.
Well, the Lucid chip must be using its own method for splitting the video workload and compositing the frames. So it couldn't actually be termed "SLI" since that's a registered trademark of Nvidia. And as MSI/Lucid would be using their own name and their own design, I don't think Nvidia can really say much unless MSI/Lucid wishes to use the SLI acronym in their advertising. Then I could see a licensing fee. But if they're going to do that, then why bother even using the Lucid Hydra? There had to be a legitimate business reason for:
A) Lucid to develop the Hydra
B) MSI to use the Hydra rather than the NF200, risking Nvidia's displeasure
I can only assume that the Hydra is much less expensive than what Nvidia is charging for the NF200. I doubt MSI would bother with the headache that will inevitably ensue unless the savings were relatively substantial.
Originally posted by: Idontcare
I fully agree. In fact, let's take it a step further and pontificate on the ramifications/implications of Lucid/MSI arguing that they are simply using Nvidia's hardware as a GPGPU and sending commands to the GPUs to process for general computing purposes... albeit the subsequent use of the results of those computations is to display an image on the screen for the computer user to view.
Is there really any leeway for NV to invoke a SLI-like licensing model in the case where the third-party claims to simply be using NV's hardware for GPGPU tasks?
What if TMPGEnc wants to enable their transcoders to take advantage of multiple NV GPU cards to simply double-up on the raw GPGPU computing power...would NV prevent them from doing so on a non SLI-certified mobo? What does SLI certification have to do with being able to extract GPGPU performance from multiple slots of GT200b (or GT300) goodness?
I don't see any argument to be made against TMPGEnc in this use of GPGPU, and by extension I don't see how NV could make an argument against Lucid writing their own application (call it a video driver parser if you like) to take advantage of using multiple NV GPGPUs to perform calculations (be it for transcoding or graphics rendering).
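For what it's worth, the TMPGEnc scenario already works exactly like this: a GPGPU app enumerates every NVIDIA board in the system through the CUDA runtime and addresses each one directly, and as far as I know nothing in that path ever checks the motherboard for SLI certification. A minimal sketch of that kind of multi-GPU dispatch (the kernel is a hypothetical stand-in for real transcoding math):

```cpp
// Minimal sketch: independent GPGPU work fanned out across every NVIDIA
// GPU in the system, with no SLI involvement. "transcodeChunk" is a
// hypothetical stand-in for real transcoding math.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void transcodeChunk(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 0.5f;
}

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);   // every NVIDIA GPU shows up, SLI or not

    const int n = 1 << 20;
    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaSetDevice(dev);                       // address one card directly
        float *d_buf = nullptr;
        cudaMalloc(&d_buf, n * sizeof(float));
        cudaMemset(d_buf, 0, n * sizeof(float));
        transcodeChunk<<<(n + 255) / 256, 256>>>(d_buf, n);
        cudaFree(d_buf);                          // implicitly syncs this device
    }
    printf("Dispatched independent work to %d GPU(s)\n", deviceCount);
    return 0;
}
```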
Originally posted by: Creig
Originally posted by: Idontcare
I fully agree. In fact, let's take it a step further and pontificate on the ramifications/implications of Lucid/MSI arguing that they are simply using Nvidia's hardware as a GPGPU and sending commands to the GPUs to process for general computing purposes... albeit the subsequent use of the results of those computations is to display an image on the screen for the computer user to view.
Is there really any leeway for NV to invoke a SLI-like licensing model in the case where the third-party claims to simply be using NV's hardware for GPGPU tasks?
What if TMPGEnc wants to enable their transcoders to take advantage of multiple NV GPU cards to simply double-up on the raw GPGPU computing power...would NV prevent them from doing so on a non SLI-certified mobo? What does SLI certification have to do with being able to extract GPGPU performance from multiple slots of GT200b (or GT300) goodness?
I don't see any argument to be made against TMPGEnc in this use of GPGPU, and by extension I don't see how NV could make an argument against Lucid writing their own application (call it a video driver parser if you like) to take advantage of using multiple NV GPGPUs to perform calculations (be it for transcoding or graphics rendering).
Your scenario with TMPGEnc, however, does not cut into Nvidia's NF200 sales. The Lucid Hydra does.
I can still envision Nvidia putting forth a future driver revision that deliberately "breaks" multi-GPU video compositing functionality on Nvidia cards installed in any motherboard not utilizing an NF200 chip. All they'll have to claim is, "We don't know why your Lucid Hydra-equipped system is no longer functioning properly. But it's not our issue. Talk to MSI/Lucid."
Call me cynical, but Nvidia has done exactly this sort of thing in the past. There's no reason not to expect them to do it again in the future.
Originally posted by: thilan29
Originally posted by: Creig
Originally posted by: ItsAlive
This is probably the reasoning behind Nvidia disabling PhysX when ATI cards are present.
I'm wondering how long it will be before Nvidia releases a driver that will prevent their cards from working in SLI on this board like they did with ULi. After all, this is going to cut into their nf200 chip sales.
nVidia has licensed SLI to Intel for LGA1156 so there's no need for the nf200 chip anyway, much like X58. Very few X58 boards had the nf200 chip. Their desktop chipset business seems to be dying.
http://www.nvidia.com/object/io_1249876351744.html
FOR IMMEDIATE RELEASE:
SANTA CLARA, CA - AUGUST 10, 2009 - NVIDIA Corporation today announced that Intel Corporation, and the world's other leading motherboard manufacturers, including ASUS, EVGA, Gigabyte, and MSI, have all licensed NVIDIA® SLI® technology for inclusion on their Intel® P55 Express Chipset-based motherboards designed for the upcoming Intel® Core™ i7 and i5 processor in the LGA1156 socket. As a result, customers who purchase a validated P55-based motherboard and Core i7 or Core i5 processor when available can equip their PCs with any combination of NVIDIA® GeForce® GPUs, including Quad SLI, for the ultimate visual computing experience.
Originally posted by: yh125d
Originally posted by: thilan29
Originally posted by: Creig
Originally posted by: ItsAlive
This is probably the reasoning behind Nvidia disabling PhysX when ATI cards are present.
I'm wondering how long it will be before Nvidia releases a driver that will prevent their cards from working in SLI on this board like they did with ULi. After all, this is going to cut into their nf200 chip sales.
nVidia has licensed SLI to Intel for LGA1156 so there's no need for the nf200 chip anyway, much like X58. Very few X58 boards had the nf200 chip. Their desktop chipset business seems to be dying.
You'll still see NF200 on P55, due to the lack of available lanes. The CPU will have 16 lanes and 4 on the P55. There should be a handful of models with it to enable 16/16 and 16/16/8
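A quick sketch of what a bridge chip buys you there, since the arithmetic is easy to gloss over: the NF200 widens the per-slot links but cannot add bandwidth back to the CPU, whose 16 lanes remain the only uplink. Lane counts below follow the post above; the 32 downstream lanes are the commonly cited NF200 figure, so treat that as an assumption:

```cpp
// Rough sketch of an NF200-style bridge on P55: per-slot links get wider,
// but the uplink to the CPU stays x16. 32 downstream lanes is the commonly
// cited NF200 figure (assumption, not from the thread).
#include <cstdio>

int main() {
    const int cpuLanes = 16;           // P55 CPU-attached PCIe 2.0 lanes
    const int bridgeDownstream = 32;   // NF200 downstream lanes (assumed)

    printf("Without bridge, two cards: x%d / x%d\n",
           cpuLanes / 2, cpuLanes / 2);                     // x8 / x8
    printf("With bridge,    two cards: x%d / x%d\n",
           bridgeDownstream / 2, bridgeDownstream / 2);     // x16 / x16
    printf("Uplink to the CPU either way: x%d\n", cpuLanes);
    return 0;
}
```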
Originally posted by: Creig
Originally posted by: yh125d
Originally posted by: thilan29
Originally posted by: Creig
Originally posted by: ItsAlive
This is probably the reasoning behind Nvidia disabling PhysX when ATI cards are present.
I'm wondering how long it will be before Nvidia releases a driver that will prevent their cards from working in SLI on this board like they did with ULi. After all, this is going to cut into their nf200 chip sales.
nVidia has licensed SLI to Intel for LGA1156 so there's no need for the nf200 chip anyway, much like X58. Very few X58 boards had the nf200 chip. Their desktop chipset business seems to be dying.
You'll still see NF200 on P55, due to the lack of available lanes. The CPU will have 16 lanes and 4 on the P55. There should be a handful of models with it to enable 16/16 and 16/16/8
???? The X58 natively supports up to 32 PCI-E 2.0 lanes while the IOH provides an additional 4 lanes.
www.intel.com/Assets/PDF/prodb.../x58-product-brief.pdf
And those lanes can be configured from dual 16x to quad 8x. The only reason you would need more than 36 is if you want tri or quad SLI at 16x each instead of 8x each. The performance hit going from 16x PCI-E 2.0 to 8x is minor.
http://www.tomshardware.com/re...xpress-2.0,1915-9.html
Originally posted by: aka1nas
Re-read his quote, he's referring to the P55, not X58. P55 only has 16 lanes on the on-die controller, IIRC.
From a purely hardware perspective, the HYDRA chip takes in a single PCIe x16 connection and outputs two full PCIe 2.0 x16 connections. Depending on the partner's implementation method, that could connect to two GPUs or split into four x8 PCIe 2.0 connections for four GPUs.
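Some back-of-the-envelope numbers for that layout: PCIe 2.0 signals at 5 GT/s with 8b/10b encoding, which works out to roughly 500 MB/s per lane per direction. The topology below is taken straight from the description above; the bandwidth figures just restate the spec:

```cpp
// Back-of-the-envelope PCIe 2.0 bandwidth for the HYDRA topology described
// above: one x16 uplink, fanned out as either two x16 or four x8 links.
#include <cstdio>

int main() {
    const double mbPerLane = 500.0;           // PCIe 2.0, MB/s per direction

    double upstreamX16 = 16 * mbPerLane;      // host link into the Hydra
    double twoX16Down  = 2 * 16 * mbPerLane;  // two-GPU configuration
    double fourX8Down  = 4 * 8 * mbPerLane;   // four-GPU configuration

    printf("Upstream x16:       %5.0f MB/s\n", upstreamX16);           // 8000
    printf("Two x16 downstream: %5.0f MB/s aggregate\n", twoX16Down);  // 16000
    printf("Four x8 downstream: %5.0f MB/s aggregate\n", fourX8Down);  // 16000
    // Either downstream layout is 2:1 oversubscribed against the single x16
    // uplink, which only matters if the GPUs actually saturate their links.
    return 0;
}
```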
Originally posted by: Creig
Originally posted by: yh125d
Originally posted by: thilan29
nVidia has licensed SLI to Intel for LGA1156 so there's no need for the nf200 chip anyway, much like X58. Very few X58 boards had the nf200 chip. Their desktop chipset business seems to be dying.
You'll still see NF200 on P55, due to the lack of available lanes. The CPU will have 16 lanes and 4 on the P55. There should be a handful of models with it to enable 16/16 and 16/16/8
???? The X58 natively supports up to 32 PCI-E 2.0 lanes while the IOH provides an additional 4 lanes.
www.intel.com/Assets/PDF/prodb.../x58-product-brief.pdf
And those lanes can be configured from dual 16x to quad 8x. The only reason you would need more than 36 is if you want tri or quad SLI at 16x each instead of 8x each. The performance hit going from 16x PCI-E 2.0 to 8x is minor.
http://www.tomshardware.com/re...xpress-2.0,1915-9.html
Ah, gotcha. Sorry, too many X5X series out there. I'm starting to think of one and type another. Yes, the P55 only has 16 lanes.
Originally posted by: yh125d
There's only two, and one isn't even released yet:
Whereas we have P43, P45, G41, G43, G45, Q45, X48...
Originally posted by: alyarb
Presumably far lower. No AFR, no buffer sharing.
Originally posted by: Creig
Originally posted by: alyarb
Presumably far lower. No AFR, no buffer sharing.
I never really thought about it before, but the Hydra could be the answer to a unified memory pool. With it, you are no longer storing identical data in both cards. So two 512MB video cards would actually behave like a single 1GB X2 card on a Hydra-based motherboard. Perhaps even better, if their near-100% efficiency claims are true and overall bandwidth consumption is reduced.
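To put numbers on that speculation (and it is only speculation; nothing published about the Hydra confirms a shared pool): under AFR each card carries a full copy of the working set, so effective memory is whatever the smallest card has, while a compositor that truly partitioned assets would let the pools add. A toy illustration with hypothetical card sizes:

```cpp
// Toy illustration of the unified-pool speculation above, nothing more.
// AFR duplicates the working set on every card; a true partitioning
// compositor would let the per-card pools add. Card sizes are hypothetical.
#include <algorithm>
#include <cstdio>

int main() {
    const int cardA = 512, cardB = 512;           // MB of VRAM per card

    int afrEffective    = std::min(cardA, cardB); // duplicated assets
    int pooledEffective = cardA + cardB;          // speculative Hydra case

    printf("AFR-style effective VRAM:   %d MB\n", afrEffective);     // 512
    printf("Partitioned effective VRAM: %d MB\n", pooledEffective);  // 1024
    return 0;
}
```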
Originally posted by: Shaq
Originally posted by: Creig
Originally posted by: alyarb
Presumably far lower. No AFR, no buffer sharing.
I never really thought about it before, but the Hydra could be the answer to a unified memory pool. With it, you are no longer storing identical data in both cards. So two 512MB video cards would actually behave like a single 1GB X2 card on a Hydra-based motherboard. Perhaps even better, if their near-100% efficiency claims are true and overall bandwidth consumption is reduced.
I dunno. This is sounding too good to be true. lol How much is this chip going to cost anyway? It sounds like a $50+ chip if it will do all these things, at least at launch.