Lucid Hydra is alive...

Page 2

IlllI

Diamond Member
Feb 12, 2002
4,927
11
81
i was about to post this, but i searched first. i guess the news came out earlier than where i saw it :p

 

beginner99

Diamond Member
Jun 2, 2009
5,318
1,763
136
Hm, that will probably make me wait another couple of months before buying a new PC. But I would prefer this on X58. I mean, this chip is probably totally worth the cost since you can always buy a "high" mid-range card and just add the next, slightly better "high" mid-range card once you need it. No need to buy for the future.
 

Blazer7

Golden Member
Jun 26, 2007
1,136
12
81
During the Lucid Demo presentation it was stated that ATI + nVidia configurations will NOT work. What Lucid promised is close to 100% scaling for SLI or CF.

I wonder if Lucid Hydra is the answer to microstuttering. Any info on this?
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: ItsAlive
This is probably the reasoning behind Nvidia disabling PhysX when ATI cards are present.

I'm wondering how long it will be before Nvidia releases a driver that will prevent their cards from working in SLI on this board like they did with ULi. After all, this is going to cut into their nf200 chip sales.

Of course, it will be done solely to protect their consumers and ensure that their SLI experience is up to Nvidia's standards, which obviously only Nvidia is qualified to provide.

Then again, MSI is an Nvidia partner. Actively disabling the main feature of one of your partner's motherboards could strain relations, to say the least.

I'd say Nvidia is in a bit of a pickle here. Allow this motherboard to continue employing its own version of SLI and risk other manufacturers doing the same, cutting into your NF200 sales. Disable SLI on the board and risk turning one of your partners into an ATI exclusive manufacturer.

Decisions, decisions....
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Creig
Originally posted by: ItsAlive
This is probably the reasoning behind Nvidia disabling PhysX when ATI cards are present.

I'm wondering how long it will be before Nvidia releases a driver that will prevent their cards from working in SLI on this board like they did with ULi. After all, this is going to cut into their nf200 chip sales.

Of course, it will be done solely to protect their consumers and ensure that their SLI experience is up to Nvidia's standards, which obviously only Nvidia is qualified to provide.

Then again, MSI is an Nvidia partner. Actively disabling the main feature of one of your partner's motherboards could strain relations, to say the least.

I'd say Nvidia is in a bit of a pickle here. Allow this motherboard to continue employing its own version of SLI and risk other manufacturers doing the same, cutting into your NF200 sales. Disable SLI on the board and risk turning one of your partners into an ATI exclusive manufacturer.

Decisions, decisions....

I lol'ed at that quip I bolded... it's funny because it is so ridiculously true...

No doubt NV marketing has already whipped up a fancy "certification" process (for a small fee of course) to ensure that mobo partners who integrate Lucid into their boards have all the proper wires connected to ensure peak SLI experience is delivered to their loyal customer base.

If Monster can do it with cables, what's wrong with NV borrowing a page out of their marketing book when it comes to selling the experience as value-add worth paying for? Those fees are funding the current GT400 and GT500 design teams, I'm happy someone is.
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: Idontcare
I lol'ed at that quip I bolded... it's funny because it is so ridiculously true...

No doubt NV marketing has already whipped up a fancy "certification" process (for a small fee of course) to ensure that mobo partners who integrate Lucid into their boards have all the proper wires connected to ensure peak SLI experience is delivered to their loyal customer base.

If Monster can do it with cables, what's wrong with NV borrowing a page out of their marketing book when it comes to selling the experience as value-add worth paying for? Those fees are funding the current GT400 and GT500 design teams, I'm happy someone is.

Well, the Lucid chip must be using its own method for splitting the video workload and compositing the frames. So it couldn't actually be termed "SLI" since that's a registered trademark of Nvidia. And as MSI/Lucid would be using their own name and their own design, I don't think Nvidia can really say much unless MSI/Lucid wishes to use the SLI acronym in their advertising. Then I could see a licensing fee. But if they're going to do that, then why bother even using the Lucid Hydra? There had to be a legitimate business reason for:

A) Lucid to develop the Hydra
B) MSI to use the Hydra rather than the NF200, risking Nvidia's displeasure

I can only assume that the Hydra is much less expensive than what Nvidia is charging for the NF200. I doubt MSI would bother with the headache that will inevitably ensue unless it was for a relatively substantial sum.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Creig
Originally posted by: Idontcare
I lol'ed at that quip I bolded... it's funny because it is so ridiculously true...

No doubt NV marketing has already whipped up a fancy "certification" process (for a small fee of course) to ensure that mobo partners who integrate Lucid into their boards have all the proper wires connected to ensure peak SLI experience is delivered to their loyal customer base.

If Monster can do it with cables, what's wrong with NV borrowing a page out of their marketing book when it comes to selling the experience as value-add worth paying for? Those fees are funding the current GT400 and GT500 design teams, I'm happy someone is.

Well, the Lucid chip must be using its own method for splitting the video workload and compositing the frames. So it couldn't actually be termed "SLI" since that's a registered trademark of Nvidia. And as MSI/Lucid would be using their own name and their own design, I don't think Nvidia can really say much unless MSI/Lucid wishes to use the SLI acronym in their advertising. Then I could see a licensing fee. But if they're going to do that, then why bother even using the Lucid Hydra? There had to be a legitimate business reason for:

A) Lucid to develop the Hydra
B) MSI to use the Hydra rather than the NF200, risking Nvidia's displeasure

I can only assume that the Hydra is much less expensive than what Nvidia is charging for the NF200. I doubt MSI would bother with the headache that will inevitably ensue unless it was for a relatively substantial sum.

I fully agree. In fact let's take it a step further and pontificate on the ramifications/implications of Lucid/MSI arguing that they are simply using Nvidia's hardware as a GPGPU and they are sending commands to the GPU's to process for general computing purposes...albeit the subsequent use of the results of those computations is to display an image on the screen for the computer user to view.

Is there really any leeway for NV to invoke a SLI-like licensing model in the case where the third-party claims to simply be using NV's hardware for GPGPU tasks?

What if TMPGEnc wants to enable their transcoders to take advantage of multiple NV GPU cards to simply double-up on the raw GPGPU computing power...would NV prevent them from doing so on a non SLI-certified mobo? What does SLI certification have to do with being able to extract GPGPU performance from multiple slots of GT200b (or GT300) goodness?

I don't see any argument to be made against TMPGEnc in this use of GPGPU, and by extension I don't see how NV could make an argument against Lucid writing their own application (call it a video driver parser if you like) to take advantage of using multiple NV GPGPUs to perform calculations (be it for transcoding or graphics rendering).
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: Idontcare
I fully agree. In fact let's take it a step further and pontificate on the ramifications/implications of Lucid/MSI arguing that they are simply using Nvidia's hardware as a GPGPU and they are sending commands to the GPU's to process for general computing purposes...albeit the subsequent use of the results of those computations is to display an image on the screen for the computer user to view.

Is there really any leeway for NV to invoke a SLI-like licensing model in the case where the third-party claims to simply be using NV's hardware for GPGPU tasks?

What if TMPGEnc wants to enable their transcoders to take advantage of multiple NV GPU cards to simply double-up on the raw GPGPU computing power...would NV prevent them from doing so on a non SLI-certified mobo? What does SLI certification have to do with being able to extract GPGPU performance from multiple slots of GT200b (or GT300) goodness?

I don't see any argument to be made against TMPGEnc in this use of GPGPU, and by extension I don't see how NV could make an argument against Lucid writing their own application (call it a video driver parser if you like) to take advantage of using multiple NV GPGPUs to perform calculations (be it for transcoding or graphics rendering).

Your scenario with TMPGEnc, however, does not cut into Nvidia's NF200 sales. The Lucid Hydra does.

I can still envision Nvidia putting forth a future driver revision that deliberately "breaks" multi-GPU video compositing functionality on Nvidia cards installed in any motherboard not utilizing an NF200 chip. All they'll have to claim is that, "We don't know why your Lucid Hydra equipped system is no longer functioning properly. But it's not our issue. Talk to MSI/Lucid".

Call me cynical, but Nvidia has done exactly this sort of thing in the past. There's no reason not to expect them to do it again in the future.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Creig
Originally posted by: Idontcare
I fully agree. In fact let's take it a step further and pontificate on the ramifications/implications of Lucid/MSI arguing that they are simply using Nvidia's hardware as a GPGPU and they are sending commands to the GPU's to process for general computing purposes...albeit the subsequent use of the results of those computations is to display an image on the screen for the computer user to view.

Is there really any leeway for NV to invoke a SLI-like licensing model in the case where the third-party claims to simply be using NV's hardware for GPGPU tasks?

What if TMPGEnc wants to enable their transcoders to take advantage of multiple NV GPU cards to simply double-up on the raw GPGPU computing power...would NV prevent them from doing so on a non SLI-certified mobo? What does SLI certification have to do with being able to extract GPGPU performance from multiple slots of GT200b (or GT300) goodness?

I don't see any argument to be made against TMPGEnc in this use of GPGPU, and by extension I don't see how NV could make an argument against Lucid writing their own application (call it a video driver parser if you like) to take advantage of using multiple NV GPGPUs to perform calculations (be it for transcoding or graphics rendering).

Your scenario with TMPGEnc, however, does not cut into Nvidia's NF200 sales. The Lucid Hydra does.

I can still envision Nvidia putting forth a future driver revision that deliberately "breaks" multi-GPU video compositing functionality on Nvidia cards installed in any motherboard not utilizing an NF200 chip. All they'll have to claim is that, "We don't know why your Lucid Hydra equipped system is no longer functioning properly. But it's not our issue. Talk to MSI/Lucid".

Call me cynical, but Nvidia has done exactly this sort of thing in the past. There's no reason not to expect them to do it again in the future.

Absolutely. I'm not saying NV can't or won't do something to minimize shareholder equity erosion; I'm just saying I can see ways in which Lucid could posit their position as simply being a third-party GPGPU advocate/user whose innovation ends up getting stifled by NV's desire to restrict access to what they claim to want to be a market standard. (hypothetically speaking)

It would not play well for NV to send that message in an environment where Larrabee would appear to be much easier to "port" applications to, given its intrinsic x86 compatibility. NV needs to gain a critical mass of applications (much as Itanium needed) written for CUDA or they are going to risk seeing their GPGPU ISA go the way of MIPS and PA-RISC.

If NV "moved to block" a third-party attempting to take advantage of their GPGPU just imagine the field-day that would enable Intel and AMD marketing to have at NV's expense in meetings with software developers looking to expand their offerings to include GPGPU compatibility. NV is going to have to tread lightly here IMO to navigate the knock-on-effects their decisions might have some 2-3 yrs down the road. Giving your competitor's marketing dept ammo to shoot you in the back by way of making FUD as extrapolation of your established actions is to be avoided.
 

thilanliyan

Lifer
Jun 21, 2005
12,060
2,273
126
Originally posted by: Creig
Originally posted by: ItsAlive
This is probably the reasoning behind Nvidia disabling PhysX when ATI cards are present.

I'm wondering how long it will be before Nvidia releases a driver that will prevent their cards from working in SLI on this board like they did with ULi. After all, this is going to cut into their nf200 chip sales.

nVidia has licensed SLI to Intel for LGA1156 so there's no need for the nf200 chip anyway, much like X58. Very few X58 boards had the nf200 chip. Their desktop chipset business seems to be dying.
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: thilan29
Originally posted by: Creig
Originally posted by: ItsAlive
This is probably the reasoning behind Nvidia disabling PhysX when ATI cards are present.

I'm wondering how long it will be before Nvidia releases a driver that will prevent their cards from working in SLI on this board like they did with ULi. After all, this is going to cut into their nf200 chip sales.

nVidia has licensed SLI to Intel for LGA1156 so there's no need for the nf200 chip anyway, much like X58. Very few X58 boards had the nf200 chip. Their desktop chipset business seems to be dying.

But how is that licensing fee collected? Based on the wording of the Nvidia Core i5 and i7 SLI licensing press release, I'm going to guess that the fee is paid by the individual board manufacturers for each SLI board to be certified:


FOR IMMEDIATE RELEASE:

SANTA CLARA, CA - AUGUST 10, 2009 - NVIDIA Corporation today announced that Intel Corporation, and the world's other leading motherboard manufacturers, including ASUS, EVGA, Gigabyte, and MSI, have all licensed NVIDIA® SLI® technology for inclusion on their Intel® P55 Express Chipset-based motherboards designed for the upcoming Intel® Core™ i7 and i5 processor in the LGA1156 socket. As a result, customers who purchase a validated P55-based motherboard and Core i7 or Core i5 processor when available can equip their PCs with any combination of NVIDIA® GeForce® GPUs, including Quad SLI, for the ultimate visual computing experience.
http://www.nvidia.com/object/io_1249876351744.html


If this is the case, then the money that would otherwise have gone to Nvidia for SLI licensing will be given to Lucid for their Hydra chip instead.
 

yh125d

Diamond Member
Dec 23, 2006
6,886
0
76
Originally posted by: thilan29
Originally posted by: Creig
Originally posted by: ItsAlive
This is probably the reasoning behind Nvidia disabling PhysX when ATI cards are present.

I'm wondering how long it will be before Nvidia releases a driver that will prevent their cards from working in SLI on this board like they did with ULi. After all, this is going to cut into their nf200 chip sales.

nVidia has licensed SLI to Intel for LGA1156 so there's no need for the nf200 chip anyway, much like X58. Very few X58 boards had the nf200 chip. Their desktop chipset business seems to be dying.

You'll still see NF200 on P55, due to the lack of available lanes. The CPU will have 16 lanes and 4 on the P55. There should be a handful of models with it to enable 16/16 and 16/16/8
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: yh125d
Originally posted by: thilan29
Originally posted by: Creig
Originally posted by: ItsAlive
This is probably the reasoning behind Nvidia disabling PhysX when ATI cards are present.

I'm wondering how long it will be before Nvidia releases a driver that will prevent their cards from working in SLI on this board like they did with ULi. After all, this is going to cut into their nf200 chip sales.

nVidia has licensed SLI to Intel for LGA1156 so there's no need for the nf200 chip anyway, much like X58. Very few X58 boards had the nf200 chip. Their desktop chipset business seems to be dying.

You'll still see NF200 on P55, due to the lack of available lanes. The CPU will have 16 lanes and 4 on the P55. There should be a handful of models with it to enable 16/16 and 16/16/8

???? The X58 natively supports up to 32 PCI-E 2.0 lanes while the IOH provides an additional 4 lanes.

www.intel.com/Assets/PDF/prodb.../x58-product-brief.pdf

And those lanes can be configured from dual 16x to quad 8x. The only reason you would need more than 36 is if you want tri or quad SLI at 16x each instead of 8x each. The performance hit going from 16x PCI-E 2.0 to 8x is minor.

http://www.tomshardware.com/re...xpress-2.0,1915-9.html
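
For rough numbers, here's a quick back-of-the-envelope sketch in Python, assuming PCIe 2.0's ~500 MB/s per lane per direction and ignoring protocol overhead:

# PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding, which works out
# to roughly 500 MB/s per lane, per direction.
PCIE2_MBPS_PER_LANE = 500

def slot_bandwidth_gbps(lanes):
    """Theoretical one-way bandwidth of a slot, in GB/s."""
    return lanes * PCIE2_MBPS_PER_LANE / 1000.0

for lanes in (16, 8, 4):
    print("x%-2d: %.1f GB/s per direction" % (lanes, slot_bandwidth_gbps(lanes)))

# x16 = 8.0, x8 = 4.0, x4 = 2.0 -- halving the link still leaves plenty of
# headroom for a single 2009-era card, hence the small performance hit.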
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Originally posted by: Creig

The performance hit going from 16x PCI-E 2.0 to 8x is minor.

http://www.tomshardware.com/re...xpress-2.0,1915-9.html

the data in your link only represents 8x vs 16x in a single card config using a little 3850. it's just not enough to put the PCIe bus to the test, and I think single-card performance is irrelevant to the bandwidth discussion because both P55 and X58 have more than enough lanes to run one GPU at full speed. The question involves more than one card.

http://www.tweaktown.com/artic...performance/index.html

This is data from two Gigabyte boards, a P45 and an X48, each with a pair of 4850s, and with the half-speed slots on the P45 the difference is quite measurable, particularly at 2560. I'm not saying the difference is big, but it is not negligible either, and as GPUs get faster you will take a greater loss. I'll be interested to see that data in a few weeks as a P55 vs X58 article is imminent. We will be able to see how both SLI and Crossfire suffer with 8 lanes, presumably with something a bit faster than a 4850.

I'm just saying, if you spend $200 on the CPU and $200 on the GPU, you may as well spend the extra $50 on the board to put you in range of X58, though you can bet P55 parts are going to be marked up considerably for the first few weeks of their lives.
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
Originally posted by: Creig
Originally posted by: yh125d
Originally posted by: thilan29
Originally posted by: Creig
Originally posted by: ItsAlive
This is probably the reasoning behind Nvidia disabling PhysX when ATI cards are present.

I'm wondering how long it will be before Nvidia releases a driver that will prevent their cards from working in SLI on this board like they did with ULi. After all, this is going to cut into their nf200 chip sales.

nVidia has licensed SLI to Intel for LGA1156 so there's no need for the nf200 chip anyway, much like X58. Very few X58 boards had the nf200 chip. Their desktop chipset business seems to be dying.

You'll still see NF200 on P55, due to the lack of available lanes. The CPU will have 16 lanes and 4 on the P55. There should be a handful of models with it to enable 16/16 and 16/16/8

???? The X58 natively supports up to 32 PCI-E 2.0 lanes while the IOH provides an additional 4 lanes.

www.intel.com/Assets/PDF/prodb.../x58-product-brief.pdf

And those lanes can be configured from dual 16x to quad 8x. The only reason you would need more than 36 is if you want tri or quad SLI at 16x each instead of 8x each. The performance hit going from 16x PCI-E 2.0 to 8x is minor.

http://www.tomshardware.com/re...xpress-2.0,1915-9.html

Re-read his quote, he's referring to the P55, not X58. P55 only has 16 lanes on the on-die controller, IIRC.
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: aka1nas
Re-read his quote, he's referring to the P55, not X58. P55 only has 16 lanes on the on-die controller, IIRC.

Ah, gotcha. Sorry, too many X5X series out there. I'm starting to think one and type another. Yes, the P55 only has 16 lanes.

Hmmm... Perhaps since the Hydra sends individual objects to each video card to be rendered rather than both cards receiving identical data for patterned workload splitting, the overall bandwidth requirements will be lower.

Besides, the NF200 doesn't really add PCI-E lanes, it expands them, doesn't it? i.e. it takes an 8x link and turns it into a 16x link? I wonder if the Hydra performs a similar function.
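
Here's a toy sketch of that "expands rather than adds" idea, assuming purely for illustration that the bridge behaves like a generic PCIe fan-out switch: each downstream slot can be wired x16, but simultaneous transfers still share the single x16 link back to the host.

# Illustrative model only: one x16 upstream link, two x16 downstream slots.
UPSTREAM_GBPS = 8.0      # PCIe 2.0 x16 back to the CPU/chipset, one direction
DOWNSTREAM_SLOTS = 2     # both electrically x16 behind the switch

def per_card_gbps(cards_transferring):
    """Host bandwidth each card sees while that many cards transfer at once."""
    active = max(1, min(cards_transferring, DOWNSTREAM_SLOTS))
    return UPSTREAM_GBPS / active

print(per_card_gbps(1))  # 8.0 -- a lone card gets the whole upstream link
print(per_card_gbps(2))  # 4.0 -- concurrent transfers split it

So the slots look like x16 to the cards, but the total pipe to the rest of the system doesn't grow.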
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
I'm totally with BFG on this one. I would be surprised if Lucid didn't face a fight with NV. We'll see...

The other question I have about Hydra is whether the chip itself has to support the Direct3D version I want to use. i.e. if I have a DX10 setup running on Hydra and DX11 cards come out, would I also need to get a new DX11 Hydra chip, or is that just a driver upgrade?
 

yh125d

Diamond Member
Dec 23, 2006
6,886
0
76
Originally posted by: Creig
Originally posted by: yh125d
Originally posted by: thilan29
nVidia has licensed SLI to Intel for LGA1156 so there's no need for the nf200 chip anyway, much like X58. Very few X58 boards had the nf200 chip. Their desktop chipset business seems to be dying.

You'll still see NF200 on P55, due to the lack of available lanes. The CPU will have 16 lanes and 4 on the P55. There should be a handful of models with it to enable 16/16 and 16/16/8

???? The X58 natively supports up to 32 PCI-E 2.0 lanes while the IOH provides an additional 4 lanes.

www.intel.com/Assets/PDF/prodb.../x58-product-brief.pdf

And those lanes can be configured from dual 16x to quad 8x. The only reason you would need more than 36 is if you want tri or quad SLI at 16x each instead of 8x each. The performance hit going from 16x PCI-E 2.0 to 8x is minor.

http://www.tomshardware.com/re...xpress-2.0,1915-9.html

X58 without a bridge chip allows 16/16, and 16/8/8. There are a handful with the bridge for those who want 16/16/8 or 16/16/16

P55/1156, which I referred to, will likely have models with NF200. More models than there are X58/NF200 boards, likely. The P55 with 4 lanes and 16 lanes through the CPU only allows 16x and 8/8. There will probably be many boards with a bridge chip (be it Hydra or NF200) to enable 16/8/8 and 16/16

Ah, gotcha. Sorry, too many X5X series out there. I'm starting to think one and type another. Yes, the P55 only has 16 lanes.

There's only two, and one isn't even released yet.

Whereas we have P43, P45, G41, G43, G45, Q45, X48...
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: yh125d
There's only two, and one isn't even released yet.

Whereas we have P43, P45, G41, G43, G45, Q45, X48...

Sorry. Like I said, I was thinking one and started typing about another. After a while, all the various chipset designations and specifications start to blur together.
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: alyarb
presumably far lower. no AFR. no buffer sharing.

I never really thought about it before, but the Hydra could be the answer to a unified memory pool. With it, you are no longer storing identical data in both cards. So two 512MB video cards actually would behave like a single 1GB X2 card on a Hydra based motherboard. Perhaps even better if their near 100% efficiency claims are true and overall bandwidth consumption is reduced.
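
Purely as an illustration of that reasoning (the numbers and the clean 50/50 split are invented; Lucid hasn't published how Hydra manages card memory):

# Hypothetical per-frame working set, in MB (made-up figure for illustration).
CARD_VRAM_MB = 512
working_set_mb = 700    # textures + geometry + buffers for the scene

# AFR/SFR today: every card holds a full copy of the working set.
afr_per_card_mb = working_set_mb

# Idealized object-level split: each card holds only the half it renders.
split_per_card_mb = working_set_mb / 2

print(afr_per_card_mb > CARD_VRAM_MB)    # True  -- two 512MB cards still spill
print(split_per_card_mb > CARD_VRAM_MB)  # False -- the pair behaves more like 1GB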
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
if the GPUs are working on totally separate jobs, it does not make sense that they should share much, or any data. anything that is shared will have to be sent out of the 150+ GB/s GDDR5 bus over to a tiny 8GB/s PEG link. it's better to totally segregate the GPUs based on their available compute/memory performance and then composite the results at the very end.
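
Taking those two figures at face value, the ratio is the whole argument:

# Ballpark 2009 numbers, straight from the figures above.
LOCAL_GDDR5_GBPS = 150.0   # on-card memory bandwidth
PEG_X16_GBPS = 8.0         # PCIe 2.0 x16, one direction
PEG_X8_GBPS = 4.0

print(LOCAL_GDDR5_GBPS / PEG_X16_GBPS)   # ~18.8x slower to cross the slot at x16
print(LOCAL_GDDR5_GBPS / PEG_X8_GBPS)    # ~37.5x at x8

Any data one GPU has to fetch from the other card pays that penalty, which is why compositing only at the very end makes sense.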
 

imported_Shaq

Senior member
Sep 24, 2004
731
0
0
Originally posted by: Creig
Originally posted by: alyarb
presumably far lower. no AFR. no buffer sharing.

I never really thought about it before, but the Hydra could be the answer to a unified memory pool. With it, you are no longer storing identical data in both cards. So two 512MB video cards actually would behave like a single 1GB X2 card on a Hydra based motherboard. Perhaps even better if their near 100% efficiency claims are true and overall bandwidth consumption is reduced.

I dunno. This is sounding too good to be true. lol How much is this chip going to cost, anyway? It sounds like a $50+ chip if it will do all these things - at least at launch.
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: Shaq
Originally posted by: Creig
Originally posted by: alyarb
presumably far lower. no AFR. no buffer sharing.

I never really thought about it before, but the Hydra could be the answer to a unified memory pool. With it, you are no longer storing identical data in both cards. So two 512MB video cards actually would behave like a single 1GB X2 card on a Hydra based motherboard. Perhaps even better if their near 100% efficiency claims are true and overall bandwidth consumption is reduced.

I dunno. This is sounding too good to be true. lol How much is this chip going to cost, anyway? It sounds like a $50+ chip if it will do all these things - at least at launch.

It does sort of sound like the Holy Grail for video, doesn't it? I agree it seems too good to be true. And I imagine they'll have driver issues to work through in the beginning. But at least it's something new and interesting to discuss. :)