
Dual GT300 card in works

Hauk

Platinum Member
Tasty Fud for your dining enjoyment! 🙂


"Since ATI is planning a dual ?RV870? card, something that we know as Gemini dual-chip card, you won't really be surprised to learn that Nvidia is also working hard on its dual GT300 card.

We?ve confirmed this but we are not sure if Nvidia can launch this card in November or it will also show the demo in late Q4 2009 and start shipping it in Q1 2010, probably in January.

Nvidia?s card is DirectX 11 compatible and its built to run parallel processing CUDA, DirectX compute or OpenCL. This is what Nvidia values apart of being the fastest graphics boy on the block.

The goal is to have something faster than dual RV870 card and to try to keep the Geforce GTX295 performance crown. Nvidia hints it will be the fastest and most powerful thing ever."
source


 
Those cards are going to be insanely fast. I can't wait. lol And yes these cards can dominate Crysis. I don't see any games coming that can even remotely use all that power. We will be set for 2-3 years with one of these.
 
Originally posted by: Shaq
Those cards are going to be insanely fast. I can't wait. lol And yes these cards can dominate Crysis. I don't see any games coming that can even remotely use all that power. We will be set for 2-3 years with one of these.

Unless, once and for all, this proves that it hasn't been the hardware; it has been the engine.


Anyway, that card is going to be a freaking beast. Something tells me nV is going to surprise people and launch the single GPUs in November. They are being strangely quiet. 😛

ARMA 2 is chewing up my 2X280s, so I need more power!
 
Originally posted by: OCguy
Originally posted by: Shaq
Those cards are going to be insanely fast. I can't wait. lol And yes these cards can dominate Crysis. I don't see any games coming that can even remotely use all that power. We will be set for 2-3 years with one of these.

Unless, once and for all, this proves that it hasn't been the hardware; it has been the engine.


Anyway, that card is going to be a freaking beast. Something tells me nV is going to surprise people and launch the single GPUs in November. They are being strangely quiet. 😛

ARMA 2 is chewing up my 2X280s, so I need more power!

I hope this rumor is true. Mo powuh is always good.

Hopefully Nvidia will price it in the $500-600 range and not go hog wild on us.
 
Originally posted by: OCguy
Originally posted by: Shaq
Those cards are going to be insanely fast. I can't wait. lol And yes these cards can dominate Crysis. I don't see any games coming that can even remotely use all that power. We will be set for 2-3 years with one of these.

Unless, once and for all, this proves that it hasn't been the hardware; it has been the engine.


Anyway, that card is going to be a freaking beast. Something tells me nV is going to surprise people and launch the single GPUs in November. They are being strangely quiet. 😛

ARMA 2 is chewing up my 2X280s, so I need more power!

I wouldn't say Crysis's engine is poorly coded, although I'd agree with you that the engine was definitely the biggest culprit in it being a hardware killer. I think it's more that it was coded using obsolete methods that had been working well up until then, but that were bound to hit a wall sooner or later. It's like Intel's old P4 architecture: decently powerful when first introduced due to high clock speeds, but it soon hit that power/thermal wall and forced Intel to innovate, giving us Conroe.

Crysis's texture and lighting design, despite being DX10, was still very much rooted in old-school thinking: bigger textures, more memory devoted to shadows. The engine had nothing revolutionary about the way it processed and rendered things, and had little in the way of efficiency improvements. Ironically, DX10 was envisioned to improve PERFORMANCE over DX9, not so much to ENHANCE the actual graphics. Crysis tried to create a render engine that was most likely beyond the scope of DX10, and ended up with a behemoth that chewed up power like Intel's NetBurst, with comparatively low returns in real-world performance.
 
Even a blind squirrel gets a nut every once in a while. It's pretty easy to predict a dual card in the future, as they are pretty common now.
 
Originally posted by: OCguy
Originally posted by: Shaq
Those cards are going to be insanely fast. I can't wait. lol And yes these cards can dominate Crysis. I don't see any games coming that can even remotely use all that power. We will be set for 2-3 years with one of these.

Unless, once and for all, this proves that it hasn't been the hardware; it has been the engine.


Anyway, that card is going to be a freaking beast. Something tells me nV is going to surprise people and launch the single GPUs in November. They are being strangely quiet. 😛

ARMA 2 is chewing up my 2X280s, so I need more power!

ARMA 2 seems less efficient than Crysis in its current state. I hear a lot of that is due to disk thrashing, and SSDs eliminate the low framerates. If we could get a dual GT300 for $600, that would be cheap considering what we have now. I think they may be bottlenecked by the PCIe 2.0 slots though. Can you imagine quad-SLI with these things? LOL
 
Originally posted by: Shaq
I think they may be bottlenecked by the PCIe 2.0 slots though.

I highly doubt that.

I don't think even PCIe 2.0 is really necessary yet over PCIe 1.1. As long as a graphics solution can operate with data that is stored within its local video frame buffer memory, a video card should operate close to its maximum performance, even if the PCI Express link width is limited to x8. I haven't seen any game push more than 1GB of memory usage, and new cards should have more than that onboard, I would imagine.

Also, if this was the case, the P55 chipset would be a complete failure since it will only do 8x/8x.

You may have seen very poor performance from the P35 chipset (i.e. PCIe 1.0/1.1) in CrossFire. That's only because it operates at 16x/4x, which severely limits the performance of the 2nd card. But, say, a 975 chipset with PCIe 1.1 will probably be very close to PCIe 2.0 performance in CF. I haven't been able to find benches that prove otherwise, but it would be interesting to see.
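To put rough numbers on that argument, here is a minimal back-of-the-envelope sketch. All the figures (frame rate, per-frame traffic, link bandwidths) are illustrative assumptions, not measurements; the point is only that a VRAM-resident working set barely touches the bus.

```python
# Back-of-the-envelope sketch: if the working set fits in local VRAM, per-frame
# PCIe traffic is tiny compared to any link's bandwidth. All figures below are
# illustrative assumptions, not measurements.

PCIE_BW_GBS = {               # approximate usable bandwidth per direction
    "PCIe 1.1 x16": 4.0,
    "PCIe 2.0 x8":  4.0,
    "PCIe 2.0 x16": 8.0,
}

fps = 60
per_frame_commands_mb = 5     # assumed draw calls, constants, small uploads
spilled_textures_mb = 0       # 0 while everything stays resident in VRAM

def required_bw_gbs(fps, mb_per_frame):
    """Sustained host-to-GPU bandwidth the game would need, in GB/s."""
    return fps * mb_per_frame / 1024.0

need = required_bw_gbs(fps, per_frame_commands_mb + spilled_textures_mb)
for link, bw in PCIE_BW_GBS.items():
    print(f"{link}: need {need:.2f} GB/s of {bw:.1f} GB/s ({100 * need / bw:.0f}% used)")

# Re-run with spilled_textures_mb = 200 (working set no longer fits) and the
# narrower links suddenly matter - the "GPU has to wait for data" case.
```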
 
Originally posted by: OCguy
ARMA 2 is chewing up my 2X280s, so I need more power!

AFAIK ARMA 2 is not scaling very well with multi-GPU setups currently; newer driver releases should probably sort it out. It has the exact same scaling issue that Crysis did, oddly enough.

Rumours stated that the G300 is gonna be huge and a power sucker; I can't imagine how much power two of those GPUs would suck. It may need two 8-pin power connectors, hehe. But it is quite unusual for nVidia to rush out a dual-GPU solution like this at launch; it seems that nVidia is fearing ATi.

"The goal is to have something faster than a dual RV870 card and to try to keep the GeForce GTX 295 performance crown. Nvidia hints it will be the fastest and most powerful thing ever."

That hint also shows that their single G300 isn't faster than the GTX 295, and rumours also stated that a single RV870 is faster than the HD4870X2; it would mean that this graphics war is gonna be closer than it is now.
 
Originally posted by: RussianSensation
As long as a graphics solution can operate with data that is stored within its local video frame buffer memory, a videocard should operate close to its maximum performance, even if the PCI Express link width is limited to x8.
This is correct. You want the least traffic through PCIe while gaming. PCIe comes into play when the GPU has to wait for data. This can be caused by several factors, such as a lack of on-board memory, bad drivers/coding, sub-optimal motherboard designs, etc.

Higher PCIe bandwidth will improve performance under such circumstances, but the improvement still won't get you to ideal gameplay anyway.
 
Originally posted by: evolucion8
Originally posted by: OCguy
ARMA 2 is chewing up my 2X280s, so I need more power!

AFAIK ARMA 2 is not scaling very well with multi-GPU setups currently; newer driver releases should probably sort it out. It has the exact same scaling issue that Crysis did, oddly enough.

Rumours stated that the G300 is gonna be huge and a power sucker; I can't imagine how much power two of those GPUs would suck. It may need two 8-pin power connectors, hehe. But it is quite unusual for nVidia to rush out a dual-GPU solution like this at launch; it seems that nVidia is fearing ATi.

"The goal is to have something faster than a dual RV870 card and to try to keep the GeForce GTX 295 performance crown. Nvidia hints it will be the fastest and most powerful thing ever."

That hint also shows that their single G300 isn't faster than the GTX 295, and rumours also stated that a single RV870 is faster than the HD4870X2; it would mean that this graphics war is gonna be closer than it is now.

I don't think anyone is fearing anyone else. Why wouldn't they want to keep the performance crown? I don't think that releasing an X2 card is out of any sort of fear. Both companies do this for a very niche market.

I wouldn't judge the next round on how one GPU compares to 2X GPUs of the previous generation. When the 4870 and 280 launched, they were both smoked by the 9800GX2. Driver improvements later narrowed this gap, but at launch you were better off with the previous gen's X2 card.

As far as being power hungry, I'm sure they will still have an idle that is as good as or better than ATi's due to their auto 2D downclocking. Die size means nothing to your average consumer, only to online forums where people try and look for any sort of negative about the "other guy's" product.

 
Originally posted by: evolucion8
Originally posted by: OCguy
ARMA 2 is chewing up my 2X280s, so I need more power!

AFAIK ARMA 2 is not scaling very well with multi-GPU setups currently; newer driver releases should probably sort it out. It has the exact same scaling issue that Crysis did, oddly enough.

Rumours stated that the G300 is gonna be huge and a power sucker; I can't imagine how much power two of those GPUs would suck. It may need two 8-pin power connectors, hehe. But it is quite unusual for nVidia to rush out a dual-GPU solution like this at launch; it seems that nVidia is fearing ATi.

"The goal is to have something faster than a dual RV870 card and to try to keep the GeForce GTX 295 performance crown. Nvidia hints it will be the fastest and most powerful thing ever."

That hint also shows that their single G300 isn't faster than the GTX 295, and rumours also stated that a single RV870 is faster than the HD4870X2; it would mean that this graphics war is gonna be closer than it is now.

I think that Nvidia knows that AMD's next-gen cards, and quite possibly the next AMD X2 card, will be out before the GT300, so they will almost surely lose the performance crown and then try to win it back with the dual GT300 card... not necessarily that the GT300 won't be faster than the GTX 295, just that they'll likely lose the crown to AMD before the launch of their X2 card. At least that's my guess.

I also think that Nvidia is being quiet because their cards are further away from launch than they'd like to admit; that's just my guess. That doesn't mean that they won't have something special at launch, but I take their being quiet more as an indication of slow progress on getting the hardware out. The last few times AMD was really quiet before a launch, I think we got the 2900XT and Phenom. 🙂
 
Originally posted by: OCguy
Die size means nothing to your average consumer, only to online forums where people try and look for any sort of negative about the "other guy's" product.

Whether they realize it or not, die size can mean plenty to the average consumer if a large die means lower yields/clock speeds and higher power consumption/prices.
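For a rough sense of why that feeds back into price, here is a minimal sketch using a simple Poisson defect-yield model. The wafer size, defect density and die areas are illustrative assumptions, not actual foundry or vendor figures; the point is only that a good-die count drops much faster than die area grows.

```python
# Rough sketch of why die size matters: with a simple Poisson defect model,
# yield falls off exponentially with die area, so a big die costs
# disproportionately more per good chip. All numbers are illustrative
# assumptions, not actual TSMC/NV/ATI figures.

import math

WAFER_DIAMETER_MM = 300
DEFECTS_PER_CM2 = 0.4          # assumed defect density

def dies_per_wafer(die_area_mm2):
    """Crude gross-die count, ignoring edge losses and scribe lines."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    return int(wafer_area / die_area_mm2)

def poisson_yield(die_area_mm2, d0=DEFECTS_PER_CM2):
    """Fraction of dies with zero defects: Y = exp(-A * D0), A in cm^2."""
    return math.exp(-(die_area_mm2 / 100.0) * d0)

for name, area_mm2 in [("small die (~260 mm^2)", 260),
                       ("large die (~576 mm^2)", 576)]:
    good = dies_per_wafer(area_mm2) * poisson_yield(area_mm2)
    print(f"{name}: ~{poisson_yield(area_mm2):.0%} yield, ~{good:.0f} good dies per wafer")
```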
 
Unfortunately, from an NV standpoint, they're trying to make a product that the chips weren't designed for, as has been the case in the past.
ATI is going with multiple chips for high-end performance, while NV is going with a huge single die that outperforms any single-die ATI card.

NV trying to force two onto a card every generation while still going for large dies kind of goes against their whole strategy (apart from their strategy of having the highest-performing single "card", at all costs).

At the end of the day, NV will still hold the performance crown, ATI will keep its strategy going and hopefully make it work even better, and consumers are the ones who win as the two sides compete with each other.
 
Originally posted by: Creig
Originally posted by: OCguy
Die size means nothing to your average consumer, only to online forums where people try and look for any sort of negative about the "other guy's" product.

Whether they realize it or not, die size can mean plenty to the average consumer if a large die means lower yields/clock speeds and higher power consumption/prices.

All of that is factored into the price and performance. If you have made the decision that the price and performance are up to your standards, it does not matter if it is two rats running around a wheel that make the card "tick."

It is much like the Zoners and the "native quad core" mantra regarding Kentsfield and Yorkie v Phenom.
 
Originally posted by: Lonyo
Unfortunately, from an NV standpoint, they're trying to make a product that the chips weren't designed for, as has been the case in the past.
ATI is going with multiple chips for high-end performance, while NV is going with a huge single die that outperforms any single-die ATI card.

NV trying to force two onto a card every generation while still going for large dies kind of goes against their whole strategy (apart from their strategy of having the highest-performing single "card", at all costs).

At the end of the day, NV will still hold the performance crown, ATI will keep its strategy going and hopefully make it work even better, and consumers are the ones who win as the two sides compete with each other.



ATi makes a dual card, it is "part of the plan." nV does it, and they are forcing it and are scared. 😕

Oh this place gets good around new launches. 😛
 
Originally posted by: RussianSensation
Originally posted by: Shaq
I think they may be bottlenecked by the PCIe 2.0 slots though.

I highly doubt that.

I don't think even PCIe 2.0 is really necessary yet over PCIe 1.1. As long as a graphics solution can operate with data that is stored within its local video frame buffer memory, a video card should operate close to its maximum performance, even if the PCI Express link width is limited to x8. I haven't seen any game push more than 1GB of memory usage, and new cards should have more than that onboard, I would imagine.

Also, if this was the case, the P55 chipset would be a complete failure since it will only do 8x/8x.

You may have seen very poor performance from the P35 chipset (i.e. PCIe 1.0/1.1) in CrossFire. That's only because it operates at 16x/4x, which severely limits the performance of the 2nd card. But, say, a 975 chipset with PCIe 1.1 will probably be very close to PCIe 2.0 performance in CF. I haven't been able to find benches that prove otherwise, but it would be interesting to see.

Some games showed improvement going from PCIe 1.0 to 2.0. The FSX and GTA4 improvements were rather large, and Tom's Hardware showed that a 280 was about 10% slower on 1.0, depending on the game. If a dual GT300 is almost 4 times faster than a 280, how would it not be limited on a 2.0 slot in some games? FSX and GTA4 are not even GPU limited, and that is possibly the reason. GPU-limited games may use local memory better and won't show as much of a difference as those two do, but there will still be a difference. Some people thought we would never need SATA 3.0 or USB 3.0 when the 2.0 versions came out, and now with SSDs there is a need for them. I don't think they would be pushing new, faster standards if there were absolutely no need for them.
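As a rough illustration of that scaling worry, here is a minimal sketch assuming per-frame PCIe traffic stays constant while the frame rate quadruples; the baseline frame rate, speedup and per-frame traffic figures are made up for illustration, not benchmarks.

```python
# Sketch of the scaling worry above: if per-frame PCIe traffic stays roughly
# constant but the GPU renders ~4x the frames per second, sustained PCIe demand
# grows ~4x too. All numbers below are assumptions for illustration only.

baseline_fps = 40              # assumed GTX 280 frame rate in some game
speedup = 4.0                  # rumoured dual GT300 vs. a single GTX 280
per_frame_traffic_mb = 12      # assumed uploads + command traffic per frame

def pcie_demand_gbs(fps, mb_per_frame):
    """Sustained PCIe bandwidth demand, in GB/s."""
    return fps * mb_per_frame / 1024.0

old = pcie_demand_gbs(baseline_fps, per_frame_traffic_mb)
new = pcie_demand_gbs(baseline_fps * speedup, per_frame_traffic_mb)
print(f"GTX 280:         ~{old:.2f} GB/s over PCIe")
print(f"dual GT300 (?):  ~{new:.2f} GB/s over PCIe, vs ~8 GB/s on a PCIe 2.0 x16 slot")

# Whether that headroom ever runs out depends entirely on how much data really
# crosses the bus each frame, which nobody outside NV and the game devs knows.
```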
 
Originally posted by: OCguy
Originally posted by: Creig
Originally posted by: OCguy
Die size means nothing to your average consumer, only to online forums where people try and look for any sort of negative about the "other guy's" product.

Whether they realize it or not, die size can mean plenty to the average consumer if a large die means lower yields/clock speeds and higher power consumption/prices.

All of that is factored into the price and performance. If you have made the decision that the price and performance are up to your standards, it does not matter if it is two rats running around a wheel that make the card "tick."

It is much like the Zoners and the "native quad core" mantra regarding Kentsfield and Yorkie v Phenom.

Yes, but the final street price of a video card is influenced by the prices of the components it takes to build it. So by the time the end consumer is looking at the card and is considering whether he wants one or not, the effect of the die cost has already been added to the price tag. If the cost is too high, the consumer may decide to opt for a lower priced model or one from another manufacturer.
 
Well, Nvidia usually puts a large gap between the launch of its single-GPU and multi-GPU cards, so this probably means they acknowledge that AMD is doing quite well with its X2 line and have decided to hurry up with their own X2 card as well.
 
Originally posted by: waffleironhead
Even a blind squirrel gets a nut every once in awhile. It's pretty easy to predict a dual card in the future, as they are pretty common now.

Originally posted by: ShadowOfMyself
Well, Nvidia usually puts a large gap between the launch of its single-GPU and multi-GPU cards, so this probably means they acknowledge that AMD is doing quite well with its X2 line and have decided to hurry up with their own X2 card as well.

In the past, NVIDIA has only done dual-GPU cards after a die shrink of the original launch core. This has been the case with the 7950GX2, 9800GX2, and GTX 295. So, if NVIDIA is coming out of the gate with a dual-chip card, it would be a significant change in strategy IMO.
 
Originally posted by: OCguy
Originally posted by: Lonyo
Unfortunately, from an NV standpoint, they're trying to make a product that the chips weren't designed for, as has been the case in the past.
ATI is going with multiple chips for high-end performance, while NV is going with a huge single die that outperforms any single-die ATI card.

NV trying to force two onto a card every generation while still going for large dies kind of goes against their whole strategy (apart from their strategy of having the highest-performing single "card", at all costs).

At the end of the day, NV will still hold the performance crown, ATI will keep its strategy going and hopefully make it work even better, and consumers are the ones who win as the two sides compete with each other.



ATi makes a dual card, it is "part of the plan." nV does it, and they are forcing it and are scared. 😕

Oh this place gets good around new launches. 😛

http://images.anandtech.com/re...ideo/ATI/4850/amd3.png
http://images.anandtech.com/re...4800/diecomparison.png
http://images.anandtech.com/re...ideo/ATI/4850/amd4.png
 


So dual GPUs aren't in the nV strategy, yet they have had one for how many generations now? And they are planning one for this gen?

It seems like their strategy is to have both the top dual-GPU card and the top single-GPU card. It has been working so far, and it sounds like it may work for GT300 as well.

Your AMD slides show nothing other than that AMD plans on using two GPUs to compete at the high end, but who is arguing against that?
 
Originally posted by: Lonyo
...
At the end of the day, NV will still hold the performance crown, ATI will keep its strategy going and hopefully make it work even better, and consumers are the ones who win as the two sides compete with each other.

Only time will tell.
 
Originally posted by: OCguy


So dual GPUs aren't in the nV strategy, yet they have had one for how many generations now? And they are planning one for this gen?

It seems like their strategy is to have both the top dual-GPU card and the top single-GPU card. It has been working so far, and it sounds like it may work for GT300 as well.

Your AMD slides show nothing other than that AMD plans on using two GPUs to compete at the high end, but who is arguing against that?

Actually, kinda like the "of AMD and Intel, which did the IMC first?" arguments (answer: Intel did, with the 386SL's IMC), the story of who did a dual-GPU single card first between AMD and NVIDIA is that NVIDIA did it first:

NVIDIA Single Card, Multi-GPU: GeForce 7950 GX2

Date: June 5th, 2006

Today marks the launch of the first GPU maker sanctioned single card / multi-GPU solution for the consumer market in quite some time. Not since Quantum3D introduced the Obsidian X24 have we seen such a beast (which, interestingly enough, did actual Scan Line Interleaving on a single card). This time around NVIDIA's flavor of SLI and PCIe are being used to connect two boards together for a full featured multi-GPU solution that works like a single card as far as the end user is concerned. No special motherboard is required, the upcoming 90 series driver will support the card, and there is future potential for DIY quad SLI. There is still a ways to go until NVIDIA releases drivers that will support quad SLI without the help of a system vendor, but they are working on it.

http://www.anandtech.com/video/showdoc.aspx?i=2769

Fast forward nearly 18 months to the HD3870 X2 release and you find Anand writing that both Nvidia and AMD were discussing the prospects of dual-GPU single cards.

ATI Radeon HD 3870 X2: 2 GPUs 1 Card, A Return to the High End

Date: January 28th, 2008

At the end of last year both AMD and NVIDIA hinted at bringing back multi-GPU cards to help round out the high end. The idea is simple: take two fast GPUs, put them together on a single card and sell them as a single faster video card.

AMD is the first out of the gates with the Radeon HD 3870 X2, based on what AMD is calling its R680 GPU.

http://www.anandtech.com/video/showdoc.aspx?i=3209

That doesn't exactly paint a story of Nvidia following in AMD's footsteps: NV was first with the GX2, then AMD came out with their X2 and said "we'll be doing this from now on," while NV was working on the same thing in parallel but had a release timeline staggered behind AMD's.
 
Originally posted by: OCguy


So dual GPUs arent in the nV strategy, yet they have had one for how many generations now? And they are planning one for this gen?

It seems like thier strategy is to have both the top dual GPU card and the top single GPU card. It has been working so far, and it sounds like it may work for GT300 as well.

Your AMD slides show nothing other than AMD plans on using 2 GPUs to compete in the high end, but who is arguing against that?

Nvidia's dual cards have always been lacking in finesse and design compared to ATI's. Comparing Nvidia's multi-GPU cards to ATI's is like comparing the Pentium D (two cores meant for single-core operation taped together) to the Core 2 Duo (two cores that were very clearly built to work in pairs). Sure, they both work, but it's clear which one was thought through and planned from the ground up and which one wasn't.

ATI makes small, cheap-to-produce, efficient chips that perform well standalone and are intended from the ground up to work easily in a multi-GPU design (further support for this is that Sapphire made a 4850X2 without any help from ATI at all; they didn't take long to do it and have never had supply problems either). Nvidia's are not. Nvidia makes big, expensive single GPUs meant to be as fast as possible at any cost, and if by chance ATI has something faster (which has been the case lately thanks to their dual-GPU offerings), they tape two together in a week (figuratively).

This is exceedingly obvious when you look at the price, design, and launch dates of the 4870X2 and the GTX 295. Sure, the GTX 295 is 5% faster, but it is, what, $140 more expensive?

Analogy: the 4870X2 is a pickup truck; the GTX 295 is two sports cars linked together with a bungee cord, with the sole intent of letting the company say it makes the vehicle with the most horsepower.

Nvidia does accomplish what they set out to do (achieve the performance crown), but to suggest that they put the same level of effort and planning into it from the get-go as ATI did is absurd.
 