New 9800gx2/9800gtx/HD3870 X2 benchies


superbooga

Senior member
Jun 16, 2001
333
0
0
9800GT and 9800GTX are likely to be priced to reflect their performance, and seriously, what does ATI have in the $200 - $300 price segment? R700 is unlikely to be much faster, if at all, than a single G92, and it'll probably arrive at the same time GT200 does.
 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
Originally posted by: superbooga
9800GT and 9800GTX are likely to be priced to reflect their performance, and seriously, what does ATI have in the $200 - $300 price segment? R700 is unlikely to be much faster, if at all, than a single G92, and it'll probably arrive at the same time GT200 does.

:confused:

Are you kidding? An 8800GT is only around 15-20% faster than an HD 3870, if that, and an 8800GTS 512MB is maybe 25% faster. R700 will likely be 2x RV670 in terms of performance, especially if the leaked specs are true... it is a next gen card, another league of performance compared to G92 which is no faster than the old 8800GTX.

High-end R700 will be 2xRV770, so it's the same situation... if a single RV770 is 2X RV670, then the HD 4870 X2 will be 2X the 3870 X2... nVidia will have a tough time competing with that with a single 65nm GPU and G92 isn't playing in the same league.

TBH I think ATI is going to win the next round with R700 > GT200. Perhaps GT200 will be a great chip, but nVidia doesn't have nearly the die space that ATI has and they will be competing with a single GPU against a 2-GPU X2 card. nVidia is faster now, but G92 is 70% larger than RV670. On 55nm, ATI has a lot of room to make their next gen card... with RV670 @ 192mm^2, they could easily have 50% more transistors and maintain an acceptable die size. nVidia is at 324mm^2 already, how much larger can they go on 65nm? If they go with 50% more transistors, they're at ~500mm^2, which is insane for a chip.
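(A quick back-of-the-envelope check of those die-size numbers, as a rough Python sketch; it assumes die area scales roughly linearly with transistor count on the same process, which is only approximately true:)

# Die-area arithmetic for the figures quoted above (areas in mm^2).
# Assumes area scales roughly linearly with transistor count on the same process.
rv670_area = 192.0  # 55nm
g92_area = 324.0    # 65nm

print(f"G92 vs RV670: {g92_area / rv670_area - 1:.1%} larger")   # ~68.8%, i.e. the '70%' above
print(f"RV670 + 50% transistors: ~{rv670_area * 1.5:.0f} mm^2")  # ~288 mm^2
print(f"G92 + 50% transistors:   ~{g92_area * 1.5:.0f} mm^2")    # ~486 mm^2, the '~500' above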
 

Syntax Error

Senior member
Oct 29, 2007
617
0
0
Considering much of this hoohah is coming out of benchmarks such as 3DMark, which we all know to be very "flexible" in its scoring methodology, especially in regard to the CPU score, I'm going to take this "data" with a boulder of salt. It's like the HD2900 3DMark scores versus the 8800 series: the HD2900s are the cards that have set the world records, but you don't see them destroying 8800s in actual in-game framerates, do you?
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: Syntax Error
Considering much of this hoohah is coming out of benchmarks such as 3DMark, which we all know to be very "flexible" in its scoring methodology, especially in regard to the CPU score, I'm going to take this "data" with a boulder of salt. It's like the HD2900 3DMark scores versus the 8800 series: the HD2900s are the cards that have set the world records, but you don't see them destroying 8800s in actual in-game framerates, do you?

That's not what the guys on AMDzone said... :)
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: Extelleron
Originally posted by: superbooga
9800GT and 9800GTX are likely to be priced to reflect their performance, and seriously, what does ATI have in the $200 - $300 price segment? R700 is unlikely to be much faster, if at all, than a single G92, and it'll probably arrive at the same time GT200 does.

:confused:

Are you kidding? An 8800GT is only around 15-20% faster than an HD 3870, if that, and an 8800GTS 512MB is maybe 25% faster. R700 will likely be 2x RV670 in terms of performance, especially if the leaked specs are true... it is a next gen card, another league of performance compared to G92 which is no faster than the old 8800GTX.

High-end R700 will be 2xRV770, so it's the same situation... if a single RV770 is 2X RV670, then the HD 4870 X2 will be 2X the 3870 X2... nVidia will have a tough time competing with that with a single 65nm GPU and G92 isn't playing in the same league.

TBH I think ATI is going to win the next round with R700 > GT200. Perhaps GT200 will be a great chip, but nVidia doesn't have nearly the die space that ATI has and they will be competing with a single GPU against a 2-GPU X2 card. nVidia is faster now, but G92 is 70% larger than RV670. On 55nm, ATI has a lot of room to make their next gen card... with RV670 @ 192mm^2, they could easily have 50% more transistors and maintain an acceptable die size. nVidia is at 324mm^2 already, how much larger can they go on 65nm? If they go with 50% more transistors, they're at ~500mm^2, which is insane for a chip.

Anyone else here remember the talk before R600 launched? How "if the rumored specs are true it will destroy the GTX"?

It's really hard to guess what the performance of future cards will be and plan your purchases on guesses.

http://bp0.blogger.com/_4qvKWy...Y2Bk/s1600-h/rv770.jpg

They've doubled the TMUs according to this (good) but kept the 16 ROPs (bad), the VLIW arch (bad), and the reliance on multiple cores for the high end (bad).

I don't get "should be twice as fast" out of these specs- why do you?

http://www.siliconmadness.com/...ts-specifications.html

a dual GPU card will be the flagship.

Since the RV670 did about 528Gflops at 825MHz, this would mean that the new architecture isn't particularly tweaked, at least on the theoretical throughput front.

If this is indeed the form of the R700 architecture, it seems nothing more than a tweaked RV670, which was a tweaked R600.

The HD 3870X2 scales relatively well in some cases, but a single card with the same power will wipe the floor with the RV770X2.

If they're going with CF tech to compete again, it will be a long year for AMD. MultiCard tech should mainly be used when a single core can't bring that level of performance.
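(For reference, that 528 GFLOPS figure falls straight out of the shader count and clock; a minimal Python sketch, assuming the usual 2 FLOPs per stream processor per clock for a multiply-add:)

# Theoretical shader throughput in GFLOPS: SPs * FLOPs-per-SP-per-clock * clock (GHz).
def gflops(sps, clock_ghz, flops_per_sp=2):  # 2 = one multiply-add per clock
    return sps * flops_per_sp * clock_ghz

print(f"RV670 @ 825MHz:  {gflops(320, 0.825):.0f} GFLOPS")   # 528, the figure quoted above
print(f"RV770 @ 1050MHz: {gflops(480, 1.050):.0f} GFLOPS")   # 1008, per the rumored specs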

 

MarcVenice

Moderator Emeritus
Apr 2, 2007
5,664
0
0
It doesn't really matter who comes out on top, as long as there's some good competition. But I think it would be better for competition if AMD were to dominate with R700, because that would be a clear signal for Nvidia to get their stuff up to snuff again, considering the lackluster improvements we've been seeing over the past 18 months, from G80 to G92. Btw, twice as fast is probably too much; not too long ago, rumours had it that R700 would be 50% faster than RV670, though.
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: MarcVenice
It doesn't really matter who comes out on top, as long as there's some good competition. But I think it would be better for competition if AMD were to dominate with R700, because that would be a clear signal for Nvidia to get their stuff up to snuff again, considering the lackluster improvements we've been seeing over the past 18 months, from G80 to G92.
Btw, twice as fast is probably too much; not too long ago, rumours had it that R700 would be 50% faster than RV670, though.

I don't think it works that way- the stuff we see now is based on decisions made years ago. It takes too long to develop chips for them to be reactionary.

G80>G90 is the same as any other product cycle:
1. Introduce core
2. Refine production process, and/or shrink die
3. Introduce core

From what I can tell the only thing that really varies is the length of time between core introductions.
 

angry hampster

Diamond Member
Dec 15, 2007
4,232
0
0
www.lexaphoto.com
Originally posted by: BFG10K
3DMark or not, I really don't have much hope for the 9800 GTX to be honest. To me it looks like nothing more than a respin in order to support Tri-SLI.

I agree. I really wonder how well it'll be supported with the new cards.
 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
Originally posted by: nRollo
Anyone else here remember the talk before R600 launched? How "if the rumored specs are true it will destroy the GTX"?

It's really hard to guess what the performance of future cards will be and plan your purchases on guesses.

http://bp0.blogger.com/_4qvKWy...Y2Bk/s1600-h/rv770.jpg

They've doubled the TMUs according to this (good) but kept the 16 ROPs (bad), the VLIW arch (bad), and the reliance on multiple cores for the high end (bad).

I don't get "should be twice as fast" out of these specs- why do you?

http://www.siliconmadness.com/...ts-specifications.html

a dual GPU card will be the flagship.

Since the RV670 did about 528Gflops at 825MHz, this would mean that the new architecture isn't particularly tweaked, at least on the theoretical throughput front.

If this is indeed the form of the R700 architecture, it seems nothing more than a tweaked RV670, which was a tweaked R600.

The HD 3870X2 scales relatively well in some cases, but a single card with the same power will wipe the floor with the RV770X2.

If they're going with CF tech to compete again, it will be a long year for AMD. MultiCard tech should mainly be used when a single core can't bring that level of performance.

This isn't the same as with R600. R600 was a new architecture and nobody could know how it would perform; everyone expected it to be faster because ATI had always had the faster cards since the 9700 Pro... why would they expect anything else? We all know ATI screwed up with R600, but there is nothing wrong with the architecture. The problems were that the chip couldn't hit the clockspeeds it needed on the 80nm process (because of heat issues), that it lacked sufficient texture power, and that it had no dedicated AA hardware.

With R700, we know pretty much what to expect - this isn't a completely new architecture, it's a refinement of R600/RV670. What exactly has been changed in terms of architecture is unknown, but we do know from the raw specifications how it will perform.

How am I getting "twice as fast?" Look at the chart you provided. According to that, RV770 has 480SP clocked at 1050MHz.

1.05GHz * 480 SPs * 2 = 1008 GFLOPS
0.775GHz * 320 SPs * 2 = 496 GFLOPS
RV770 = 2.03X RV670

In terms of texture power:

1.05GHz * 32 TMUs = 33.6 GTexels/s
0.775GHz * 16 TMUs = 12.4 GTexels/s
RV770 = 2.71X RV670

So RV770 has over 2X the shading power of RV670 and close to 3X the texture power. Texture power is a huge bottleneck of R600-based parts and if RV770 has 2.7X the texture power, this will improve performance greatly. You're also talking about 35% more pixel power and 2X the memory bandwidth of RV670.

This is also assuming that R700 isn't faster clock-for-clock, which it almost certainly will be. One thing that is likely to appear in R700 is dedicated AA hardware, another bottleneck of R600 that reduced performance.

Multi-GPU is going to be the way to go, whether you like it or not. If you want GPUs to continue to be 2X as fast every year, then you need to have more than one die involved on high-end cards. It's simply not economical to have a chip that is 500-600mm^2 in size, and having one chip for both the mainstream and the high end reduces R&D and design time.

Crossfire scales very well in virtually every situation except DX10 at this point; ATI's DX10 drivers need refinement in general, and I'm sure that this will be improved by the time that R700 comes about. ATI is clearly going the multi-GPU route and this will force them to improve their drivers.

In most cases HD 3870 X2 is a good deal faster than the 8800GTX, and HD 4870 X2 will be 2X+ faster than the 3870 X2. GT200 will have to be around 2.0-2.5X G80/G92 in terms of performance if it wants to keep up.
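(Running the same rumored numbers through one quick Python script ties the shader, texture, and pixel figures together; a rough sketch, and the RV770 clocks and unit counts are still only rumor:)

# Throughput ratios implied by the rumored RV770 specs vs the shipping RV670 (775MHz).
rv670 = {"clock": 0.775, "sps": 320, "tmus": 16, "rops": 16}
rv770 = {"clock": 1.050, "sps": 480, "tmus": 32, "rops": 16}  # rumored, per the chart above

def ratio(unit):
    return (rv770[unit] * rv770["clock"]) / (rv670[unit] * rv670["clock"])

print(f"shader:  {ratio('sps'):.2f}x")   # ~2.03x
print(f"texture: {ratio('tmus'):.2f}x")  # ~2.71x
print(f"pixel:   {ratio('rops'):.2f}x")  # ~1.35x -- the '35% more pixel power'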

 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: nRollo
Originally posted by: MarcVenice
It doesn't really matter who comes out on top, as long as there's some good competition. But I think it would be better for competition if AMD were to dominate with R700, because that would be a clear signal for Nvidia to get their stuff up to snuff again, considering the lackluster improvements we've been seeing over the past 18 months, from G80 to G92.
Btw, twice as fast is probably too much; not too long ago, rumours had it that R700 would be 50% faster than RV670, though.

I don't think it works that way- the stuff we see now is based on decisions made years ago. It takes too long to develop chips for them to be reactionary.

G80>G90 is the same as any other product cycle:
1. Introduce core
2. Refine production process, and/or shrink die
3. Introduce core

From what I can tell the only thing that really varies is the length of time between core introductions.

actually that is quite true
it's unbelievable that some people here really expect a next-gen successor to G80 to be out before now; most of the people with this kind of expectation have no clue about the planning that goes into creating a new GPU architecture.

G80 development began when NV30 was "launched" in September 2002, and it was developed from 'scratch'.

Most of these architectures take 4-5 years to develop :p

it doesn't matter what R700 is like ... the G9X variants are already set in stone ... and at THIS point all the AMD engineers can do is 'fiddle' with the R700 clockspeeds ... soon even that will be too late to change.

 

Tempered81

Diamond Member
Jan 29, 2007
6,374
1
81
Originally posted by: Extelleron
This isn't the same as with R600. R600 was a new architecture and nobody could know how it would perform; everyone expected it to be faster because ATI had always had the faster cards since the 9700 Pro... why would they expect anything else? We all know ATI screwed up with R600, but there is nothing wrong with the architecture. The problems were that the chip couldn't hit the clockspeeds it needed on the 80nm process (because of heat issues), that it lacked sufficient texture power, and that it had no dedicated AA hardware.

With R700, we know pretty much what to expect - this isn't a completely new architecture, it's a refinement of R600/RV670. What exactly has been changed in terms of architecture is unknown, but we do know from the raw specifications how it will perform.

How am I getting "twice as fast?" Look at the chart you provided. According to that, RV770 has 480SP clocked at 1050MHz.

1.05GHz * 480 SPs * 2 = 1008 GFLOPS
0.775GHz * 320 SPs * 2 = 496 GFLOPS
RV770 = 2.03X RV670

In terms of texture power:

1.05GHz * 32 TMUs = 33.6 GTexels/s
0.775GHz * 16 TMUs = 12.4 GTexels/s
RV770 = 2.71X RV670

So RV770 has over 2X the shading power of RV670 and close to 3X the texture power. Texture power is a huge bottleneck of R600-based parts and if RV770 has 2.7X the texture power, this will improve performance greatly. You're also talking about 35% more pixel power and 2X the memory bandwidth of RV670.

This is also assuming that R700 isn't faster clock-for-clock, which it almost certainly will be. One thing that is likely to appear in R700 is dedicated AA hardware, another bottleneck of R600 that reduced performance.

Multi-GPU is going to be the way to go, whether you like it or not. If you want GPUs to continue to be 2X as fast every year, then you need to have more than one die involved on high-end cards. It's simply not economical to have a chip that is 500-600mm^2 in size, and having one chip for both the mainstream and the high end reduces R&D and design time.

Crossfire scales very well in virtually every situation except DX10 at this point; ATI's DX10 drivers need refinement in general, and I'm sure that this will be improved by the time that R700 comes about. ATI is clearly going the multi-GPU route and this will force them to improve their drivers.

In most cases HD 3870 X2 is a good deal faster than the 8800GTX, and HD 4870 X2 will be 2X+ faster than the 3870 X2. GT200 will have to be around 2.0-2.5X G80/G92 in terms of performance if it wants to keep up.

Hi, you seem to know quite a bit about ATI's RV770 specs. Do you think they will implement quad PCBs in CrossFire X with 8 GPUs?

I'm interested in finding out if we will be able to run 4x 4870 X2's. Also, do you think 2x 4870 X2's in CrossFire X would be faster than 4x 4870's in CrossFire X? (4 GPUs vs. 4 GPUs)
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
sure ... i can answer it

... eventually ... it appears to be AMD's goal ... no matter *what* nvidia brings out, they just need to make *more* cores work well together
-much cheaper process

is this not the very beginnings of Fusion?
:confused:
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
If the 4870 X2 is similar to the 3870 X2 then it will have slightly lower clocks than the 4870, so it should be slightly slower in quad-fire than 4x 4870 would be. They should be extremely close, however. AMD should push for octo-fire just to force people onto the Spider platform/790FX, but that will probably be a programming nightmare. Their resources are better spent refining their current offerings imho.

I would be very surprised if R700 is anywhere close to 2x RV670. The clocks look nice and the arch improvements are strong, too, but this is just a refinement of a current offering. I think that if we get a 50% improvement then we should consider ourselves lucky. Even that will be enough to trounce the 9xxx series, but GT200 is just around the corner... ever since November the GPU world has gotten very interesting, and this year is shaping up to be a lot of fun for all of us!
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: Extelleron
This isn't the same as with R600. R600 was a new architecture and nobody could know how it would perform; everyone expected it to be faster because ATI had always had the faster cards since the 9700 Pro... why would they expect anything else? We all know ATI screwed up with R600, but there is nothing wrong with the architecture. The problems were that the chip couldn't hit the clockspeeds it needed on the 80nm process (because of heat issues), that it lacked sufficient texture power, and that it had no dedicated AA hardware.

With R700, we know pretty much what to expect - this isn't a completely new architecture, it's a refinement of R600/RV670. What exactly has been changed in terms of architecture is unknown, but we do know from the raw specifications how it will perform.
It still lacks dedicated AA hardware AFAIK, and you didn't mention the VLIW shader arch that limits shader efficiency to varying degrees in every game.

Originally posted by: Extelleron
How am I getting "twice as fast?" Look at the chart you provided. According to that, RV770 has 480SP clocked at 1050MHz.

1.05GHz * 480 SPs * 2 = 1008 GFLOPS
0.775GHz * 320 SPs * 2 = 496 GFLOPS
RV770 = 2.03X RV670

In terms of texture power:

1.05GHz * 32 TMUs = 33.6 GTexels/s
0.775GHz * 16 TMUs = 12.4 GTexels/s
RV770 = 2.71X RV670

So RV770 has over 2X the shading power of RV670 and close to 3X the texture power. Texture power is a huge bottleneck of R600-based parts and if RV770 has 2.7X the texture power, this will improve performance greatly. You're also talking about 35% more pixel power and 2X the memory bandwidth of RV670.

Hmmm. So the 8800GT with 112 Stream Processors should be much faster than the 9600GT with 64 Stream Processors?
Looks to me like that line of reasoning doesn't always pay off- still not seeing your "twice as fast".

Originally posted by: Extelleron
This is also assuming that R700 isn't faster clock-for-clock, which it almost certainly will be. One thing that is likely to appear in R700 is dedicated AA hardware, another bottleneck of R600 that reduced performance.
I've seen no reason to expect this at all, and don't think a core change on this level should be expected. Do you have a link to some reason why you think this is coming, or is this just "wishful thinking"?

Originally posted by: Extelleron
Multi-GPU is going to be the way to go, whether you like it or not. If you want GPUs to continue to be 2X as fast every year, then you need to have more than one die involved on high-end cards. It's simply not economical to have a chip that is 500-600mm^2 in size, and having one chip for both the mainstream and the high end reduces R&D and design time.
I don't think anyone outside of AMD agrees with this, and I may be the world's biggest fan of multi GPU. People buying high end and paying high dollars don't care about how cheap it is for companies to make chips- they care about performance and image quality. Even where multi GPU works (and it doesn't always) scaling is all over the board. Single GPUs offer consistent performance at all games- there's never a situation where "oops, my $400+ card is now performing like a $200 card, but I'm still glad the maker saved money developing and producing it".

Multi GPU has two uses and two uses only, in my (and most people's) opinion:
1. Getting a level of performance/image quality single GPU can't offer.
2. Secondarily, possibly as a way to get a level of performance cheaper if you buy another card down the road for a bargain price.

Originally posted by: Extelleron
Crossfire scales very well in virtually every situation except DX10 at this point; ATI's DX10 drivers need refinement in general, and I'm sure that this will be improved by the time that R700 comes about. ATI is clearly going the multi-GPU route and this will force them to improve their drivers.
This isn't true at all; Crossfire (and SLi, for that matter) doesn't offer across-the-board uniform scaling. CF on a card is still hobbled by not being able to disable it, the lack of ability to create or edit profiles, and the lack of ability to force any multi-render mode other than one type of AFR.

We have no way whatsoever of knowing what future drivers will bring- if you had asked me three years ago, when Crossfire launched, whether ATi would allow you the driver flexibility SLi has had since day one, I would have said they would. Didn't turn out that way.

As this multiGPU path was decided years ago, not yesterday, why haven't these changes been implemented yet? And why do you think they will be now?

Originally posted by: Extelleron
In most cases HD 3870 X2 is a good deal faster than the 8800GTX, and HD 4870 X2 will be 2X+ faster than the 3870 X2. GT200 will have to be around 2.0-2.5X G80/G92 in terms of performance if it wants to keep up.
1. Is it?
2. Again, you're only speculating that it will be two times faster, and even basing that on hopes I haven't seen published by any reputable source.
3. What GT200 will "have to be" is a matter of perspective. Even if it turned out 15% slower than a R700 most people would still buy a GT200 because they wouldn't have the issues of variable scaling, no scaling, AFR lag, waiting for driver profiles, etc.
4. It is two GPUs competing with one, and it did launch well over a year after the GTX, so if it couldn't beat the GTX sometimes AMD would have a whole lot to worry about.

This isn't looking like it's going to be a win for AMD, Extelleron, just another interesting alternative.


Note: this post should in no way be construed as me claiming any future NVIDIA product will be faster than a R700. This post is about me disagreeing with some rather large "leaps of faith" in regard to assuming R700s will be two times faster than 3870X2s, and how multi GPU of low end cores can be better for end users.

 

Tempered81

Diamond Member
Jan 29, 2007
6,374
1
81
isn't 3870x2 Crossfire faster than any SLI rig right now?

I've noticed all the world records are made on 3870x2 crossfire rigs. I'd say amd is on top of their game.
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: jaredpace
isn't 3870x2 Crossfire faster than any SLI rig right now?

I've noticed all the world records are made on 3870x2 crossfire rigs. I'd say amd is on top of their game.

No, 3870X2 Crossfire rigs are only fastest at 3DMark06 synthetic benchmarks as far as I know. Can you link us to a game benchmark where AMD is on top of 3 Way SLi?

I don't think there are any.

AMD is leveraging a fairly low end GPU (beneath 8800U, 8800GTX, 8800GTS, and sometimes even 9600GT) into some highend performance by stacking them.

The problem with this is you can stack the higher GPU as well, and sometimes it only takes two of them to beat four AMD GPUs.

 

Tempered81

Diamond Member
Jan 29, 2007
6,374
1
81
so you're saying 3870X2 CrossfireX is faster in 3dmark06, but 8800Ultra Tri-SLI is faster in games? I'd like to see a bench as much as you.

EDIT: Well, you're correct (kinda); they tested CrossFire X with a 2.6GHz Phenom and Tri-SLI with a 3.33GHz QX9650.


Games: COD4, UT3, and bioshock.

Settings (resolution, AA, AF, quality):
Unreal Tournament 3 - 2560x1600, 0x AA, 16x AF, highest in-game
Bioshock - 2560x1600, 0x AA, 1x AF, highest in-game
Call of Duty 4 - 2560x1600, 4x AA, 16x AF, highest in-game

rigs:

3 evga 8800 ultra in Tri - SLI with 3.33ghz qx9650

VS.

2 ati 3870X2 in CrossFire X with 2.6ghz amd phenom

performance in FPS:

UT3:
nvidia:143
ati:114


Bioshock:
nvidia:103
ati:92


Cod4:
nvidia:100
ati:93


If I could just find out what percentage faster a QX9650 @ 3.33GHz is over a Phenom @ 2.6GHz, I could do a relative comparison.
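(Even without normalizing for the CPU difference, the raw gaps from those numbers work out like this; a quick Python sketch:)

# Raw FPS ratios from the figures above (Tri-SLI on the QX9650 vs CrossFire X on the Phenom).
results = {"UT3": (143, 114), "Bioshock": (103, 92), "CoD4": (100, 93)}
for game, (nvidia_fps, ati_fps) in results.items():
    print(f"{game}: Tri-SLI ahead by {nvidia_fps / ati_fps - 1:.0%} as tested")
# ~25%, ~12%, ~8% -- but with different CPUs, so not an apples-to-apples GPU comparison.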

I bet these 3 games look incredible at that resolution and framerate.

nvidia:
http://www.anandtech.com/video/showdoc.aspx?i=3183&p=3

ati:
http://www.anandtech.com/video/showdoc.aspx?i=3232&p=3



Funny point I'd like to share:

My friend, who knows nothing about computers and naming conventions, likes to laugh at me when I talk about crap like this comparison to him. I say things like, "hey matt, i was just checking out this ATI Radeon HD3870X2 CrossFire-X Versus Nvidia GeForce 8800 Ultra TRI-SLI comparison." He responds like, "uhhh what? ...Geforce Niner Ti 5000 Gigabyter lan blaster 6000XL promo card, What?" (Just rehashing a bunch of jargon he has heard me talk about over time). It sounds so funny coming from someone who doesn't know what a graphics card is and It cracks me up every time.

 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: jaredpace
so you're saying 3870X2 CrossfireX is faster in 3dmark06, but 8800Ultra Tri-SLI is faster in games? I'd like to see a bench as much as you.

Sure- check out my thread here with benches from this site:

Quad Fire compared to 3 Way SLi Benches

Now, some have said this isn't a fair comparison because the Quad Fire was tested with a Phenom at 2.6GHz, and the 3 Way SLi was tested on a 3.3Ghz Quad.

However, Anand himself said:
When testing four GPUs we tend to run at very high, GPU bound, resolutions making the choice of CPU much less of an issue. If anything, AMD was hurting itself by forcing Phenom upon us but it figured that any performance deficit due to CPU choice wouldn't be too great thanks to the GPU-limited nature of most of the tests we'd be running.

Also, in ATs test of 3 way Sli, only Crysis showed a significant difference between a 2.6 and 3.3GHz processor.
We feel kind of silly even entertaining this question, but yes, if you want to build a system with three 8800 Ultras, you don't need to spend $1000 on a CPU. You can get by with a 2.66GHz chip just fine.


Last- for whatever reason- the R6XX GPUs have always been faster at 3DMark06, and never close in games. That is a given; they just kick ass in 3DMark.

 

Tempered81

Diamond Member
Jan 29, 2007
6,374
1
81
hah nice post, check my edit! :)

Still would like to see the tests run on the same CPU. Pretty lame that AMD wanted a Phenom on that test.


Forgot to add this part too:
price (for video cards):

Ati: $860.00

Nvidia: $1840.00


But hey, $1000 or $2000 on video cards! What's the difference? :)
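(Fold those prices into the FPS numbers above and the value picture looks like this; a rough Python sketch:)

# Frames-per-dollar using the card prices here and the FPS figures quoted above.
price = {"nvidia": 1840.0, "ati": 860.0}
fps = {"UT3": {"nvidia": 143, "ati": 114},
       "Bioshock": {"nvidia": 103, "ati": 92},
       "CoD4": {"nvidia": 100, "ati": 93}}
for game, scores in fps.items():
    ratio = (scores["ati"] / price["ati"]) / (scores["nvidia"] / price["nvidia"])
    print(f"{game}: CrossFire X delivers {ratio:.1f}x the FPS per dollar")
# Roughly 1.7-2.0x the frames per dollar, at a lower absolute framerate.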
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: jaredpace
hah nice post, check my edit! :)

Still would like to see the tests run on the same CPU. Pretty lame that AMD wanted a Phenom on that test.


Forgot to add this part too:
price (for video cards):

Ati: $860.00

Nvidia: $1840.00


But hey, $1000 or $2000 on video cards! What's the difference? :)

Heh- It's all good- I would've been pretty shocked if they could wrangle 3 way beating performance out of RV670s.


In my eyes, the main problem with the Quadfire vs 3 way SLi question (or even high-end 2 way SLi) is that when the scaling isn't happening, a single RV670 is about the LAST thing you'd want running a 25x16 monitor, while a GTX or Ultra can handle it pretty well.

Oh yeah- and on the $1000 vs $2000 thing:

Only people living in vans by the river or refrigerator boxes can't afford $2000 for video cards!

LOL- I used to love to say that as a joke here. Seriously, obviously 3 way Ultras is a "rare air, well off buyer" solution and can only be compared to QuadFire on a "flagship to flagship" basis. (these rigs wouldn't be competing except for the people who don't care about $1000)

 

Tempered81

Diamond Member
Jan 29, 2007
6,374
1
81
Originally posted by: nRollo


In my eyes, the main problem with the Quadfire vs 3 way SLi question (or even high-end 2 way SLi) is that when the scaling isn't happening, a single RV670 is about the LAST thing you'd want running a 25x16 monitor, while a GTX or Ultra can handle it pretty well.

Good point. When it comes down to the single cards due to lack of scaling, the Ultra will pwn it. And what's up with Crysis on 3 and 4 GPUs and 2, 3, and 4 core CPUs? It hardly scales at all (much better on SLI); is it just THAT demanding?

I'm starting to think it would be difficult to program a worse performing game.

Crysis Vs. That 3dmark06 cpu test.
 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
Originally posted by: jaredpace
Hi, you seem to know quite a bit about ATI's RV770 specs. Do you think they will implement quad PCBs in CrossFire X with 8 GPUs?

I'm interested in finding out if we will be able to run 4x 4870 X2's. Also, do you think 2x 4870 X2's in CrossFire X would be faster than 4x 4870's in CrossFire X? (4 GPUs vs. 4 GPUs)

Right now I'm just speculating, like everyone else, on rumored specifications of R700 found here: http://bp0.blogger.com/_4qvKWy...Y2Bk/s1600-h/rv770.jpg

I don't think you're going to see the ability to run 4x 4870 X2's... scaling is bad enough from 2 -> 4 GPUs and I'm sure 4 -> 8 would be even worse. There's just no need for that right now, anyway; I don't think even Crysis would be a problem at 2560x1600 with 2 HD 4870 X2's in Crossfire or 2x GT200 in SLI.

As for 4x HD 4870 vs 2x HD 4870 X2... based on the rumored specs, 4x HD 4870 would be slightly better but almost certainly more expensive. Rumored specs put both the X2 and the 4870 at 1GB of GDDR5, but with the 4870 X2 that would be 512MB per GPU.

 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
Addressing Rollo again...

It still lacks dedicated AA hardware AFAIK, and you didn't mention the VLIW shader arch that limits shader efficiency to varying degrees in every game.

That's not really a limitation we have to worry about. AMD's R600 shader architecture is perhaps not the best for today's games, but that does not matter in terms of relative performance between R600 and R700.

Hmmm. So the 8800GT with 112 Stream Processors should be much faster than the 9600GT with 64 Stream Processors?

The 8800GT vs 9600GT is a special case, and there are several reasons for it:

-Lack of sufficient memory bandwidth: The 8800GT isn't able to utilize its full shading power because it is limited by memory bandwidth in many games... it has nearly the shading power of the GTX but doesn't perform close to it, and that is why. The 9600GT has the exact same memory bandwidth as the 8800GT.

-Lack of sufficient ROP performance: The 8800GT has only 16 ROPs, which also seem to limit performance. For comparison, the GTX has 24 ROPs and the G80 GTS 20 ROPs. The 9600GT actually has more pixel performance than the 8800GT; both have 16 ROPs, but the 9600GT runs at a higher core clock.

-nVidia's shady trick with LinkBoost on nVidia chipsets gives the 9600GT an automatic "overclock" via an increase in PCI-E bus frequency, which gives it an advantage as well.

-New driver: 9600GT reviews have been using FW 174.xx for the 9600GT and earlier drivers for the 8800GT.

-Higher clockspeeds: The 9600GT is clocked around 10% higher than the 8800GT.

That's why the 9600GT is nearly as fast - it isn't some kind of magic, it's a few clever tricks by nVidia and the 8800GT is just really bottlenecked by other parts of the card, not the shaders.
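(A rough paper comparison makes the point; this Python sketch uses the commonly listed reference clocks - 600/1500MHz for the 8800GT, 650/1625MHz for the 9600GT, 900MHz GDDR3 on a 256-bit bus for both - which are assumptions, not figures from this thread:)

# Paper specs behind the bottleneck argument above (reference clocks assumed, not from the thread).
cards = {
    #           SPs, shader GHz, ROPs, core GHz, mem GHz (GDDR3), bus width (bits)
    "8800GT": (112, 1.500, 16, 0.600, 0.900, 256),
    "9600GT": ( 64, 1.625, 16, 0.650, 0.900, 256),
}
for name, (sps, sclk, rops, cclk, mclk, bus) in cards.items():
    shader = sps * sclk          # relative shader throughput
    fill = rops * cclk           # pixel fill rate, GPixels/s
    bw = mclk * 2 * bus / 8      # memory bandwidth, GB/s (DDR, bus width in bytes)
    print(f"{name}: shader {shader:.0f}, pixel fill {fill:.1f} GP/s, bandwidth {bw:.1f} GB/s")
# ~60% more shader throughput on the 8800GT, but identical bandwidth and slightly less
# pixel fill -- which is exactly the bottleneck being described here.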

I've seen no reason to expect this at all, and don't think a core change on this level should be expected. Do you have a link to some reason why you think this is coming, or is this just "wishful thinking"?

You have no reason to expect that ATI might tweak the core in a new GPU? This happens every time, 9800->X800 was similar but you saw some changes, heck X850->X1800 was similar but with architectural tweaks. nVidia went from the 6800->7800 and improved IPC as well. Heck, R600 - > RV670 improved IPC in a number of situations especially w/ DX10 and that's with 1/2 the memory bandwidth. R700 isn't going to be R600 with 1.5X SPs; it's going to be the same basic architecture, but there will be differences. That I guarantee you.

As for dedicated AA hardware, that's my own speculation on what AMD should be doing. I don't know what tweaks will be in R700, but I can guarantee you 99% they will be there.

I don't think anyone outside of AMD agrees with this, and I may be the world's biggest fan of multi GPU. People buying high end and paying high dollars don't care about how cheap it is for companies to make chips- they care about performance and image quality. Even where multi GPU works (and it doesn't always) scaling is all over the board. Single GPUs offer consistent performance at all games- there's never a situation where "oops, my $400+ card is now performing like a $200 card, but I'm still glad the maker saved money developing and producing it".

It doesn't matter what enthusiasts think about it - that's what is coming. It's not a question of if, it's a question of when. nVidia isn't going to be manufacturing a chip 500-600mm^2 to compete against 2 AMD chips that are <300mm^2 in size. Their purpose is to make money and that's not the way to do it.

This isn't true at all; Crossfire (and SLi, for that matter) doesn't offer across-the-board uniform scaling. CF on a card is still hobbled by not being able to disable it, the lack of ability to create or edit profiles, and the lack of ability to force any multi-render mode other than one type of AFR.

Crossfire and SLI provide great scaling - other than in DX10 and games that nobody cares about. Show me a popular game in which Crossfire doesn't scale well (outside of Crysis DX10, which I have acknowledged). In a few games the HD 3870 X2 is limited by the RV670 core, but it is almost never limited by Crossfire itself.

Anandtech themselves said that the HD 3870 X2 was the most seamless multi-GPU card they ever had and it worked just like a single-GPU card. Crossfire is just as good as SLI these days; it started off bad with X800 CF being horrible, but you can't say that SLI is better any longer.

 

superbooga

Senior member
Jun 16, 2001
333
0
0
nVIDIA has been far better at actually making money than ATI since the X800/6800 era. In the past, ATI was constantly late and overdesigned its chips, possibly offering better performance for the future but not enough to justify the cost. That is not a good thing in the semiconductor industry. You have to factor in what's feasible and the costs required, then build the best possible chip given those constraints. ATI has never really been able to perfect that.
 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
Originally posted by: superbooga
nVIDIA has been far better at actually making money than ATI since the X800/6800 era. In the past, ATI was constantly late and overdesigned its chips, possibly offering better performance for the future but not enough to justify the cost. That is not a good thing in the semiconductor industry. You have to factor in what's feasible and the costs required, then build the best possible chip given those constraints. ATI has never really been able to perfect that.

Regardless, in the past buying an ATI card has always been a better decision than buying an nVidia card. Just look at the X1950 Pro vs the 7900GS... they seemed like even competitors a year or so ago at the $199 price point... now the X1950 Pro is close to 2X faster in games like Crysis.