Leaked ATI S.I. 6870 benchmark


MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
For me the biggest decision would be my CPU. I'm pretty much the opposite of toyota: I believe GPU >>>>> CPU for gaming. But with that said, I've had the same motherboard since the Radeon 2900XT days and the same CPU since I got a 4870. Now I'm using this combo with my 5870. If I got a 68xx, would my CPU still be up to the task? I imagine it has to start limiting me sooner or later. A new card I can swing; a new mobo/DDR3/CPU/card combo, the wife won't go for. :/

It seems like AMD's GPU division is pulling ahead of the CPU division by a good amount.

Anyway, these are the dilemmas that new video cards bring. :)
I love these kinds of "dilemmas" :D. Seriously though, I subscribe to the "make up for architecture with clock speed" school of thought, so crank it higher if you can. But a PhII 940 @ 3.6GHz should be more than capable of cranking out 60FPS+ in any game coming down the pipe (otherwise, good luck to those folks running stock); I wouldn't worry at all.
I just want to point out that while Nvidia selling cards at a loss might be true, we really don't know. We have guesses by Charlie, etc., but I wouldn't consider Charlie an unbiased source (I do like his site and articles, but one must take into account that he likes AMD and Intel and dislikes Nvidia; at least he's honest about it).

We don't really have any evidence that Nvidia cards are selling at a loss. For all we know they are just selling their cards for less profit than AMD, or their partners are making less profit than AMD's partners. We just aren't privy to that info. So many factors that we just don't know. However, Nvidia's financial situation isn't that bad. Yeah, their stock price, I know. But they aren't in ANY danger of going under.

As enthusiasts, we need both nvidia and AMD (and Intel) to do well.
I agree, NVIDIA isn't in a good position right now, but they need years of this kind of performance before people are right to run around yelling about bankruptcy/doomsday/the sky is falling.

The line up for the 6xxx series is very interesting, especially because it seems to have shifted towards the high-end. Call this wicked speculation, but I'm betting AMD is placing a lot of faith in Fusion. Things are going to get very interesting in the next couple of months, that's for sure :cool:.
 

Will Robinson

Golden Member
Dec 19, 2009
1,408
0
0
The only person mouthing off about bankruptcies is Wreckage, who swears AMD is gone by 2012 in a recent thread.
I'll enjoy reminding him of that. :whistle:
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,250
136
The only person mouthing off about bankruptcies is Wreckage, who swears AMD is gone by 2012 in a recent thread.
I'll enjoy reminding him of that. :whistle:

In this instance Wreckage could be correct... Do you watch TV? Did you ever see that special, I think on the Discovery Channel, about 2012 possibly being the end of the world? At least according to the Mayans :D
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I agree, NVIDIA isn't in a good position right now, but they need years of this kind of performance before people are right to run around yelling about bankruptcy/doomsday/the sky is falling.

The reason I created this thread regarding Fermi was to try to get an understanding of why Fermi was a natural progression of GPU design for NV given their product lines. In fact, if it wasn't for Fermi, then NV would be in trouble. NV clearly understands it can't always be #1 in the discrete space every generation. However, their cash cow is professional graphics. This is where 31% of the company's profits come from compared to just 19% for discrete graphics.

In addition, it's pretty clear that NV needs to sell graphics cards as multi-purpose solutions to other/new markets where there is growth. After all, the desktop market is neither the fastest growing segment, nor the most lucrative in terms of margins.

With the Fermi architecture, NV is even better positioned in the FAR more lucrative professional graphics segment, as well as the GPGPU compute space. It has already taken another 2.2% of market share away from ATI in this segment. The execution of the Fermi architecture was poor, no doubt. However, that is not to say that the architecture itself is poor. You get 8x the geometry performance of GT200 and superior tessellation. This will likely be a great foundation for Fermi II, etc.

"The technology and market research firm reports that the industry shipped 795 thousand of workstations worldwide in Q2, resulting in sequential growth of 9.6% and a year-over-year increase of 32%."

If you understand what NV's business lines are, then it becomes pretty obvious that unlike ATI, which primarily focuses on computer/notebook graphics, NV needs a broader design in order to maintain/grow in other areas where ATI hardly competes. As a result you get a bloated GPU that is large and expensive to produce, hardly a sound strategy if you want the best GPU for 3D graphics.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
So basically nVidia is trying to cover the CPU and graphics markets with an all-in-one solution, while AMD doesn't feel the urge to lead in GPGPU performance because it may hurt its CPU sales; a more balanced platform approach (racks of GPUs and CPUs, or Fusion) may suit them better.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
So basically nVidia is trying to cover the CPU and graphics markets with an all-in-one solution, while AMD doesn't feel the urge to lead in GPGPU performance because it may hurt its CPU sales; a more balanced platform approach (racks of GPUs and CPUs, or Fusion) may suit them better.

More or less. Basically, starting with GT200, NV has been moving along a path divergent from ATI's: their goal is to make the GPU a "universal device" which can be used to accelerate various applications, not just games. Hence the push for programmability.

"We expect that operating system will eventually treat CPU cores and GPU cores as peers - scheduling work for both types of cores. However moving work from a CPU core to a GPU core or vice versa would be very sub-optimal," stressed the chief scientist of Nvidia.

I think NV needs to promote the GPU as the most important part of a computer if they see any growth for the GPU as a device in the future (I mean, eventually, once we get into realistic graphics with ray tracing in 20-30+ years, the GPU's differentiating factors may be eroded, since probably any GPU will be able to do realistic graphics). This is why NV needs the GPU to be much more than a device for graphics.

I can definitely see AMD wanting to compete in the GPGPU space at some point, but it takes an enormous amount of marketing, software development, and relationship management. Given that AMD as a firm doesn't really have excess cash flow to throw at markets which are still in their infancy, such as the GPGPU space (financial institutions, hospitals/medical, military applications, etc.), they would probably rather re-invest any leftover cash flow from the ATI division straight back into their CPU unit, which has Intel to worry about.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
So basically nVidia is trying to cover the CPU and graphics markets with an all-in-one solution, while AMD doesn't feel the urge to lead in GPGPU performance because it may hurt its CPU sales; a more balanced platform approach (racks of GPUs and CPUs, or Fusion) may suit them better.

Yup. When you think about it, GPGPU competes with their CPU division in the HPC space. The problem I see them having in the future is that at some point their GPU will have to follow Nvidia's lead, or their CPUs risk losing valuable market share to Tesla in many HPC applications.

Intel appears to me to understand this, but I am guessing Larrabee contained way too many Nvidia patents. And Nvidia is suing the crap out of them right now over those. Once that is settled I expect Intel to enter that market with Larrabee. Now, whether or not that product can compete with Nvidia remains to be seen.

And while we talk about the costs associated with Fermi: building a single chip isn't the end of the world, as the costs are spread out over gaming and professional cards. And the professional cards bring in a poopload of profit.

Right now AMD is content delivering a product for gaming. Nvidia is content delivering a product for gaming and GPGPU. But I think AMD is being a bit shortsighted by not ironing out their GPGPU situation right now. It won't get any easier as Nvidia plows ahead with Tesla projects and gets CUDA into every nook and cranny.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
From the article, after translation:

Antilles 6990
Cayman XT 6970
Cayman PRO 6950
Barts XT 6870
Barts PRO 6850
Juniper XT 6770
Juniper LE 6750
Turks 66xx / 65xx
Caicos 63xx

Since all the photos we are seeing of the 6970 single-GPU flagship cards show an 8+6 pin power config, perhaps their dual-GPU 6990 will be using two Barts XT cores due to power considerations?

It just seems there is a chance the 6-series single-GPU flagship is going to be a big, hot, and power-hungry chip.

It will be really impressive if it delivers additional performance over the GTX 480 without being as much of a hog as the 480 was.
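To put rough numbers on the power consideration, here's a minimal sketch of the PCIe power-budget arithmetic (the 75W/75W/150W figures are the spec ceilings for the slot, 6-pin, and 8-pin connectors; actual card draws are a separate question):

Code:
# PCIe power budget per connector config (spec ceilings, not measured draws).
SLOT_W = 75    # PCIe x16 slot delivers up to 75 W
PIN6_W = 75    # each 6-pin PEG connector adds up to 75 W
PIN8_W = 150   # each 8-pin PEG connector adds up to 150 W

def board_ceiling(six_pin=0, eight_pin=0):
    """Maximum board power the spec allows for a given connector config."""
    return SLOT_W + six_pin * PIN6_W + eight_pin * PIN8_W

print(board_ceiling(six_pin=1, eight_pin=1))  # 8+6 pin: 300 W, the PCIe spec limit
# Two full Cayman-class chips would have to share that same 300 W budget,
# the way the 5970 squeezed in two downclocked Cypress chips, hence the
# case for a dual-Barts Antilles instead.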
 

zebrax2

Senior member
Nov 18, 2007
977
70
91
Yup. When you think about it, GPGPU competes with their CPU division in the HPC space. The problem I see them having in the future is that at some point their GPU will have to follow Nvidia's lead, or their CPUs risk losing valuable market share to Tesla in many HPC applications.

Intel appears to me to understand this, but I am guessing Larrabee contained way too many Nvidia patents. And Nvidia is suing the crap out of them right now over those. Once that is settled I expect Intel to enter that market with Larrabee. Now, whether or not that product can compete with Nvidia remains to be seen.

And while we talk about the costs associated with Fermi: building a single chip isn't the end of the world, as the costs are spread out over gaming and professional cards. And the professional cards bring in a poopload of profit.

Right now AMD is content delivering a product for gaming. Nvidia is content delivering a product for gaming and GPGPU. But I think AMD is being a bit shortsighted by not ironing out their GPGPU situation right now. It won't get any easier as Nvidia plows ahead with Tesla projects and gets CUDA into every nook and cranny.

AMD and Intel would probably use the GPU side of their upcoming processors to handle the GPGPU side of things in the future, and for the majority I think that would be enough. Only if you need the extra performance would you really need the power of a dedicated card. This is at least what I think they would do for the consumer market, so maybe it isn't that shortsighted at all.
 

Tempered81

Diamond Member
Jan 29, 2007
6,374
1
81
It will be really impressive if it delivers additional performance over the GTX 480 without being as much of a hog as the 480 was.

^This would be my guess. Cayman XT 10-15% faster than a GTX480, 5-10% faster than a 'GTX485'. Cayman XT might trail the 5970 by 5-10% and might pass it sometimes (poor x-fire performance, tessellation). However, I could still see the GTX480 beating Cayman in a few benches where ATI's driver support is struggling, such as Metro in DX11, Borderlands, or Far Cry 2 with 4x/8x AA, and of course all the PhysX games like Mafia 2, Batman, and others when setting physics to high, while in total having an average lead of 10-15% over the 480 across the board. With 6-pin+8-pin, under high stress load, I'd put Cayman at ~200-220 watts with a 225 watt TDP, and the GTX480 at ~230-260 watts with a 250 watt TDP. I cannot be sure if Cayman is 384-bit/1.5GB or 256-bit/1GB-2GB; still hoping for 384-bit. There's a GPU-Z shot showing the 6718 with 1250MHz memory and another showing it with 1600MHz memory... hehe.

Heat and noise would depend somewhat on the size & build quality of the chamber & heatsink blades, and the style & speed of the fan... but it looks quite similar to the 5870's cooler.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
however, I could still see the GTX480 beating Cayman in a few benches where ATI's current DX11 architecture is struggling, such as Metro in DX11, Borderlands, or Far Cry 2 with 4x/8x AA, and of course all the PhysX games like Mafia 2, Batman, and others when setting physics to high

Fixed. These aforementioned games run better on NV not because of drivers alone, but because of the architecture, just like some games run better on ATI hardware (BF:BC2, Crysis, etc.). Games like Metro 2033 and STALKER: CoP with God Rays and tessellation benefit from NV's more modern DX11 design (i.e., tessellation engines, GigaThread scheduler, and specifically NVIDIA GF100 series GPUs implement DirectX 11 four-offset Gather4 in hardware, greatly accelerating shadow mapping, ambient occlusion, and post-processing algorithms).

One of the major weaknesses in HD5000 series is DX11 performance (add to the games above Battleforge, Dirt 2, Just Cause 2, Lost Planet). With HD6000 series ATI is probably going to make massive improvements in all of these areas (SSAO, tessellation, DOF, geometry/particle performance, etc.) simply because it's going to be a new architecture catered specifically for DX11.

Also, I doubt that PhysX is going to run any faster on ATI cards (not that it matters cuz currently PhysX stinks).
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
Fixed. These aforementioned games run better on NV not because of drivers alone, but because of the architecture, just like some games run better on ATI hardware (BF:BC2, Crysis, etc.). Games like Metro 2033 and STALKER: CoP with God Rays and tessellation benefit from NV's more modern DX11 design (i.e., tessellation engines, GigaThread scheduler, and specifically NVIDIA GF100 series GPUs implement DirectX 11 four-offset Gather4 in hardware, greatly accelerating shadow mapping, ambient occlusion, and post-processing algorithms).

One of the major weaknesses in HD5000 series is DX11 performance (add to the games above Battleforge, Dirt 2, Just Cause 2, Lost Planet). With HD6000 series ATI is probably going to make massive improvements in all of these areas (SSAO, tessellation, DOF, geometry/particle performance, etc.) simply because it's going to be a new architecture catered specifically for DX11.

Also, I doubt that PhysX is going to run any faster on ATI cards (not that it matters cuz currently PhysX stinks).

If this leaked bench is accurate, the 6870 is faster than a 480 in tessellation, as you predict.

[image: leaked benchmark chart (uniy.jpg)]
 

Tempered81

Diamond Member
Jan 29, 2007
6,374
1
81
grooveriding: any way you can redo your Crysis bench from earlier in this thread on a single 480 with 64-bit checked? Thanks
 

Will Robinson

Golden Member
Dec 19, 2009
1,408
0
0
^This would be my guess. Cayman XT 10-15% faster than a GTX480, 5-10% faster than a 'GTX485'. Cayman XT might trail the 5970 by 5-10% and might pass it sometimes (poor x-fire performance, tessellation). However, I could still see the GTX480 beating Cayman in a few benches where ATI's driver support is struggling, such as Metro in DX11, Borderlands, or Far Cry 2 with 4x/8x AA, and of course all the PhysX games like Mafia 2, Batman, and others when setting physics to high, while in total having an average lead of 10-15% over the 480 across the board. With 6-pin+8-pin, under high stress load, I'd put Cayman at ~200-220 watts with a 225 watt TDP, and the GTX480 at ~230-260 watts with a 250 watt TDP. I cannot be sure if Cayman is 384-bit/1.5GB or 256-bit/1GB-2GB; still hoping for 384-bit. There's a GPU-Z shot showing the 6718 with 1250MHz memory and another showing it with 1600MHz memory... hehe.

Heat and noise would depend somewhat on the size & build quality of the chamber & heatsink blades, and the style & speed of the fan... but it looks quite similar to the 5870's cooler.
[image: total system power consumption chart (powerload.gif)]


Umm... don't you mean the GTX480 pulling over 400W at full load? :confused:
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
Umm... don't you mean the GTX480 pulling over 400W at full load? :confused:
That chart is for TOTAL system wattage. A GTX480 pulls around 250-275 watts by itself. Didn't the fact that 480 SLI is pulling 274 watts more than the single 480 give you a clue? lol
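If you want to sanity-check it, the back-of-the-envelope math looks like this (the system totals below are made-up placeholders; only the ~274W SLI-vs-single delta comes from the chart):

Code:
# Rough sketch: backing one card's draw out of total-system wattage.
# The two system totals are hypothetical placeholders; only the ~274 W
# delta between the SLI and single-card configs is taken from the chart.
system_single_480 = 420.0  # total W at the wall, one GTX 480 (placeholder)
system_sli_480 = 694.0     # total W at the wall, two GTX 480s in SLI (placeholder)

# Adding a second identical card is the only change between the two
# setups, so the delta approximates one card's draw (measured at the
# wall, so PSU efficiency losses are baked in).
one_card = system_sli_480 - system_single_480
print(f"~{one_card:.0f} W per GTX 480 at the wall")  # ~274 W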
 
Feb 19, 2009
10,457
10
76
NV's definition of TDP is "average" load. ATI lists their GPU TDP as "near max" load.

The GTX480 is consistently drawing more power than the 5970, which has a rated TDP of 300W (which it actually reaches in Furmark and Crysis).

Cayman will continue the cool and quiet trend set by the 5800 GPUs, so no worries in that department.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
(i.e., tessellation engines,

The tessellator on nVidia hardware is simply a smarter approach than the single tessellator in AMD hardware.

GigaThread scheduler,

Don't fall for such marketing gimmicks; it's like stating that the HD 5870 is faster than the HD 4870 because it is based on the second generation of the TeraScale engine.

and specifically NVIDIA GF100 series GPUs implement DirectX 11 four-offset Gather4 in hardware, greatly accelerating shadow mapping, ambient occlusion, and post-processing algorithms).

AMD was first with hardware support for such shadow-mapping acceleration techniques, known at the time as Fetch4, which was introduced in the X1K architecture. That feature wasn't supported on nVidia hardware before Fermi (though you could use PCF way back from the GeForce 6x00 series).

One of the major weaknesses in HD5000 series is DX11 performance (add to the games above Battleforge, Dirt 2, Just Cause 2, Lost Planet). With HD6000 series ATI is probably going to make massive improvements in all of these areas (SSAO, tessellation, DOF, geometry/particle performance, etc.) simply because it's going to be a new architecture catered specifically for DX11.

DX11 is just a superset of DX10.1, so why would Evergreen be weak for DX11? :confused: Only its tessellation performance is weaker than nVidia's solution, and since most games have a balanced workload, it doesn't make a noticeable performance difference between nVidia and AMD solutions. Tessellation performance is just one part of the overall rendering pipeline. AMD has superior texture unit performance and shader math. nVidia's powerful tessellation performance alone isn't enough for the GTX 470 to distance itself greatly from the HD 5870 in real games like Metro 2033 or Dirt 2 (considering the claim that the GTX 470 is almost 8 times faster than the HD 5870 in tessellation tests, not even Unigine shows such gains).

Also, I doubt that PhysX is going to run any faster on ATI cards (not that it matters cuz currently PhysX stinks).

Considering that it's hard to keep the very wide Vec5 shader architecture of current AMD hardware fed, PhysX would either run great thanks to the additional resources the Vec5 design leaves idle, or it would run like crap because it's a bit hard to exploit, isn't parallel enough, and would fight for execution resources. I think it's the former, since PhysX is all about parallelism.
 

maddie

Diamond Member
Jul 18, 2010
5,184
5,580
136
What if AMD's intention is to close all pricing gaps? If there are 3 large shader modules, as speculated by some, can we arrive at this?

For example, and assuming crossfire scaling improves (Cayman XT taken as 100%):

Antilles 1: 150% (2x Cayman)
Antilles 2: 120% (2x Barts)
Cayman: 100% and 85%
Barts: 67% and 50%
Turks: 33% and 25%
Caicos: ???
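A quick sketch of the crossfire scaling efficiency those dual-GPU numbers would imply (the percentages are just the speculative figures above; efficiency here is dual-card performance over twice the single-GPU figure):

Code:
# Implied crossfire scaling for the speculated dual-GPU parts.
# Performance figures are the speculative percentages above (Cayman XT = 100).
lineup = {
    "Antilles 1 (2x Cayman XT)": (150, 100),  # (dual perf, single perf)
    "Antilles 2 (2x Barts XT)": (120, 67),
}
for name, (dual, single) in lineup.items():
    efficiency = dual / (2 * single)  # 1.0 would be perfect scaling
    print(f"{name}: {efficiency:.0%} scaling efficiency")
# Antilles 1 -> 75%, Antilles 2 -> ~90%; i.e. these numbers assume
# crossfire scales noticeably better on the smaller chip.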
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,250
136
Why do the 2's in that screenshot have different fonts?

I think it has to do with the AMD marketing group's agreement with the dudes leaking the information. They told them the following: "We don't care if you leak the information as long as you understate the performance" :)