Can we assume that the 69XX is going to be on par with the 580?


T2k

Golden Member
Feb 24, 2004
1,665
5
81
How do you figure that?

He rarely ever does that... :D

Barts can easily catch Cypress with some faster memory. Improve efficiency some more, increase clocks (900 MHz?), add a bunch more shaders, improve the tessellator, and you could easily reach 2x 6850 performance.
I remember a slide from AMD during the past month or so where Cayman was shown at 2x the performance of my 5870, so I'm pretty relaxed.
 
Last edited:

T2k

Golden Member
Feb 24, 2004
1,665
5
81
i am not under NDA except about the specifics of upcoming AMD Cayman and Antilles cards.

i am putting the interview together today; i then follow up with Mr Ossias to clarify any points; i will have him read it and then i will publish it by Monday.

Basically, the points i gleaned (relevant to this post) are that AMD realized they had a long time with no DX11 competition, and that they are not snoozing, as they *expected* Nvidia to fix Fermi. So they have an "answer" - and it is on their own timetable, just like Nvidia is on theirs.

He made the point that AMD is confident in their own mature products and in the HD 6000 series. He also pointed out that MOST DX11 games were made using AMD HW and that the game devs got early silicon LONG before the HD 5000 series launched. So they are not particularly worried about being "behind" in any area (obviously meaning the "practical uses" of tessellation).

AMD still has 85% of the DX11 market and still has more new GPUs to launch soon. He said their basic strategy has not changed: "We have a well-balanced product," and they target specific price points with "value".

He also outlined their LONG-TERM strategy to raise the graphics bar for everyone - forcing the devs to make better graphics in games.

There is a lot more. i have to save something for my published interview
^_^

When can we expect it? :)
 

Aristotelian

Golden Member
Jan 30, 2010
1,246
11
76
When can we expect it? :)

The article on the 580 has a bit about it at the end (am I the only one who totally appreciates the 'view all' button on alienbabeltech's reviews?):

"And AMD is also bringing out their Cayman-based highest performing single GPU video cards out shortly."

Shortly is vague, of course, but I'd be shocked if more than 3-4 weeks go by and we get nothing.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
When can we expect it? :)

Originally Posted by apoppin
i am not under NDA except about the specifics of upcoming AMD Cayman and Antilles cards.
"shortly" is deliberately vague. According to some, Jesus is returning shortly.
():)

And i am SO far behind, i got to get back to work. i got 8-1/2 hours of sleep last night ... more than my combined total in the last 3 days. It is so annoying to have these silly limits imposed on oneself.
:D ... C-ya!
 
Last edited:

Makaveli

Diamond Member
Feb 8, 2002
4,801
1,265
136
Obviously, but if they have played their cards right, they still have an ample advantage from their superior DX11 execution - which you can witness when the latest-greatest GTX 580 is only ~15% faster than my year-old 5870... :)

If they don't launch before December, then I expect the 6970 to beat the 580 comfortably.

Let's make that a little more factual.

That would be 20-30% faster.
 

Absolution75

Senior member
Dec 3, 2007
983
3
81
Without reading the entire thread -

going to say no, nvidia tends to hold the very top high end (generalization)


this thread should be a poll =p
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
Without reading the entire thread -

going to say no, nvidia tends to hold the very top high end (generalization)


this thread should be a poll =p


Nvidia launched the 480 on March 26th, 2010.
Now, 7 months or so later, they have a new product that's 15% faster.

ATI launched the 5870 on November 18th, 2009.
Now, 12 months or so later, they have a new product that's XX% faster.

Now, if they manage to get about 25% faster than the 5870, they'll be even with the 580, I think. The question is: have they gotten more than 25%? They *might* have.
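A quick sanity check of the compounding in that estimate (a sketch; the ~10% launch-day gap between the GTX 480 and HD 5870 is my assumption, not a figure from the post):

```python
# Percentage gains compound multiplicatively, not additively.
gtx480_over_5870 = 1.10   # ASSUMED: GTX 480 ~10% faster than HD 5870
gtx580_over_480 = 1.15    # the ~15% figure from the post

gtx580_over_5870 = gtx480_over_5870 * gtx580_over_480
print(f"GTX 580 is ~{(gtx580_over_5870 - 1) * 100:.1f}% faster than the 5870")
# → GTX 580 is ~26.5% faster than the 5870
```

So, under that assumption, a Cayman card roughly 25% faster than the 5870 would indeed land in GTX 580 territory.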
 

Vdubchaos

Lifer
Nov 11, 2009
10,408
10
0
Nvidia launched the 480 on March 26th, 2010.
Now, 7 months or so later, they have a new product that's 15% faster.

ATI launched the 5870 on November 18th, 2009.
Now, 12 months or so later, they have a new product that's XX% faster.

Now, if they manage to get about 25% faster than the 5870, they'll be even with the 580, I think. The question is: have they gotten more than 25%? They *might* have.

hehe

even if it's 50% more... games will only look/run 1-5% better, IF that.

Just because a card is XX% better doesn't really mean a thing. Once you get over the "playable" level in ANY game (at the same settings), extra performance is worthless.
 

Daedalus685

Golden Member
Nov 12, 2009
1,386
1
0
hehe

even if it's 50% more... games will only look/run 1-5% better, IF that.

Just because a card is XX% better doesn't really mean a thing. Once you get over the "playable" level in ANY game (at the same settings), extra performance is worthless.

That is a fine and dandy argument for why most folks should not get a high-end card.

However, if a card is 20% faster than another, useful or not, that will give it a MASSIVE sales bonus provided the price is in line.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Overall and HD are all i really look at. We have 22% and 25%. So there you go, for those who aren't accountants :D

1) You are averaging 3 low resolutions from TPU (1024x768, 1280x1024, 1680x1050) from all their games that hardly anyone with an HD5870/GTX580 will use/care about.

2) You are also adding into the overall average of modern games ancient games like Call of Juarez, Dawn of War, Unreal Tournament 3 as opposed to including modern games like Just Cause 2, Lost Planet 2 and STALKER: CoP.

3) Most importantly, you are confusing "Card X is faster than Card Y by" with "Card X is slower than Card Y by". HD5870 is 25% slower on average than a GTX580 but a GTX580 is at least 33% faster (100% / 75% = 1.33).

^_^
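The point in 3) trips people up often enough that a one-liner helps (a minimal sketch of the arithmetic, nothing card-specific):

```python
def faster_by(slower_pct: float) -> float:
    """Convert 'X% slower' into the equivalent 'Y% faster' figure."""
    return (100.0 / (100.0 - slower_pct) - 1.0) * 100.0

# If the HD 5870 is 25% slower than the GTX 580 (scores 75 vs 100),
# then the GTX 580 is 100/75 - 1 = 1/3, i.e. ~33% faster, not 25%.
print(round(faster_by(25.0), 1))  # → 33.3
```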
 

Daedalus685

Golden Member
Nov 12, 2009
1,386
1
0
1) You are averaging 3 low resolutions from TPU (1024x768, 1280x1024, 1680x1050) from all their games that hardly anyone with an HD5870/GTX580 will use/care about.

2) You are also adding into the overall average of modern games ancient games like Call of Juarez, Dawn of War, Unreal Tournament 3 as opposed to including modern games like Just Cause 2, Lost Planet 2 and STALKER: CoP.

3) Most importantly, you are confusing "Card X is faster than Card Y by" with "Card X is slower than Card Y by". HD5870 is 25% slower on average than a GTX580 but a GTX580 is at least 33% faster (100% / 75% = 1.33).

^_^

Math for the win..

As for the older games, while the difference between 300 and 280 fps might well be pointless, it is nonetheless a demonstration of a faster card. Are we at a point yet where the older games are so CPU limited that the differences are muted entirely?

For every Hawx 2 there seems to be an F1 2010... I think the days of a "clear winner" when cards are so close to each other are likely behind us, for now anyway.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
As for the older games, while the difference between 300 and 280 fps might well be pointless, it is nonetheless a demonstration of a faster card. Are we at a point yet where the older games are so CPU limited that the differences are muted entirely?

Why can't they test older games with realistic settings like SSAA and TrSS then, like Alienbabeltech does? Surely I am going to apply the highest possible visual settings in 4-year-old games (outside of Crysis and the Arma series). Since a GTX260/4870 can play all older games at 4AA, what's the point of testing them? Why not include Medal of Honor 1 and Call of Duty 2 and Warcraft 3 while we are at it? :) AMD has already sacrificed texture filtering quality with the HD68xx series out of the box in older games like Oblivion, The Witcher, and HL2. This in itself sends a clear message that modern games are the priority for modern graphics cards.

I have no problem with older game testing if TPU would split the averages like Computerbase.de does with DX9, DX10, DX11 games, or comment on a game-by-game basis like Alienbabeltech or HardOCP do.

______________________________________________

As far as HD6970 goes, since it will most likely be a VLIW4 design, it's too hard to predict how much of a performance boost this will bring in games. Also, HD6970's tessellation engine is supposed to be Gen 9 (again, we don't have any information on its performance).

As Will Robinson has mentioned, pricing is also important.

Also recall HD6870 vs. HD5850: the first has 1120 SP x 900 MHz vs. 1440 SP x 725 MHz. The texture fill-rate and shader performance between the two cards is about the same. AMD realized that ROP performance was the bigger bottleneck in the design. We don't know for certain if HD6970 will stick with 32 ROPs. I think HD6970 has a strong shot at beating the GTX580 in 8AA scenarios and at 2560x1600 due to better 8AA efficiency and superior texture fill-rate. However, GTX580 will still have the edge in tessellation.
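The HD6870-vs-HD5850 comparison above checks out on paper. Here is the back-of-the-envelope shader math, using the SP counts and clocks quoted in the post and the usual 2 FLOPs per SP per clock (fused multiply-add) for these VLIW5 parts:

```python
def gflops(sps: int, mhz: int) -> float:
    """Theoretical shader throughput: SPs * 2 FLOPs (MAD) * clock in GHz."""
    return sps * 2 * mhz / 1000.0

hd6870 = gflops(1120, 900)  # → 2016.0
hd5850 = gflops(1440, 725)  # → 2088.0
print(hd6870, hd5850)       # within about 3.5% of each other
```

Near-identical theoretical throughput despite 320 fewer SPs: that is the clocks-vs-width trade the post describes.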
 
Last edited:

cusideabelincoln

Diamond Member
Aug 3, 2008
3,274
41
91
2) You are also adding into the overall average of modern games ancient games like Call of Juarez, Dawn of War, Unreal Tournament 3 as opposed to including modern games like Just Cause 2, Lost Planet 2 and STALKER: CoP.

Call of Juarez 2
Dawn of War 2
UT3

As long as the game isn't CPU limited, it can be used as a valid measure of the performance advantage (or disadvantage) one card has over another. The three games you mentioned are not CPU limited (at higher resolutions) in TPU's review. In fact, when comparing the 580 scores to the 480 scores in these three games, the performance advantage of the 580 falls right in line with its average across the other games.

So do not dismiss these results simply because they are old. They don't skew the average (because they fall right in the middle). The data they bring serve the curious: those looking for performance in these games can now know what to expect. Perhaps the gamer likes to go back and play these "older" games every once in a while. Also, since TPU tests so many games, a few misrepresentative data points have a smaller effect on the average.

Just because card is XX% better, doesn't really mean a thing. Once you get over "playable" level on ANY game (at same settings) extra performance is worthless
That depends on your definition of playable; everybody has a different one. Also, the availability of higher AA levels, higher resolutions (multi-screen gaming), and 3D gaming does require even faster video cards, and most sites do not test cards under that kind of stress to see if you would get your "playable" level.
 

JM Popaleetus

Senior member
Oct 1, 2010
375
47
91
heatware.com
This thread is making me double-think myself.



Unless something better for $150-$180, or $350 altogether, is going to come out... is there any reason why I shouldn't jump on this? Especially with how well these cards OC (which also completely rules out 6870s in CF, IMHO).

I would think that setup could also get me through the next 4-5 years or so (I'm not a graphics whore; I don't mind running on low if I have to).

Decisions...decisions...
 

Teizo

Golden Member
Oct 28, 2010
1,271
31
91
I would think that setup could also get me through the next 4-5 years or so (I'm not a graphics whore; I don't mind running on low if I have to).

Decisions...decisions...
Today's high settings, which look great, are going to be tomorrow's low settings, and still look great :D
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
If you don't mind the obvious downsides of multi-GPU setups (low minimum framerates, waiting for driver updates for maximum scaling performance, and less-than-optimal scaling in less popular games that AMD/NV don't specifically focus on), then HD6850 CF overclocked > GTX580 from a performance perspective. It is very difficult to think of a better setup than HD6850s in CF overclocked.
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
@RussianSensation

the 6850 crossfire has better minimum frame rates than the 580 in almost everything it was benchmarked on (game-wise) at a resolution of 1920x. (It was only at 2560x1200 that the 580 was beating it in minimum fps, and that was because it had more memory and the 1GB cards were memory bottlenecked.)
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
I am no insider, nor do I have any inside sources, but I think people are over-optimistic about the 6970. The Nvidia 580 can achieve a 20% increase plus lower power consumption over the 480 because a) it has one more SM, and b) tweaks. Although the process is the same 40nm, it actually has 6.67% more working parts in comparison. In other words, the 580 is a fully working chip where the 480 isn't. Now look at the 5870: it is a fully working chip to begin with, so the 6970 will have a harder time bumping up performance in comparison.

So the 6850 has 960 SPs and the 6870 has 1120 SPs; my guess is the 6970 will have 1200-1300 SPs. Let's say it scales really well, plus tweaks: it should be no more than 20% over the 6870, which would be roughly at 480 level. Assuming that it is power efficient, then it is a card with 480 performance and 5870 power consumption.

Trying to clock it higher means giving up power consumption; if they can't clock it up to pair with the 580, then they actually have no selling point. Also, Richard was upset about high-level tessellation, indicating that the 6970 isn't going to beat the 480 in heavily tessellated scenarios. Having said all that, it can be priced at around 300-350 as a powerful card with a better performance/watt ratio.

Look at the 580 for a second. Nvidia is brave enough to put a limit on OC in benchmarks. It appears that they are pretty sure about its performance over its competitors. The 480 isn't the greatest build and it is very hot, yet Nvidia didn't put things like that on it, so why now? What is more dangerous: having users kill their cards during OC, or having their competitor win and crazy fanboy nerds unable to do anything about it?

Now, I don't know how many SPs there are, so I could be very off. (OMFG, 2000 SPs!!)
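For what the SP guess is worth, the raw ratios look like this (a sketch; perfect scaling with SP count is an upper bound that real games rarely hit):

```python
sp_6870 = 1120  # Barts XT
for sp_guess in (1200, 1300):  # the post's guessed range for Cayman
    extra = (sp_guess / sp_6870 - 1) * 100
    print(f"{sp_guess} SPs: {extra:.1f}% more than the 6870")
# 1200 SPs: 7.1% more; 1300 SPs: 16.1% more
```

Even the high end of that range stays under the 20% ceiling the post assumes, before any clock or architectural changes.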
 

ShadowOfMyself

Diamond Member
Jun 22, 2006
4,227
2
0
So the 6850 has 960 SPs and the 6870 has 1120 SPs; my guess is the 6970 will have 1200-1300 SPs. Let's say it scales really well, plus tweaks: it should be no more than 20% over the 6870, which would be roughly at 480 level. Assuming that it is power efficient, then it is a card with 480 performance and 5870 power consumption.

20%, seriously? When was the last time a high-end card was only 20% faster than the mid-range one?

Take a look at how the 5870 compares to the 5770:
http://www.anandtech.com/bench/Product/162?vs=172

In most games it's twice as fast... Now obviously I don't think that's gonna happen here, but the point stands.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,979
589
126
Take a look at how the 6870 compares to the 5870

They are all being built on 40nm.
Two things: the 6870 is a mid-range card which does not replace the 5870. And the 6970 is a different GPU; it's not just a pumped-up 5870.
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
I am no insider, nor do I have any inside sources, but I think people are over-optimistic about the 6970. The Nvidia 580 can achieve a 20% increase plus lower power consumption over the 480 because a) it has one more SM, and b) tweaks. Although the process is the same 40nm, it actually has 6.67% more working parts in comparison. In other words, the 580 is a fully working chip where the 480 isn't. Now look at the 5870: it is a fully working chip to begin with, so the 6970 will have a harder time bumping up performance in comparison.

So the 6850 has 960 SPs and the 6870 has 1120 SPs; my guess is the 6970 will have 1200-1300 SPs. Let's say it scales really well, plus tweaks: it should be no more than 20% over the 6870, which would be roughly at 480 level. Assuming that it is power efficient, then it is a card with 480 performance and 5870 power consumption.

Trying to clock it higher means giving up power consumption; if they can't clock it up to pair with the 580, then they actually have no selling point. Also, Richard was upset about high-level tessellation, indicating that the 6970 isn't going to beat the 480 in heavily tessellated scenarios. Having said all that, it can be priced at around 300-350 as a powerful card with a better performance/watt ratio.

Look at the 580 for a second. Nvidia is brave enough to put a limit on OC in benchmarks. It appears that they are pretty sure about its performance over its competitors. The 480 isn't the greatest build and it is very hot, yet Nvidia didn't put things like that on it, so why now? What is more dangerous: having users kill their cards during OC, or having their competitor win and crazy fanboy nerds unable to do anything about it?

Now, I don't know how many SPs there are, so I could be very off. (OMFG, 2000 SPs!!)

You have to remember that Barts and Cayman are NOT the same arch.