R520 (ATI next gen) delayed

nRollo

Banned
Jan 11, 2002
Originally posted by: ronnn
Originally posted by: Rollo
Apoppin':

and Rollo, you are gonna be in nVidia "heaven" for another 6 months
(my prediction)

What if I get G70 SLI? ;)

What? In your earlier reply you suggested you already had next gen. So I assumed that you had knowledge about the g70 being a simple speed bump. So under NDA or just not sharing?

By "having next gen now" I meant if it's true the R520 is slower than 6800U SLI, it's likely I have "R520 level performance" now with 6800GT SLI. (as it's a little slower than 680U SLI)

From your lips to nV's ears though- if they need any "gaming devotees" to beta test G70s- I'd be all about that! :)

 

nRollo

Banned
Jan 11, 2002
Originally posted by: trinibwoy
Originally posted by: Rollo
Pete, I'm only talking about nV's AFR. If you have no duplication of geometry calculations, and are only rendering every other screen, you have no duplication of "work" done by the cards, and half the work is being done by each card.

To me this is "efficiency"- no duplication of effort, reduced workload per card.

I believe most of the popular titles are using AFR, with Far Cry a notable exception.

This actually raises a good point. With a sufficiently fast CPU, AFR on ATI's solution is going to be slower with two cards than with one, if the slower card (e.g. built-in mobo gpu) is significantly slower. If the CPU is able to provide two frames of data faster than the slower card can process a single frame then the result is going to be slower than the faster card on its own.

On the other hand, if ATI does not plan to support AFR at all then I'm guessing it's going to be a big blow against them since some games and benchmarks are quite AFR friendly.

To me this confirms what I've thought all along:
Using your X800XT/X850XT sounds good, but I don't think many people are going to buy one R520 for $600, an AMR motherboard and PSU, and then say, "Well, I'll save a few bucks by keeping my last gen card, because I like the idea of spending all this cash to have a lot of the performance of the R520 go to waste."
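
To put hypothetical numbers on trinibwoy's scenario above, here's a minimal Python sketch of steady-state AFR throughput (all timings are made up for illustration, not measurements): each card renders every other frame, so a two-frame period can never be shorter than the slower card's frame time.

# Steady-state AFR throughput model (hypothetical timings, in milliseconds).
# Over a two-frame period, the period is bounded by each card's frame time
# and by two CPU frame-setup times.

def afr_fps(t_fast, t_slow, t_cpu):
    """FPS for a two-card AFR pair: time per 2 frames = max(card times, 2 * CPU time)."""
    return 2000.0 / max(t_fast, t_slow, 2 * t_cpu)

def single_fps(t_card, t_cpu):
    """FPS for one card: frame time = max(render time, CPU setup time)."""
    return 1000.0 / max(t_card, t_cpu)

# Fast card: 20 ms/frame. Slow card (e.g. integrated): 50 ms/frame.
# CPU setup: 5 ms/frame, so it feeds two frames long before the slow
# card finishes one -- exactly the situation described above.
print(single_fps(20, 5))   # fast card alone: 50.0 fps
print(afr_fps(20, 50, 5))  # mismatched AFR pair: 40.0 fps -- slower!

Once the slow card takes more than twice the fast card's frame time, the AFR pair loses to the fast card on its own.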

 

Munky

Diamond Member
Feb 5, 2005
Originally posted by: Rollo
Originally posted by: trinibwoy
Originally posted by: Rollo
Pete, I'm only talking about nV's AFR. If you have no duplication of geometry calculations, and are only rendering every other screen, you have no duplication of "work" done by the cards, and half the work is being done by each card.

To me this is "efficiency"- no duplication of effort, reduced workload per card.

I believe most of the popular titles are using AFR, with Far Cry a notable exception.

This actually raises a good point. With a sufficiently fast CPU, AFR on ATI's solution is going to be slower with two cards than with one, if the slower card (e.g. built-in mobo gpu) is significantly slower. If the CPU is able to provide two frames of data faster than the slower card can process a single frame then the result is going to be slower than the faster card on its own.

On the other hand, if ATI does not plan to support AFR at all then I'm guessing it's going to be a big blow against them since some games and benchmarks are quite AFR friendly.

To me this confirms what I've thought all along:
Using your X800XT/X850XT sounds good, but I don't think many people are going to buy one R520 for $600, an AMR motherboard and PSU, and then say, "Well, I'll save a few bucks by keeping my last gen card, because I like the idea of spending all this cash to have a lot of the performance of the R520 go to waste."

So far, I've seen nothing but speculation; how does that confirm anything? And who said AMR will use the AFR method at all? It does seem that having 2 identical cards is necessary to do AFR effectively, but I've heard nothing about AFR being used at all by Ati. In fact, I don't think either SLI or AMR is a good solution, because a good solution would be able to assign different tasks to each card, not just render different parts of the same screen. For example, the ideal solution would be able to use the newer card to display all the output, and then use the older card to help out with things like geometry transformations, shaders, and additional memory. AFAIK, neither SLI nor AMR is capable of doing this.

Nvidia chose the simplest path, and requires 2 identical cards. That's good for reducing possible problems, but Ati was more adventurous and theoretically will allow different cards to work together, which would give greater flexibility and value than SLI if it works out as planned. Anyway, nobody can make an accurate statement about it until we see it in action.
 

BenSkywalker

Diamond Member
Oct 9, 1999
Rollo, a 50/50 split isn't any more efficient than a 70/30 split in terms of extracting the most performance from each card. (Edit: whether supertiling is less efficient than SFR we have yet to see. But the option to pair an older card with a new one is interesting.)

Looking at the most demanding situations now, SM3 w/HDR- it seems to me pairing a next gen ATi part with a current gen would be an extremely inefficient choice.
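
A minimal sketch of the 50/50-versus-70/30 point above (hypothetical full-frame render times, not benchmarks): in a screen-split scheme, the frame time is whichever card finishes its share last, so efficiency comes from matching the split to the cards' speeds, not from any particular ratio.

# SFR-style screen split: card A gets a fraction f of the screen,
# card B gets the rest. t_a and t_b are each card's hypothetical
# time (ms) to render the whole frame alone.

def sfr_frame_time(f, t_a, t_b):
    """Frame time is set by whichever card finishes its share last."""
    return max(f * t_a, (1 - f) * t_b)

print(sfr_frame_time(0.5, 20.0, 20.0))  # identical cards: 10.0 ms, 50/50 is optimal
print(sfr_frame_time(0.5, 20.0, 46.7))  # mismatched pair: ~23.4 ms, fast card idles
print(sfr_frame_time(0.7, 20.0, 46.7))  # 70/30 split: ~14.0 ms, both cards stay busy

Note that the properly balanced mismatched pair (~14 ms) still beats the fast card alone (20 ms), which is the upside of pairing unequal cards; the AFR sketch earlier in the thread shows the downside.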
 

nRollo

Banned
Jan 11, 2002
Originally posted by: munky
So far, I've seen nothing but speculation; how does that confirm anything? And who said AMR will use the AFR method at all?
If they don't, their solution will likely be slower than nVidia's due to the duplication of geometry processing. "Slower" isn't the word you want to hear when you drop the coin for 2 vid cards, a multi-PCIE graphics mobo, and a beefed-up PSU.

It does seem that having 2 identical cards is necessary to do AFR effectively, but I've heard nothing about AFR being used at all by Ati. In fact, I don't think either SLI or AMR is a good solution, because a good solution would be able to assign different tasks to each card, not just render different parts of the same screen. For example, the ideal solution would be able to use the newer card to display all the output, and then use the older card to help out with things like geometry transformations, shaders, and additional memory. AFAIK, neither SLI nor AMR is capable of doing this.
Why/how would this be better than having two matched latest gen cards rendering every other screen? I don't think you thought this through - I don't think there's any way a next gen card plus a current gen card could beat two next gen cards doing half the processing each.


Nvidia chose the simplest path, and requires 2 identical cards. That's good for reducing possible problems, but Ati was more adventurous and theoretically will allow different cards to work together, which would give greater flexibility and value than SLI if it works out as planned. Anyway, nobody can make an accurate statement about it until we see it in action.
I don't see the "value" of slowing down a new card with an old card personally, but for the man who's spending $500+ on an R520, $200 on a new AMR board, and maybe a new PSU, and wants to save a few hundred (and lose performance) by keeping his X850 instead of selling it and buying a second R520, I guess you have a point.
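
A back-of-the-envelope sketch of the geometry-duplication point above (hypothetical triangle counts): in a screen-split scheme each GPU generally has to transform all of the geometry, since a triangle's screen position isn't known until after it's transformed, whereas AFR divides whole frames and the geometry work along with them.

# Geometry work per GPU over 100 frames (hypothetical scene size).
triangles_per_frame = 1_000_000
frames = 100

# AFR: each GPU transforms geometry only for its own (alternating) frames.
afr_tris_per_gpu = triangles_per_frame * frames // 2   # 50,000,000

# Screen-split: both GPUs transform every frame's full geometry.
split_tris_per_gpu = triangles_per_frame * frames      # 100,000,000

print(afr_tris_per_gpu, split_tris_per_gpu)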
 

Munky

Diamond Member
Feb 5, 2005
Originally posted by: Rollo
Originally posted by: munky
So far, I've seen nothing but speculation; how does that confirm anything? And who said AMR will use the AFR method at all?
If they don't, their solution will likely be slower than nVidia's due to the duplication of geometry processing. "Slower" isn't the word you want to hear when you drop the coin for 2 vid cards, a multi-PCIE graphics mobo, and a beefed-up PSU.

It does seem that having 2 identical cards is necessary to do AFR effectively, but I've heard nothing about AFR being used at all by Ati. In fact, I don't think either SLI or AMR is a good solution, because a good solution would be able to assign different tasks to each card, not just render different parts of the same screen. For example, the ideal solution would be able to use the newer card to display all the output, and then use the older card to help out with things like geometry transformations, shaders, and additional memory. AFAIK, neither SLI nor AMR is capable of doing this.
Why/how would this be better than having two matched latest gen cards rendering every other screen? I don't think you thought this through - I don't think there's any way a next gen card plus a current gen card could beat two next gen cards doing half the processing each.


Nvidia chose the simplest path, and requires 2 identical cards. That's good for reducing possible problems, but Ati was more adventurous and theoretically will allow different cards to work together, which would give greater flexibility and value than SLI if it works out as planned. Anyway, nobody can make an accurate statement about it until we see it in action.
I don't see the "value" of slowing down a new card with an old card personally, but for the man who's spending $500+ on an R520, $200 on a new AMR board, and maybe a new PSU, and wants to save a few hundred (and lose performance) by keeping his X850 instead of selling it and buying a second R520, I guess you have a point.

Well, of course 2 next gen cards are better than 1 next gen and 1 current gen. It would be nice though if they could make the 1 old + 1 new card setup faster than just 1 new card by itself. If it wasn't possible they probably would not waste time and money researching the technology. Or, they might just announce at the last minute that you really need 2 identical cards, and save themselves some potential problems.
 

ronnn

Diamond Member
May 22, 2003
Originally posted by: Rollo
Originally posted by: ronnn
Originally posted by: Rollo
Apoppin':

and Rollo, you are gonna be in nVidia "heaven" for another 6 months
(my prediction)

What if I get G70 SLI? ;)

What? In your earlier reply you suggested you already had next gen. So I assumed that you had knowledge about the g70 being a simple speed bump. So under NDA or just not sharing?

By "having next gen now" I meant if it's true the R520 is slower than 6800U SLI, it's likely I have "R520 level performance" now with 6800GT SLI. (as it's a little slower than 680U SLI)

From your lips to nV's ears though- if they need any "gaming devotees" to beta test G70s- I'd be all about that! :)
Anyways, by definition the only way you can have next gen performance is by having next gen. Unless of course 6800GT SLI is as fast, with as many features, as ATI's and Nvidia's next offerings. But why would Nvidia release something slower than their budget-busting, super-expensive high end? Anyways, I am on record as saying that the G70 will be better than the R520, and I am 100% confident that you will prefer the G70. That said, I think both cards will be a significant improvement over current offerings. :beer:
 

nRollo

Banned
Jan 11, 2002
You know something I don't about G70s and R520s, apparently, Ronnn.
I'll have to wait till reviewers publish (or I purchase) to form my opinion.
 

imported_Noob

Senior member
Dec 4, 2004
Originally posted by: ronnn
Originally posted by: Rollo
Originally posted by: ronnn
Originally posted by: Rollo
Apoppin':

and Rollo, you are gonna be in nVidia "heaven" for another 6 months
(my prediction)

What if I get G70 SLI? ;)

What? In your earlier reply you suggested you already had next gen. So I assumed that you had knowledge about the g70 being a simple speed bump. So under NDA or just not sharing?

By "having next gen now" I meant if it's true the R520 is slower than 6800U SLI, it's likely I have "R520 level performance" now with 6800GT SLI. (as it's a little slower than 680U SLI)

From your lips to nV's ears though- if they need any "gaming devotees" to beta test G70s- I'd be all about that! :)
Anyways, I am on record as saying that the G70 will be better than the R520, and I am 100% confident that you will prefer the G70.

Why so? Because of naming it G70 instead of N70, and/or because it's being kept secret? I'm just curious.
 

ronnn

Diamond Member
May 22, 2003
Actually I know only what I read, as I have no friends with either company. I did assume from your comment about 6800GT SLI being next gen that you did know something from your sources at Nvidia - at least that a single G70 will not be an improvement over 6800GT SLI. My feeling from Beyond3D was that the G70 will be somewhat better than the R520, but of course we shall see.
It's been posted there that the Far Cry 64-bit patch has SM 3.0 support for new ATI hardware - so at least Ubisoft has one. :beer:
 

Munky

Diamond Member
Feb 5, 2005
Originally posted by: Noob
Originally posted by: ronnn
Originally posted by: Rollo
Originally posted by: ronnn
Originally posted by: Rollo
Apoppin':

and Rollo, you are gonna be in nVidia "heaven" for another 6 months
(my prediction)

What if I get G70 SLI? ;)

What? In your earlier reply you suggested you already had next gen. So I assumed that you had knowledge about the g70 being a simple speed bump. So under NDA or just not sharing?

By "having next gen now" I meant if it's true the R520 is slower than 6800U SLI, it's likely I have "R520 level performance" now with 6800GT SLI. (as it's a little slower than 680U SLI)

From your lips to nV's ears though- if they need any "gaming devotees" to beta test G70s- I'd be all about that! :)
Anyways, I am on record as saying that the G70 will be better than the R520, and I am 100% confident that you will prefer the G70.

Why so? Because of naming it G70 instead of N70, and/or because it's being kept secret? I'm just curious.

No, because nvidia is 6 letters, and ati is 3. 6 > 3.
:p
 

Ackmed

Diamond Member
Oct 1, 2003
Originally posted by: Rollo
You know something I don't about G70s and R520s, apparently, Ronnn.
I'll have to wait till reviewers publish (or I purchase) to form my opinion.


If you really cared about facts, you wouldn't believe this rumor. There is no factual info to back it up.
 

trinibwoy

Senior member
Apr 29, 2005
Originally posted by: munky
For example, the ideal solution would be able to use the newer card to display all the output, and then use the older card to help out with things like geometry transformations, shaders, and additional memory.

Whoa! :Q There are so many things wrong with this.

Geometry transformations and shaders are not things - they're almost everything.

The only use for a slower card is to produce some portion of the final framebuffer output. Delivering the results of specific intermediate steps (like geometry transforms) is highly unlikely, since the master GPU would need to be reconfigured at a hardware level to accept such intermediate output. Not to mention it's highly inefficient compared to just having the faster GPU do it itself.

I'm not sure what you mean by additional memory. Both GPUs will need access to the same geometry and texture data, so there is no 'additional memory' in a dual-GPU setup.

It's all about parallelism. And the most obvious parallel task is to have each GPU work on either a different frame or a different part of the frame / render target, whether it be tiled or split. Anything else is bound to be infinitely more complex and a lot less efficient.
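
For illustration only (this is not how either vendor's driver actually dispatches work), the two obvious decompositions described above can be sketched as ownership functions: whole frames per GPU (AFR) versus checkerboard tiles of one frame (supertiling-style).

# Two obvious ways to parallelize rendering across GPUs (illustrative).

def afr_owner(frame_index, num_gpus=2):
    """AFR: whole frames round-robin across GPUs."""
    return frame_index % num_gpus

def tile_owner(tile_x, tile_y, num_gpus=2):
    """Supertiling-style checkerboard: adjacent tiles of one frame go to
    different GPUs, which roughly load-balances without a heuristic split."""
    return (tile_x + tile_y) % num_gpus

print([afr_owner(i) for i in range(6)])                    # [0, 1, 0, 1, 0, 1]
print([[tile_owner(x, y) for x in range(4)] for y in range(2)])
# [[0, 1, 0, 1], [1, 0, 1, 0]]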
 

Pete

Diamond Member
Oct 10, 1999
Originally posted by: BenSkywalker
But the option to pair an older card with a new one is interesting.
Looking at the most demanding situations now, SM3 w/HDR- it seems to me pairing a next gen ATi part with a current gen would be an extremely inefficient choice.
I don't expect ATI to allow you to SLI across generations, just across cards in the same generation.
 

Pete

Diamond Member
Oct 10, 1999
Hmm, it appears they're at least in (some) developers' hands, judging from the Far Cry 64-bit patch change log:

Patch 1.32 change log

• Fixed a Shader Model 3.0 issue that caused graphics corruption on new ATI hardware

I was expecting a July/August launch, too, in keeping with the 18 month cycle (9700P was released 36 months ago this July/August), but I was hoping for an earlier launch simply b/c ATI has been riding SM2 since then. Maybe we'll see some new games (read: HL2's Lost Coast) demoed on R520s at E3 (a la Doom 3 on R300 oh so long ago), but with the launch in late summer. Bummer.
 

Dman877

Platinum Member
Jan 15, 2004
Originally posted by: Gamingphreek
Originally posted by: Dman877
Who the fvck cares? There isn't a game out right now that either ATI or Nvidia can't handle with ease...

SLI is only good for benchmarks at this point in time anyway...

Umm, try playing SC: Chaos Theory or Far Cry or Half-Life 2 or DIII with all settings and resolution maxed - and that's not even counting 2048x1536. The only way to handle those games at that quality is SLI at the moment.

-Kevin

ROFL, I don't know how long you've been playing games on a computer, but it wasn't too long ago that 1024x768 with medium details was the best a state-of-the-art video card could push (i.e., Quake 3 with the original GeForce). And now you're complaining about not being able to play at 2048x1536 with 8X AA and 16X AF... do yourself a favor: play the game and have fun, and don't worry about that edge a mile away that's slightly jaggy.
 

Drayvn

Golden Member
Jun 23, 2004
Originally posted by: Pete
Hmm, it appears they're at least in (some) developers' hands, judging from the Far Cry 64-bit patch change log:

Patch 1.32 change log

• Fixed a Shader Model 3.0 issue that caused graphics corruption on new ATI hardware

I was expecting a July/August launch, too, in keeping with the 18 month cycle (9700P was released 36 months ago this July/August), but I was hoping for an earlier launch simply b/c ATI has been riding SM2 since then. Maybe we'll see some new games (read: HL2's Lost Coast) demoed on R520s at E3 (a la Doom 3 on R300 oh so long ago), but with the launch in late summer. Bummer.


Or it could mean that the SM3 path for nVidia cards is causing corruption on X850 cards...
 

Lonyo

Lifer
Aug 10, 2002
Originally posted by: Ackmed
Originally posted by: Rollo
You know something I don't about G70s and R520s, apparently, Ronnn.
I'll have to wait till reviewers publish (or I purchase) to form my opinion.


If you really cared about facts, you wouldn't believe this rumor. There is no factual info to back it up.

Plus there is a large amount of irony here from Rollo.
He seems to know things about ATi's new cards, like what will happen in SLI before the rest of the world does.
Maybe he should wait until reviews before trying to push his thoughts out as facts.
 

Pete

Diamond Member
Oct 10, 1999
Originally posted by: Drayvn
Or it could mean that the SM3 path for nVidia cards is causing corruption on X850 cards...
I'm not sure how that would be possible, as the X8x0s use the separate SM2b path (meaning, different shaders). And the X850 is the same hardware as the X800, so it's not exactly new--if there were a bug, it would affect all X8x0 cards, and Crytek probably wouldn't have singled out SM3 and "new hardware."
 

nRollo

Banned
Jan 11, 2002
Originally posted by: MegaWorks
I'm going to buy ATI next Gen card just to piss Rollo off. :D

Errr, if history is any indication, I'll buy ATI next gen too?


Will that "piss me off"?

Remember, of the two of us, I actually bought and used an X800XT PE - the best one at that (Asus). Not to mention every other "ATI's best" for the last ten years.
 

gsellis

Diamond Member
Dec 4, 2003
Originally posted by: Rollo
Originally posted by: MegaWorks
I'm going to buy ATI next Gen card just to piss Rollo off. :D

Errr, if history is any indication, I'll buy ATI next gen too?


Will that "piss me off"?

Remember, of the two of us, I actually bought and used an X800XT PE - the best one at that (Asus). Not to mention every other "ATI's best" for the last ten years.

Can I be first in line to buy your R520 "used" when the 7800U SLI2's come out? :D

Ackmed - I use the Inquirer all the time for news. They are quick to print a retraction if they get it wrong. More importantly, they refuse to sign NDAs, so they can leak information about next gen stuff or plans without worrying about what they signed. Hey, with people giving away their passwords for a Starbucks coupon, imagine what kind of news gathering you can do with a handful of those. Remember, the "art" of reading the news is learning how to determine what is bull and what might be correct. News is not science and cannot always be a true/false answer.
 
SynthDude2001

Mar 19, 2003
Originally posted by: Sithtiger
It looks like nVidia will unveil the G70 at the Computex Taipei 2005 show from May 31 to June 4. The makers did say the G70 may begin volume shipments in the latter half of the third quarter at the earliest and will retail for US$549.
http://www.digitimes.com/mobos/a20050510A6026.html

$549....that's pretty much what I expected, I guess. Seems to be the going price for a high-end video card these days anyway....I'm not so sure I want to spend 1/4 the money on a video card that I did on my car though....:p
 

Munky

Diamond Member
Feb 5, 2005
Originally posted by: SynthDude2001
Originally posted by: Sithtiger
It looks like nVidia will unveil the G70 at the Computex Taipei 2005 show from May 31 to June 4. The makers did say the G70 may begin volume shipments in the latter half of the third quarter at the earliest and will retail for US$549.
http://www.digitimes.com/mobos/a20050510A6026.html

$549....that's pretty much what I expected, I guess. Seems to be the going price for a high-end video card these days anyway....I'm not so sure I want to spend 1/4 the money on a video card that I did on my car though....:p

I'd be really surprised if the G70 or R520 sold for $549 when they're launched. What with all the price gouging and limited supply, I won't believe it until I see it.