Any predictions on R520 vs G70?


Concillian

Diamond Member
May 26, 2004
3,751
8
81
I don't know, but I think we can tell this is going to be no GF4--> GF FX or R9700 --> R9800 type of generational difference. This is going to be a big leap. I just hope it trickles down to cards we can afford ($150-300).

My guess is that the G70 will edge out the R520, if for no other reason than there is more brand new tech. with the R520 and the G70 is more of an evolutionary product from what nVidia is making now, while the ATi product will be a larger change. It's impossible to tell though, as the 9500/9700 series was a pretty large departure from the 8500 type of cards.
 

Sentential

Senior member
Feb 28, 2005
677
0
0
I am going to make a reasonable prediction based on facts, not crap. So let's take it from the top:

Both will use the fastest available GDDR3 from Samsung. That is currently 700MHz unbuffered, or 1400MHz effective. This is confirmed by both the Xbox 360 and PS3 specs.

Second, it will not be 512-bit. Samsung does not produce such memory, nor does it have the capacity to do so. Now onto the GPUs themselves.

The Xbox 360 specs list the R500 core with 48 pipes. This can be read a number of ways. ATi has talked about releasing a video card with a unified pipeline for quite some time. It IS possible that this might be a 48 x 1 piped card. However, it is just as likely that it is 32 x 1 / 16 x 1 (PS/VS).

We also know it is a 90nm core. However, from the Xbox 360 specs we have learned that its GPU speed there is 500MHz. Since X800s are known for very high clock speeds, and much lower ones with a lot of pipes enabled (aka the X800 Pro VIVO), it can be assumed that the R500 will have at least 32 pipelines.

Now onto the G70. We know that the G70 does not use 90nm. It uses either 100nm or 110nm. By that token, and given the rumors that "it runs very cool with a single-slot cooling device," it cannot have 32 pipelines.

This is doubly supported by its higher GPU clock of 550MHz, which the PS3 lists. In all likelihood the G70 is either 24 x 1 or 16 x 2 for its pixel shader pipes.

So from this information we have a clear picture of what they will look like (give or take):

ATi R500
GPU clock: 500~650MHz
Pixel pipes: 32 (or 48 unified)
Vertex pipes: 16 (or 48 unified)
Die process: 90nm
RAM: Samsung GC16 / GC14 (1200MHz/1400MHz)

nVidia G70
GPU clock: 550~700MHz
Pixel pipes: 24 (or 16x2)
Vertex pipes: 12 (or 16)
Die process: 110nm / 100nm
RAM: Samsung GC16 / GC14 (1200MHz/1400MHz)
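For a rough sanity check, here is the pixel fillrate math those guesses imply (a quick sketch; every pipe count and clock above is rumor, not a confirmed spec):

```python
# Theoretical pixel fillrate: pipes * core clock.
# All inputs below are the speculated figures from the tables above.

def fillrate_mpix(pipes, core_mhz):
    """Peak fillrate in megapixels/s for a classic fixed-pipe design."""
    return pipes * core_mhz

# Speculated R500/R520: 32 pipes at 500-650MHz
print(fillrate_mpix(32, 500), fillrate_mpix(32, 650))   # 16000 20800

# Speculated G70: 24 pipes at 550-700MHz
print(fillrate_mpix(24, 550), fillrate_mpix(24, 700))   # 13200 16800
```

Even at the low end of its rumored clock range, a 32-pipe part out-fills a 24-pipe part unless the latter clocks roughly a third higher.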
 

Ronin

Diamond Member
Mar 3, 2001
4,563
1
0
server.counter-strike.net
Originally posted by: Falloutboy
Originally posted by: BouZouki
R520 will have 45 pipelines and be clocked at 800MHz/1.8GHz GDDR4.

SM 4 support.

HDR 2.0 support. Allows AA.

512MB 512 bit memory bus for maximum aa support.

AMR will be much more efficient than SLi and double your performance when a second card is added.

And you know all this how??


He doesn't, and he's blowing smoke up your ass. That, and if you look at what he said, you'd realize he didn't have a clue to begin with, so he made up random crap to post.
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
Originally posted by: Sentential
[snip: spec predictions quoted in full above]

Nvidia is not going to release a solution that is completely outclassed. Whatever ATI does, Nvidia will make sure to match it (and vice versa). (ie: Nvidia is not going to tout a 24-pipeline card as the best when ATI's is 32 pipelines.)

My guess is that the G70 will edge out the R520, if for no other reason than there is more brand new tech.

Sorry, but to end this right now: Nvidia has already officially stated that this architecture will be completely new, not based on ANY previous one. So I would count that out.

Can someone explain to me unified shaders? Is it merely the shaders being programmed to what the game needs?

-Kevin
 

Sentential

Senior member
Feb 28, 2005
677
0
0
Don't be so sure. 24 pipes with a very high fillrate will do just fine against the R500. Plus, at 90nm there are serious concerns about both heat and yield. It might be a matter of whether they can actually make it at all.

_______________________

This whole unified shader thing is just that: they lump all the pipes together and balance them accordingly. Almost like SLI in a sense, but not exactly. There is an internal BIOS-type device that load-balances the needs of the GPU.

I also forgot to mention that the R500 will have some sort of cache. It is a high-speed buffer that will allow anti-aliasing and the like to take no performance hit up to a certain resolution. (I forget what; I believe it's 1200/1076.)
 

ddogg

Golden Member
May 4, 2005
1,864
361
136
Originally posted by: Gamingphreek
Originally posted by: Sentential
[snip: spec predictions and reply quoted in full above]

geez...the R500 in the PS3 is not 48 pipes, it is 48 ALUs!!!!!! in full form: Arithmetic Logic Units. The shader core has 48 ALUs that can execute 64 simultaneous threads on groups of 64 vertices or pixels. ALUs are automatically and dynamically assigned to either pixel or vertex processing depending on load. The ALUs can each perform one vector and one scalar operation per clock cycle, for a total of 96 shader operations per clock cycle. Texture loads can be done in parallel to ALU operations.
EDIT: if you'll do a little research, it is shown that the R500 has 24 pipelines with 2 ALUs each, hence the 48 ALUs that everyone here has confused for 48 pipelines!
Info here
Hope this clears up the misleading information that the R500 has 48 pipes!!
THEREFORE it is pretty likely that the R520 will have 24 pipelines to start with, and maybe later up to 32 (highly unlikely though!!).

And according to the initial coverage by AnandTech, the PS3 GPU is able to produce up to 2 teraflops, in comparison to 1 teraflop for the R500 in the Xbox 360, so it seems a little clearer now which GPU is more powerful.
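The ALU arithmetic above checks out; here is a quick sketch (all figures are the quoted Xbox 360 numbers, not measurements):

```python
# 48 ALUs, each issuing one vector + one scalar op per clock,
# at the quoted 500MHz Xbox 360 GPU clock.
alus = 48
ops_per_alu = 2                 # one vector + one scalar op per clock
clock_hz = 500_000_000

ops_per_clock = alus * ops_per_alu
print(ops_per_clock)                     # 96 shader ops/clock, as quoted

print(ops_per_clock * clock_hz / 1e9)    # 48.0 billion shader ops/s
```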
 

ddogg

Golden Member
May 4, 2005
1,864
361
136
Originally posted by: BouZouki
[snip: the made-up R520 specs quoted above]

looks like bouzoki has blown a fuse...!!! he must be on some drugs, acting real strange the past 10 posts or so. Better stop posting for a while before you further make a fool of yourself!! :D
 

ddogg

Golden Member
May 4, 2005
1,864
361
136
Originally posted by: Ronin
[snip: BouZouki's made-up specs and the replies quoted above]

LOL
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Originally posted by: Insomniak
Originally posted by: AnnihilatorX
Go to gamespot E3 report
http://www.gamespot.com/e3/index.html

I HIGHLY recommend watching the KillZone Video footage
It's totally unbelievable that it's not motion pictures


Jesus Christ, you're right - that made final fantasy look quaint.

I hope it gets ported to PC so a mouse can do it justice. I'm not looking forward to trying to play that with that Batarang PS3 controller.

LMAO!!!!!!!!! I agree. That's the main reason I stay away from consoles. Because of the clumsy "batarangs". :D

 

Ackmed

Diamond Member
Oct 1, 2003
8,499
560
126
How can you have an "educated guess" based on nothing but rumors?

I don't know which will be faster; I hope they are deadlocked in every benchmark. Hopefully that will make them both try harder.
 

biostud

Lifer
Feb 27, 2003
19,952
7,049
136
Originally posted by: ddogg

geez...the R500 in the PS3 is not 48 pipes, it is 48 ALUs!!!!!!

Basically, the R500 is used in the Xbox 360, not the PS3 :p

the R500 is ~150M transistors (which is less than NV40's 222M)
the nVidia RSX for the PS3 will be ~300M transistors

So, for both the G70 and R520 to double the speed of current graphics cards, I would think it would take a few more transistors than the chip in the Xbox 360, even though it's a new design.


 

Sentential

Senior member
Feb 28, 2005
677
0
0
Originally posted by: Sentential
Dont be so sure. 24 pipes with a very high fillrate will do just fine against the R500.


Vr-Zone/HKEPC:

The G70 should be made on the great (well, small) 0.11 micron fabrication process from TSMC and will have at least 8 more pixel pipelines than the current fastest product, yes indeed 24 pixel pipelines (yummie!), all clocked at roughly 430MHz. That's 10 gigapixels per second and thus 60% more than the 6800 Ultra. Memory is supposed to be 256MB of GDDR3 clocked at 1.4GHz.

0.11 micron process at TSMC
430MHz core / 1.4GHz 256MB GDDR3 memory
256-bit memory interface
38.4GB/s memory bandwidth
10.32 Gpixels/s fill rate
860M vertices/second
24 pixels per clock
400MHz RAMDACs
NVIDIA CineFX 4.0 engine
Intellisample 4.0 technology
64-bit FP texture filtering & blending
NVIDIA SLI ready (7800 GTX only)
DX 9.0 / SM 3.0 & OpenGL 2.0 supported
G70 comes in 3 models: GTX, GT and Standard
Single card requires min. 400W PSU with 12V rating of 26A
SLI configuration requires min. 500W PSU with 12V rating of 34A
Launch: 22nd of June

Hit the nail on the head. While it is still possible that the Ultra has 32 pipes, I would say it is unlikely, as I have said before.
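Checking those quoted figures against each other (a quick sketch; the specs themselves are still leaks):

```python
# Fill rate: 24 pipes at 430MHz.
fillrate_gpix = 24 * 430 / 1000
print(fillrate_gpix)                    # 10.32 Gpix/s, matching the quote

# vs 6800 Ultra (16 pipes at 400MHz): the "60% more" claim.
ultra_gpix = 16 * 400 / 1000
print(round(fillrate_gpix / ultra_gpix, 4))   # 1.6125, i.e. ~61% more

# Memory bandwidth on a 256-bit (32-byte) bus. Note the quoted 38.4GB/s
# actually implies 1.2GHz effective memory, not the 1.4GHz also quoted.
print(1_200_000_000 * 32 / 1e9)         # 38.4 GB/s
```

So the fillrate and "60% more" claims are internally consistent, but the memory clock and bandwidth figures in the leak disagree with each other.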
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Given that the G70 in the PS3 runs at 550MHz, why should the PC version run at 430?

Now, since the PS3 marketing team stated that it is faster than 2x 6800U: 24 pipes x 550 = 13200 fillrate. 2x 6800U = 2x(16x400) = 12800.

Also, about the R500... what if it is just an R520 with embedded RAM onboard? Maybe it has nothing to do with the R600. ATi just might surprise everyone and come out with a unified architecture in the R520, no?
 

ddogg

Golden Member
May 4, 2005
1,864
361
136
Originally posted by: RussianSensation
[snip: fillrate math quoted above]

yeah, and also, since it's on 110nm it should be at least 550, or even 600 if they really want to clock it high
 

Killrose

Diamond Member
Oct 26, 1999
6,230
8
81
When you look at current games vs. current high-end card performance, even with 4xAA and 8xAF at high resolutions, we don't really need the R520/G70. Maybe with HDR enabled in Far Cry, but that's about it.

Don't you think these cards are a little early compared to any real need? :confused:
 

ddogg

Golden Member
May 4, 2005
1,864
361
136
well, with games based on the U3 engine coming out we DEFINITELY require these cards... always better to have fast cards rather than slow ones
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Given that the G70 in the PS3 runs at 550MHz, why should the PC version run at 430?

The PS3 uses a 90nm part fabbed by Sony, the PC part is a 110nm part fabbed at either TSMC or IBM.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
So, assuming the R520 uses a unified shader architecture, isn't it supposed to be more efficient and potentially faster than if it were using fixed shaders? Why do many people around here think unified shaders will make it slower?

Technically, there could be some overhead to the load balancing of shaders, but in general the gains of load balancing far outweigh the overhead if implemented correctly. So explain to me why unified shaders would slow down performance, when I think they should at least offer more flexibility, and possibly more performance?
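A toy model of that point (purely illustrative numbers; no real GPU scheduler works like this): with fixed pools, the frame waits on whichever pool is the bottleneck, while a unified pool soaks up the whole load even after paying some balancing overhead.

```python
def frame_time_fixed(vertex_work, pixel_work, vs_units, ps_units):
    # Each pool only does its own kind of work; the frame is limited
    # by the slower pool, and the other pool sits partly idle.
    return max(vertex_work / vs_units, pixel_work / ps_units)

def frame_time_unified(vertex_work, pixel_work, units, overhead=1.05):
    # One pool handles both kinds of work, with a small (assumed 5%)
    # load-balancing penalty.
    return (vertex_work + pixel_work) / units * overhead

# A pixel-heavy frame: 10 units of vertex work, 90 of pixel work,
# split as 8 VS + 16 PS units vs 24 unified units.
print(frame_time_fixed(10, 90, 8, 16))    # 5.625 -- PS pool bottlenecks
print(frame_time_unified(10, 90, 24))     # 4.375 -- wins despite overhead
```

Flip the workload to vertex-heavy and the fixed design suffers even more, which is exactly the flexibility argument.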
 

imported_Ged

Member
Mar 24, 2005
135
0
0
Rather than start a new thread about Xbox 360 graphics vs. PS3 graphics... I thought I'd just post here.

My impression from the videos that I have seen from E3...

Xbox 360 games look like current DX9.0 games in the best cases and, in the worst, like Xbox 1 games. Microsoft also looks like they are throwing all the hype and marketing at the Xbox 360 they can.

The PS3 games that I saw were crazy. They looked like pre-rendered awesomeness. I'm glad that they stopped the Unreal Engine 3 demo to show that it wasn't pre-rendered, 'cause it's just magical the way it looked. The F1 racing demo looked like I was watching via a camera on the SPEED channel. I forget the name of the war game they demoed, but that was amazing as well.

Basically, PS3 made me search for my jaw on the ground... and Xbox 360 made me wish they had something better. Perhaps it's a question of how much development has been done on the respective games, but PS3 developers only had a couple months to work out their demos. Hats off to PS3/Cell/STI/NVIDIA for doing such a good job making the PS3 easy to code for. If the Cell platform is THAT easy to code for, very good things are to be expected.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Originally posted by: ddogg
well, with games based on the U3 engine coming out we DEFINITELY require these cards... always better to have fast cards rather than slow ones

U3 is not due for another 12 to 18 months yet... So even if we definitely need these cards for U3, we don't need them right now, or for at least a year.
And by the way, X800s and 6800s are fast cards. ;)

 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Originally posted by: munky
[snip: unified shader question quoted above]

Well, does anyone here have a good definition of what unified shaders are? Links, maybe?
Munky, it could just as easily be slower as faster. If the idea of unified shaders is to make the chip more efficient, that's a plus, but it still needs to be done correctly. IMHO,
ATI could botch it just as easily as it could ace it. We won't know for a while yet.

 

fstime

Diamond Member
Jan 18, 2004
4,382
5
81
Originally posted by: Ronin
[snip: BouZouki's made-up specs and the replies quoted above]



YOU DON'T KNOW WHAT YOU'RE TALKING ABOUT, I HAVE CONNECTIONS WITH ATI AND NVIDIA, MMMKAY, I GET FREE VIDEO CARDS. MMMKAY

THOSE SPECS ARE LEGIT, BECAUSE I HAVE CONNECTIONS WHICH I CANNOT NAME RIGHT NOW BECAUSE ITS A SEKRAT.

Obviously someone with enough common sense would know I was joking.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
No idea who will win, but I'm just judging based on the rumors floating around.

The G70 looks pretty solid and will probably edge out the ATI card. As for the array of ALUs: I thought Nvidia tried and failed at this with the NV30?

I remember them touting that the chip's architecture wasn't a conventional pipeline but instead an array of units that would be used as needed.

I also seem to remember they were saying they had 32 units.
We all know how that turned out, don't we? :)
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: Genx87
[snip: NV30 "array of units" recollection quoted above]

NV said it was an array of units, but we know now that it was a 4x2 architecture, meaning 4 pipelines with 2 texture units per pipe. That means when you're not doing multitexturing, it's no better than a regular 4-pipe design. That's just one of the many reasons the FX cards could not compete with the R300 cards, which had 8 pipelines.

Personally, I think they only described it that way in order to confuse people and hide the truth about the design. There was nothing revolutionary in the actual pipelines; it was still basically a conventional design.

Also, the infamous FX 5800 had a 128-bit memory bus, whereas the R300 had a 256-bit memory bus. So both the GPU and the memory had to run at ridiculously high clock rates just to stay even with the R300, because they had half the pipelines and half the memory bus width. If that were the only issue, things would not have been so bad, but since the FX cards also had poor DX9 shader performance, that basically sealed their fate.

The whole point is, unified shaders were not present on NV30 cards, and if they had been, I doubt things would have turned out much different. AFAIK, unified shaders are basically ALUs that can work either as pixel shaders or vertex shaders, thus offering more flexibility when rendering a scene. We won't know if this architecture is better or worse until the actual cards are benchmarked.
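The bandwidth half of that FX 5800 vs R300 comparison, in numbers (a sketch using the commonly cited retail clocks):

```python
def bandwidth_gbs(bus_bits, effective_mhz):
    # bytes per transfer * transfers per second, in GB/s
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

print(bandwidth_gbs(128, 1000))   # FX 5800 Ultra: 128-bit @ 1GHz effective
print(bandwidth_gbs(256, 620))    # Radeon 9700 Pro: 256-bit @ 620MHz effective
```

Even with its memory clocked over 60% higher, the 128-bit FX 5800 Ultra still ends up with less raw bandwidth than the 9700 Pro.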