R600 News from VR-Zone.

Soccerman06

Diamond Member
Jul 29, 2004
5,830
5
81
Originally posted by: Slitherythesnake
I don't know how reliable VR-Zone is when it comes to news, but it appears they have R600 details.

I know VR-Zone is more reliable than the INQ, but that doesn't say much. The specs seem plausible, considering the move to 65nm. I think they might be exaggerating the pixel/shader pipe count, but hell, they've got another 7-8 months to work it out. They have been hyping up this whole architecture, so it might possibly be this good.
 

moonboy403

Golden Member
Aug 18, 2004
1,828
0
76
7 to 8 months means nothing

Look at the G71 by Nvidia; it was quite a flop, even though they had 7 to 8 months themselves to make it.
 

SolMiester

Diamond Member
Dec 19, 2004
5,330
17
76
Originally posted by: moonboy403
7 to 8 months means nothing

Look at the G71 by Nvidia; it was quite a flop, even though they had 7 to 8 months themselves to make it.


LOL... I reckon the G71 will outsell the R580. Not that that means it's a better card, but it's certainly not a flop?!
 
Mar 11, 2004
23,444
5,851
146
G71 is far from a flop. It's served its purpose fine. The 7900s fly off the shelves as quickly as they get there, and the X1900 cards barely seem to sell even at pretty large discounts. I've seen people in the FS/T forum who couldn't get rid of brand new ones for $400. I gave up trying to sell mine, sold my 7800GT instead, and am gonna use the X1900XT for a while. I might end up just sticking with it for a good long while instead of dealing with swapping cards and the like, depending on how well it fares in the long run. We'll see though.

Hmm, so if it ends up with 64 unified pipes, that's not really any more total pipes than R580, although it means they could go 32/32, 16/48, 24/40, or whatever else they can run.

I'm trying to remember some of the earlier rumors for R600. Wasn't there talk of 96 unified pipelines or something?

If they could come up with some type of software that dynamically allocates pipelines, so that developers don't have to specify the split, it could really take off. The only way I see dedicated pipelines being better is if they can add enough of them to overcome the deficiency, although I'm not sure how complex that would get. If nVidia managed 64/64 then yeah, that'd undoubtedly be better, however complex it might be. If they get 40/40 with G80 I think it could do pretty well, and I don't think 32/32 would be quite up to par, but then again, what do I know.
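As a toy illustration of that kind of dynamic allocation (purely hypothetical scheduling logic, not anything ATI has described), a driver could split a unified pool in proportion to pending work:

```python
# Toy model: split a unified pool of shader units between vertex and
# pixel work in proportion to the pending workload, instead of the
# fixed vertex/pixel split of older GPUs. Purely illustrative.

def allocate_pipes(total_pipes, vertex_work, pixel_work):
    """Return (vertex_pipes, pixel_pipes) proportional to pending work."""
    total_work = vertex_work + pixel_work
    if total_work == 0:
        half = total_pipes // 2
        return half, total_pipes - half
    vertex_pipes = round(total_pipes * vertex_work / total_work)
    # Keep at least one unit on each side so neither stage starves.
    vertex_pipes = max(1, min(total_pipes - 1, vertex_pipes))
    return vertex_pipes, total_pipes - vertex_pipes

# A balanced frame vs. a fill-rate-heavy frame on a 64-pipe part:
print(allocate_pipes(64, vertex_work=300, pixel_work=300))  # (32, 32)
print(allocate_pipes(64, vertex_work=100, pixel_work=700))  # (8, 56)
```

The point of the sketch is just that a unified part can behave like 32/32 one frame and 8/56 the next, which no fixed split can match.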
 

JBT

Lifer
Nov 28, 2001
12,094
1
81
Originally posted by: moonboy403
7 to 8 months means nothing

Look at the G71 by Nvidia; it was quite a flop, even though they had 7 to 8 months themselves to make it.

lol, are you serious??? I think Nvidia did quite well in the last two launches... Sure they're hard to find, but look around the boards here; they're only scarce because everyone seems to be buying them up.
 

MrX8503

Diamond Member
Oct 23, 2005
4,529
0
0
It also says it's Vista ready, DX10 ready, and would even take care of PhysX cards. Man, that's a lot for the R600 to live up to.
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Originally posted by: moonboy403
7 to 8 months means nothing

Look at the G71 by Nvidia; it was quite a flop, even though they had 7 to 8 months themselves to make it.

G71 is a smart refresh by Nvidia. It's a die shrink, which makes chips about 30% cheaper to produce, and they're selling 7900GTs like hotcakes at $299, even though it's pretty much identical (feature- and performance-wise) to the 7800GTX. And because they've die-shrunk the core, they can afford to charge $299 for the 7900GT instead of just cutting prices on the more expensive-to-manufacture 7800GTX.

The real waste was the 7800GTX 512MB; that card was just a hype machine - something to let some of the air out of ATI's balloon. And even though it wasn't really available in any quantities, it worked in deflecting attention, and those who needed uber high end dual-card setups paid well over $1000 for dual 7800GTX 512MB SLI.
 

akugami

Diamond Member
Feb 14, 2005
6,210
2,551
136
The G71 is a flop only if you think of it in terms of advancing technology. It truly doesn't bring anything new to the table. It is, however, an absolutely smart product. Reduce die size via a process shrink: this cuts costs, assuming you get similar yields (something that is in contention at this point). Reduce transistor count while retaining the same functionality as the previous generation: this also shrinks each chip. Reduce heat output and power consumption: this makes for a more stable card and a lower electricity bill, and means you can use less exotic cooling solutions. All of these save nVidia money while producing a good product that consumers are buying.

 

dunno99

Member
Jul 15, 2005
145
0
0
I think what would be interesting to see is the performance boost to XFire with the unified shader architecture. I think in XFire mode (I'm not totally sure about my theories, since I've never tested this out) at high resolutions, the vertex processor actually becomes the bottleneck. This is because each vertex has to be transformed on both cards to determine which card it belongs to (it's more complicated than that, since the triangles have to be clipped against the rendering configuration, i.e. AFR, tiled, split-screen, with the remaining pieces of the triangle distributed across the two cards according to the rendering configuration), basically reducing the vertex to pixel shader ratio by half. In essence (for X1900XT), it's going from 1:6 to 1:12 for vertex:pixel pipelines.

With the unified shader architecture, the system can dynamically allocate resources to balance out the bottlenecks if there is one...although only a faster CPU solves the CPU-bound problems.

As usual, correct me if I'm wrong.
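A quick arithmetic sketch of the ratio claim above (assuming the X1900XT's 8 vertex and 48 pixel units, and taking the premise that in CrossFire both cards transform every vertex):

```python
# Back-of-envelope check of the CrossFire ratio argument.
# Assumes the X1900XT's 8 vertex units and 48 pixel units; the premise
# is that in CrossFire every vertex gets transformed on BOTH cards.

vertex_units, pixel_units = 8, 48

# Single card: each vertex is transformed exactly once.
single_ratio = pixel_units / vertex_units          # 6.0, i.e. 1:6

# CrossFire: pixel units double across two cards, but so does the total
# vertex work (each card runs the full vertex stream), so effective
# vertex throughput per pixel unit is halved.
xfire_ratio = (2 * pixel_units) / vertex_units     # 12.0, i.e. 1:12

print(f"single card 1:{single_ratio:.0f}, CrossFire effective 1:{xfire_ratio:.0f}")
```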
 

Ika

Lifer
Mar 22, 2006
14,264
3
81
sounds cool. Wonder if ATI will beat Nvidia this time around...
 

Soccerman06

Diamond Member
Jul 29, 2004
5,830
5
81
Originally posted by: guoziming
sounds cool. Wonder if ATI will beat Nvidia this time around...

ATI didn't lose the last 2 rounds (R520 and refresh); it was more like a tie, but if you put AA+AF on and/or run a higher res, ATI comes out on top. SLI is better than Xfire, but I doubt more than 1% of the world has it, so it doesn't really affect the overall popularity/ability of the card.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Still not buying anybody's claims that Nvidia or ATI will be able to do what the PhysX cards can do, or, if they can, that they can do it at anywhere near the speed of the PhysX card.

On top of that, doesn't ATI require a proprietary API to access their physics?
Good luck getting anybody to seriously use it. While this is happening, the guys making the PhysX card keep working with devs to get their product supported. BTW, I'm pretty impressed with the number of titles working on support for the card, considering it currently has 0% market penetration.
 

BassBomb

Diamond Member
Nov 25, 2005
8,390
1
81
Originally posted by: akugami
The G71 is a flop only if you think of it in terms of advancing technology. It truly doesn't bring anything new to the table. It is, however, an absolutely smart product. Reduce die size via a process shrink: this cuts costs, assuming you get similar yields (something that is in contention at this point). Reduce transistor count while retaining the same functionality as the previous generation: this also shrinks each chip. Reduce heat output and power consumption: this makes for a more stable card and a lower electricity bill, and means you can use less exotic cooling solutions. All of these save nVidia money while producing a good product that consumers are buying.

You obviously don't know much about what's happened in the past... did you forget the X800 -> X850? What was that? It didn't reduce heat or change the process either.



 

5150Joker

Diamond Member
Feb 6, 2002
5,549
0
71
www.techinferno.com
Originally posted by: Genx87
Still not buying anybody's claims that Nvidia or ATI will be able to do what the PhysX cards can do, or, if they can, that they can do it at anywhere near the speed of the PhysX card.

On top of that, doesn't ATI require a proprietary API to access their physics?
Good luck getting anybody to seriously use it. While this is happening, the guys making the PhysX card keep working with devs to get their product supported. BTW, I'm pretty impressed with the number of titles working on support for the card, considering it currently has 0% market penetration.



Yep, I agree; nVidia's and ATi's solutions are both half-assed attempts at physics processing. If AGEIA can drop the price of their PhysX card and get a killer game, I'll gladly buy it.
 

jim1976

Platinum Member
Aug 7, 2003
2,704
6
81
Originally posted by: 5150Joker
Originally posted by: Genx87
Still not buying anybody's claims that Nvidia or ATI will be able to do what the PhysX cards can do, or, if they can, that they can do it at anywhere near the speed of the PhysX card.

On top of that, doesn't ATI require a proprietary API to access their physics?
Good luck getting anybody to seriously use it. While this is happening, the guys making the PhysX card keep working with devs to get their product supported. BTW, I'm pretty impressed with the number of titles working on support for the card, considering it currently has 0% market penetration.



Yep, I agree; nVidia's and ATi's solutions are both half-assed attempts at physics processing. If AGEIA can drop the price of their PhysX card and get a killer game, I'll gladly buy it.


Yeah, $300 for a card that can't give us anything worth mentioning right now is BS.
I may pay a lot of $$ for my gaming hobby, but this card isn't anywhere near my needs right now. It's just for show for the time being.
 

akugami

Diamond Member
Feb 14, 2005
6,210
2,551
136
Originally posted by: BassBomb
Originally posted by: akugami
The G71 is a flop only if you think of it in terms of advancing technology. It truly doesn't bring anything new to the table. It is, however, an absolutely smart product. Reduce die size via a process shrink: this cuts costs, assuming you get similar yields (something that is in contention at this point). Reduce transistor count while retaining the same functionality as the previous generation: this also shrinks each chip. Reduce heat output and power consumption: this makes for a more stable card and a lower electricity bill, and means you can use less exotic cooling solutions. All of these save nVidia money while producing a good product that consumers are buying.

You obviously don't know much about what's happened in the past... did you forget the X800 -> X850? What was that? It didn't reduce heat or change the process either.

I don't get what you're trying to say. Please clarify.
 

ronnn

Diamond Member
May 22, 2003
3,918
0
71
I think VR-Zone is generally pretty accurate regarding Nvidia, but with ATI they still think the R520 has 32 pipes.
 

TanisHalfElven

Diamond Member
Jun 29, 2001
3,512
0
76
7. The R600 will also be the first practical implementation of ATI's GPU concept. This is something we would be very interested in seeing because if this works as well as ATI claims, then apart from cutting down CPU load, it might put certain PhysX processor manufacturers out of business, simply because ATI cards would not need an additional card to do the necessary computations for physics. The onboard GPU will take care of it.

I don't get this. What's ATI's GPU concept?

3. 80/65nm fabrication process, though ATI wouldn't elaborate on the exact split as well as why they weren't sticking to just a single fabrication process.

And what does this mean?
 

Sable

Golden Member
Jan 7, 2006
1,130
105
106
Originally posted by: tanishalfelven
7. The R600 will also be the first practical implementation of ATI's GPU concept. This is something we would be very interested in seeing because if this works as well as ATI claims, then apart from cutting down CPU load, it might put certain PhysX processor manufacturers out of business, simply because ATI cards would not need an additional card to do the necessary computations for physics. The onboard GPU will take care of it.

I don't get this. What's ATI's GPU concept?

3. 80/65nm fabrication process, though ATI wouldn't elaborate on the exact split as well as why they weren't sticking to just a single fabrication process.

And what does this mean?

1. I get the feeling they missed a couple of letters out. What they're referring to is using the graphics core to process physics instead of using a separate add-in card like the PhysX.

2. At the moment ATI are producing GPUs using a 90nm process. It refers to the length of the gates on the transistors used in the core (I think). Reducing the size to 80nm will make the die smaller, lowering production costs, and also allows for higher clocks and lower temps.

What they mean is that they will begin production of the R600 on the 80nm process, and during its production life they plan to drop the process size down to 65nm.
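To put rough numbers on that: die area scales with the square of the linear feature size, so a shrink saves more than it first appears. This is idealized square-law scaling only; real savings also depend on yields and any redesign.

```python
# Idealized die-area scaling: area goes with the square of the linear
# feature size, so a process shrink compounds. Ignores yield, pad
# limits, and redesign; just the first-order geometry.

def relative_area(old_nm, new_nm):
    """Die area at the new node as a fraction of the old node's area."""
    return (new_nm / old_nm) ** 2

print(f"90nm -> 80nm: {relative_area(90, 80):.0%} of the original area")
print(f"90nm -> 65nm: {relative_area(90, 65):.0%} of the original area")
```

So the planned 80nm start would cut area to roughly 79% of a 90nm die, and the later 65nm drop to roughly 52%, which is why the mid-life shrink is worth the trouble.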
 

thilanliyan

Lifer
Jun 21, 2005
12,046
2,261
126
Originally posted by: Sable
2. At the moment ATI are producing GPUs using a 90nm process. It refers to the length of the gates on the transistors used in the core (I think).

Yes, I think it is the width of the gates; coincidentally, I just learned that in my Materials Physics course yesterday.