New Details on R600 emerging - ReScheduled to MAY Launch


magomago

Lifer
Sep 28, 2002
10,973
14
76
Originally posted by: apoppin
and it's AMD, DAAMiT
:D

they have decent marketing ... unlike former ATi

i can't believe you said that - AMD has sh**ty marketing

better than the nonexistent variety they had 5 years ago, or even 10 years ago

but it still sucks
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: magomago
Originally posted by: apoppin
and it's AMD, DAAMiT
:D

they have decent marketing ... unlike former ATi

i can't believe you said that - AMD has sh**ty marketing

better than the nonexistent variety they had 5 years ago, or even 10 years ago

but it still sucks
compare [oops, *contrast*] with ATi's crap and directionless marketing
:Q

AMD doesn't spend the money like intel, if that's what you mean ;)

--as they got spending capital, their advertising improved dramatically over the last few years ... remember their ads of 5 years ago [gag]

the *ONE thing* that AMD lacked was ...


... ATi

i believe it's all *coming together* for them :)

 

the Chase

Golden Member
Sep 22, 2005
1,403
0
0
Well I luv to speculate and follow the news on various hardware, and vid cards especially. And I'm sure R600 is gonna be a rockin' card. But when all is said and done, the Beyond3D guess/inside info? of it being 5-10% faster than G80 sounds about right. Just enough to sell cards and be competitive, so Nvidia can refresh and jump ahead by 5% to sell more cards, so AMD can refresh and sell more cards, and repeat. They probably have the tech to make these things 1000% faster right now, but then nobody would have to upgrade for 3-5 years, and what would be the point of that?
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
i don't believe that

i think they *struggle* to reach each new 'high'

only occasionally do we get a 9700p ... or an 8800GTX

if nvidia could pull out a card that was twice as fast, they would have already done it
--just to *destroy* ATi ;)
 

Nelsieus

Senior member
Mar 11, 2006
330
0
0
As Uttar explained, there is still more performance left in G80 that will become apparent with driver releases/optimizations. So, imo, a supposed +5% to +10% R600 lead could shrink to 0% or even go negative after such driver optimizations release by the time R600 is ready to launch in late March.

Granted, there will also be room for R600 to grow as well. But by the time that takes place, I'm assuming it'll be time to start discussing G81, since driver maturation is a months-long process.

Nonetheless, it will be interesting to see how R600's massive bandwidth will be utilized. And hearing that the top-end MSRP will be $599, it will definitely be more competitive vs. a $549 8800GTX (though final performance will determine that, I suppose).

I'm just happy to hear we have a launch date. Now I can at least have a date to count down to. :p

Nelsieus
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: thilan29
Originally posted by: apoppin
153.6 GB/s [is] dwarfing 8800GTX and its 86.4 GB/s ...

However, we have yet to see how exactly that bandwidth will be used. Correct me if I'm wrong, but bandwidth by itself doesn't mean much unless there is a core able to fully utilize it??

That 77% bandwidth advantage obviously won't translate directly into a 77% performance advantage (if ATI managed that I would sell my GTS instantly and get the equivalent ATI card).

This is very important: memory bandwidth is a distant second in terms of effects on performance, considering the amounts we are talking about right now.

Compare something like an X1900 XTX vs X1950 XTX, or a Geforce 6600 GT vs Geforce 6800.

This is almost analogous to the AM2 (K8) vs LGA775 (Core) memory bandwidth comparison.

We will likely only see pronounced effects at considerable resolution and AA + AF levels. R600's shader capabilities are far more important.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: coldpower27
Originally posted by: thilan29
Originally posted by: apoppin
153.6 GB/s [is] dwarfing 8800GTX and its 86.4 GB/s ...

However, we have yet to see how exactly that bandwidth will be used. Correct me if I'm wrong, but bandwidth by itself doesn't mean much unless there is a core able to fully utilize it??

That 77% bandwidth advantage obviously won't translate directly into a 77% performance advantage (if ATI managed that I would sell my GTS instantly and get the equivalent ATI card).

This is very important: memory bandwidth is a distant second in terms of effects on performance, considering the amounts we are talking about right now.

Compare something like an X1900 XTX vs X1950 XTX, or a Geforce 6600 GT vs Geforce 6800.

This is almost analogous to the AM2 (K8) vs LGA775 (Core) memory bandwidth comparison.

We will likely only see pronounced effects at considerable resolution and AA + AF levels. R600's shader capabilities are far more important.

it is *interesting* to see my original quote taken so FAR out of its *original context*. :p

we already agreed that +10% over the gtx seemed *reasonable* and that the bandwidth wouldn't mean "much" ;)

i was just commenting on the 'amount' of bandwidth over a 512-bit interface
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
I'm just glad that Crossfire master cards / slave cards are gone. That way we don't have two different versions of the flagship card at different price points but the same performance specs. Going dual GPU will be easier and less expensive this round.
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: apoppin
Originally posted by: coldpower27
Originally posted by: thilan29
Originally posted by: apoppin
153.6 GB/s [is] dwarfing 8800GTX and its 86.4 GB/s ...

However, we have yet to see how exactly that bandwidth will be used. Correct me if I'm wrong, but bandwidth by itself doesn't mean much unless there is a core able to fully utilize it??

That 77% bandwidth advantage obviously won't translate directly into a 77% performance advantage (if ATI managed that I would sell my GTS instantly and get the equivalent ATI card).

This is very important: memory bandwidth is a distant second in terms of effects on performance, considering the amounts we are talking about right now.

Compare something like an X1900 XTX vs X1950 XTX, or a Geforce 6600 GT vs Geforce 6800.

This is almost analogous to the AM2 (K8) vs LGA775 (Core) memory bandwidth comparison.

We will likely only see pronounced effects at considerable resolution and AA + AF levels. R600's shader capabilities are far more important.

it is *interesting* to see my original quote taken so FAR out of its *original context*. :p

we already agreed that +10% over the gtx seemed *reasonable* and that the bandwidth wouldn't mean "much" ;)

i was just commenting on the 'amount' of bandwidth over a 512-bit interface

I just wanted to make sure this point was made clear, regardless of the "original context". Even if you had already agreed with it, I simply wanted to emphasize it.

By simple arithmetic, the bandwidth over a 512-bit interface is 33% higher than over a 384-bit interface if the memory clocks are identical. In the case of the R600, if the indicated RAM clocks are true, you get an almost 78% increase in bandwidth: a 33% increase in memory clock on top of the 33% increase in bus width.

Though it is nice to see a video card breaching the 100GB/s memory bandwidth barrier.
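A minimal back-of-the-envelope sketch of that arithmetic, assuming the rumored R600 clocks (2400MHz effective GDDR4) against the 8800GTX's published 384-bit/1800MHz configuration:

```python
# Peak memory bandwidth: bus width in bytes times the effective memory
# clock (MT/s), divided by 1000 to express the result in GB/s.
def mem_bandwidth_gbps(bus_bits: int, effective_clock_mhz: int) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_bits / 8 * effective_clock_mhz / 1000

gtx = mem_bandwidth_gbps(384, 1800)   # 8800GTX: 86.4 GB/s
r600 = mem_bandwidth_gbps(512, 2400)  # rumored R600: 153.6 GB/s

print(f"8800GTX: {gtx:.1f} GB/s, R600 (rumored): {r600:.1f} GB/s")
# ~78%, i.e. 1.33 (bus width) * 1.33 (memory clock) - 1
print(f"advantage: {r600 / gtx - 1:.0%}")
```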
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: josh6079
I'm just glad that Crossfire master cards / slave cards are gone. That way we don't have two different versions of the flagship card at different price points but the same performance specs. Going dual GPU will be easier and less expensive this round.

I agree, this is a nice feature; glad to see ATI matching Nvidia in this regard.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
I'm just wondering what the driver state of the new R600 chip will be. DX10, Vista and Crossfire along with other things could mean ATi potentially needs to revise their current drivers (if not a major overhaul like nVIDIA's).

Must be a nightmare for both the ForceWare and Catalyst teams. Drinking caffeine all day, staring at the computer screen for endless hours, looking for small bugs among thousands of lines of code, one fix screwing up something else, etc.

Anyway, let's play around with some numbers. This is my current guess for the R600.

X2800XTX (the fastest version)
80nm
720 million transistors
850MHz core clock
2400MHz memory clock
64 vec4 shaders
Unified shader architecture
Ring Bus technology
512-bit memory interface
1024MB GDDR4
153.6GB/s
16 ROPs
16 texture sampling units, i.e. 16 TMUs
32 texture filtering units
12" long
Dual slot
8+6-pin power (meaning it'll be sucking at a maximum of 250W)
PCI-e x16 version 2 certified
Crossfire built in.
Maybe some other stuff.
 

Drayvn

Golden Member
Jun 23, 2004
1,008
0
0
Do you think all that memory bandwidth is maybe for some newfangled AA or AF they have which could be better than nVidia's, or so we can have 32xAA or something?

From what I remember (or might have made up), AA and AF use a lot of memory and bandwidth, right?
 

BlizzardOne

Member
Nov 4, 2006
88
0
0
Originally posted by: Cookie Monster
I'm just wondering what the driver state of the new R600 chip will be. DX10, Vista and Crossfire along with other things could mean ATi potentially needs to revise their current drivers (if not a major overhaul like nVIDIA's).

Must be a nightmare for both the ForceWare and Catalyst teams. Drinking caffeine all day, staring at the computer screen for endless hours, looking for small bugs among thousands of lines of code, one fix screwing up something else, etc.

Anyway, let's play around with some numbers. This is my current guess for the R600.

X2800XTX (the fastest version)
80nm
720 million transistors
850MHz core clock
2400MHz memory clock
64 vec4 shaders
Unified shader architecture
Ring Bus technology
512-bit memory interface
1024MB GDDR4
153.6GB/s
16 ROPs
16 texture sampling units, i.e. 16 TMUs
32 texture filtering units
12" long
Dual slot
8+6-pin power (meaning it'll be sucking at a minimum of 250W)
PCI-e x16 version 2 certified
Crossfire built in.
Maybe some other stuff.

How'd you figure a minimum of 250W?

If it's got the PCIe slot providing 75W, the 6-pin providing 75W and the 8-pin providing 100W (or whatever) to make 250W, that would be the maximum... otherwise it'd exceed the power provided and just fall over.

Aside from that, I mostly agree... though maybe 96 vec4 shaders, core @ 720-750MHz, and I reckon they can squeeze it onto a 9" PCB to satisfy those who don't have monster cases, like CM Stackers :)
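A tiny sketch of that power budget, using the figures from the post above (the 8-pin value is the poster's "(or whatever)" guess; the PCIe spec itself rates the 8-pin connector at 150W):

```python
# A card's maximum sustainable draw is the sum of what each power
# source can deliver. Values follow the post above: slot 75W, 6-pin
# 75W, 8-pin ~100W (a guess; the PCIe spec rates the 8-pin at 150W).
POWER_SOURCES_W = {
    "PCIe x16 slot": 75,
    "6-pin PEG connector": 75,
    "8-pin PEG connector": 100,  # poster's estimate
}

# 250W here is a ceiling, not a floor -- drawing more than the sources
# provide would "just fall over", as the post puts it.
print(f"maximum draw: {sum(POWER_SOURCES_W.values())}W")
```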
 

Nelsieus

Senior member
Mar 11, 2006
330
0
0
Originally posted by: josh6079
I'm just glad that Crossfire master cards / slave cards are gone. That way we don't have two different versions of the flagship card at different price points but the same performance specs. Going dual GPU will be easier and less expensive this round.

With how high PSU requirements are getting, I'm beginning to think dual-GPU setups are a little less feasible than they used to be.

At least it drove me away from my plans to go that route.

Nelsieus
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Yea, my bad. Maximum of 250W. (But doesn't PCIe version 2 raise the maximum power draw of 75W through the slot to something much higher?)

96 vec4 shaders is just too much. I'm thinking most likely 64 vec4 shaders, because 64 vec4 shaders @ 675MHz match the max theoretical shader performance of 128 scalar shaders @ 1350MHz.

So I do think it will be 64 vec4 shaders. It's already enough to have twice the shader power of R580.
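A quick sanity check of that parity claim, treating the R600 unit count and clock as pure speculation:

```python
# Compare peak shader throughput in component operations per second:
# a vec4 ALU issues 4 component ops per clock, a scalar ALU issues 1.
def comp_gops(units: int, width: int, clock_mhz: float) -> float:
    """Peak component operations per second, in billions (Gops/s)."""
    return units * width * clock_mhz / 1000

r600_guess = comp_gops(64, 4, 675)  # speculated: 64 vec4 ALUs @ 675MHz
g80 = comp_gops(128, 1, 1350)       # G80: 128 scalar ALUs @ 1350MHz

# Both come out to 172.8 Gops/s, which is where 675MHz comes from.
print(f"R600 guess: {r600_guess:.1f} Gops/s, G80: {g80:.1f} Gops/s")
```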
 

trOver

Golden Member
Aug 18, 2006
1,417
0
0
this thing is DirectX 10 compatible, am I correct?

also - a PCIe slot can provide 75W of power? wow, that's a lot running through a mobo...
 

miniMUNCH

Diamond Member
Nov 16, 2000
4,159
0
0
Originally posted by: trOver
this thing is DirectX 10 compatible, am I correct?

also - a PCIe slot can provide 75W of power? wow, that's a lot running through a mobo...

Nope. Sorry... The R600, designed for DX10 gaming, will, sadly, not support DX10. ;)

Yes... it will support DX10.
 

Zebo

Elite Member
Jul 29, 2001
39,398
19
81
Originally posted by: trOver


also - a PCIe slot can provide 75W of power? wow, that's a lot running through a mobo...

You never heard of Pentium D's...? At the high end, pumping over 180W through the motherboard stock... up to 300W OCed.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Originally posted by: Zebo
Originally posted by: trOver


also - a PCIe slot can provide 75W of power? wow, that's a lot running through a mobo...

You never heard of Pentium D's...? At the high end, pumping over 180W through the motherboard stock... up to 300W OCed.

Never mind the Quad FX from AMD. Those suckers can really eat up power.
 

trOver

Golden Member
Aug 18, 2006
1,417
0
0
wow, ok, I just really never thought about it that much, 'cause my 6600 only uses 65W, and my 7800GTX has its own power cable, so I thought all the power went through that.
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: Cookie Monster
Yea, my bad. Maximum of 250W. (But doesn't PCIe version 2 raise the maximum power draw of 75W through the slot to something much higher?)

The PCI-SIG website is a bit confusing and you can't download the spec sheets without being a member. But from what I can gather:

PCIe 1.0 - Supplies 75W through the slot
PCIe 1.0 x16 Graphics 150W-ATX Spec 1.0 - Supplies 150W through the slot
PCIe 2.0 - Supplies 225W through the slot
 

coolpurplefan

Golden Member
Mar 2, 2006
1,243
0
0
I'm freaking crossing my fingers that my Seasonic S-12 500 watt will survive. I don't want to pay more than $250, which means I'd be waiting until next summer. By that time UT2007 will be out (I hope). According to the power supply calculator on extreme outervision, I'd be at around 500 watts at peak. Holy cow. Oh well, at least the Seasonics are known for being efficient at full load.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Originally posted by: Creig
Originally posted by: Cookie Monster
Yea, my bad. Maximum of 250W. (But doesn't PCIe version 2 raise the maximum power draw of 75W through the slot to something much higher?)

The PCI-SIG website is a bit confusing and you can't download the spec sheets without being a member. But from what I can gather:

PCIe 1.0 - Supplies 75W through the slot
PCIe 1.0 x16 Graphics 150W-ATX Spec 1.0 - Supplies 150W through the slot
PCIe 2.0 - Supplies 225W through the slot

Thanks. R600 is rumoured to be PCIe 2.0 compliant. Not sure how that will pan out, though.

I found this on VR-Zone.

Link

Looking at this news post in our forum, there is a rumor currently coming out of websites in China that AMD is planning to launch R600 on the 8th or 9th of March, officially named as the Radeon X2K series. The mass-production silicon for the R600 cards is expected to still be at A12 at launch.

It seems ATi has successfully taped out A15 silicon that is able to work at 1GHz, so we can probably hope to see another series of R600 cards with higher clocks in the near future. The article suggests ATi's next-generation R680 is scheduled for a Q3 launch, but also that NVIDIA will have G81 and its next-gen GPU ready to compete.

According to our sources (still a rumor), R600 will be revealed as the X2800 series. There are two variants of the XTX: 1GB and 512MB. The X2800XTX runs at 750MHz core and 2.2GHz memory on a 512-bit interface with GDDR4, while the XT runs at 600MHz core and 1.8GHz memory on a 512-bit interface with GDDR3.

Looks like the XTX will be the beast, and there will be a huge gap between the XT and the XTX, just like the GTX/GTS scenario.

It's going to be close!
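Plugging those rumored clocks into the same peak-bandwidth arithmetic used earlier in the thread quantifies that XTX/XT gap (all figures hypothetical until the cards actually launch):

```python
# Peak bandwidth = bus width in bytes * effective memory clock,
# applied to VR-Zone's rumored X2800 clocks.
def mem_bandwidth_gbps(bus_bits: int, effective_clock_mhz: int) -> float:
    return bus_bits / 8 * effective_clock_mhz / 1000

xtx = mem_bandwidth_gbps(512, 2200)  # rumored X2800XTX: 140.8 GB/s (GDDR4)
xt = mem_bandwidth_gbps(512, 1800)   # rumored X2800XT: 115.2 GB/s (GDDR3)

print(f"XTX: {xtx:.1f} GB/s, XT: {xt:.1f} GB/s, gap: {xtx / xt - 1:.0%}")  # ~22%
```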
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
To make a jump from 750MHz to 1GHz would represent a HUGE increase in clockspeed. I wonder if 1GHz is a typically achievable speed for the A15, or if they stumbled across a cherry core that just happened to work at that speed.