All AMD R6xx chips are 65 nanometre chips, now


apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: Gstanfor
Originally posted by: RyanVM
I don't find this too hard to believe and here's why. It makes sense to me that a company planning to release a chip roughly around the same time they're preparing to do a die shrink would spend some time doing the designs (80nm and 65nm) concurrently in the event it becomes necessary for whatever reason. I highly doubt the decision was "Hmm, 80nm doesn't work. We need to design a 65nm version quick!" I'm guessing the conversation was more along the lines of "OK, 80nm isn't working well. Let's delay a month and switch to plan B (65nm)."

In other words, I think they planned R600 to be an 80nm chip. I also think they worked on a 65nm design at the same time as a contingency plan. When they saw they weren't going to have an acceptable 80nm chip, they delayed the release to ramp up 65nm production. Finally, many months later they announced the delay under the silly premise of having a coherent product line (as if that's ever stopped them in the past), since there's no way you can switch process technology 6 weeks before release.

Makes sense to me anyway :p

I think it's far more likely that what happened was R600 failed to work correctly and AMD moved straight to its refresh version (just like nvidia cancelled nv30 production and moved to nv35). It just so happened that the refresh was on a different process node, that's all.

nvidia was honest about moving to a new chip at the time, I wonder if AMD will be or if they will try to pass it off as R600?

that's actually what *he said*

:p

i made the same mistake and he jumped all over me
:Q

well

:D
=============
Originally posted by: Zstream
A dragon that eats wolves for a snack.

in a fantasy novel

Dragons don't exist :p

:roll:
 

schneiderguy

Lifer
Jun 26, 2006
10,801
91
91
Weren't people claiming around a year ago that ATI would have so much of a headstart in the DX10 generation of cards because of the xbox360 gpu with its unified shaders and the experience they had "helping" Microsoft develop DX10? :confused:

... I guess it didn't help them much :eek: Hopefully for AMD, R600 will be at least 10-15% faster than G80, or I doubt they will get many sales from their high end cards this generation.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: schneiderguy
Weren't people claiming around a year ago that ATI would have so much of a headstart in the DX10 generation of cards because of the xbox360 gpu with its unified shaders and the experience they had "helping" Microsoft develop DX10? :confused:

... I guess it didn't help them much :eek: Hopefully for AMD, R600 will be at least 10-15% faster than G80, or I doubt they will get many sales from their high end cards this generation.

yeah, me

but not so exaggerated as "so much of a headstart" blah blah :p

don't blame the lack of HW on the SW and vice versa
--they could still beat nvidia with the drivers ;)
 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
CeBIT: AMD Presents R600 DirectX10 GPU
AMD gave details about its upcoming R600 Graphics Processor Unit (GPU) at CeBIT, in a very interesting presentation for hardware enthusiasts.

Rick Bergman, Senior Vice President and General Manager of Graphics Product Group AMD, talked about AMD's first DirectX 10 GPU, which features an HDMI output, as well as a hardware decoder engine for MPEG-4 AVC/VC-1.

The R600 could be considered as the second generation of the ATI Xenos GPU, which powers Microsoft's Xbox 360 game console. It features 48 Unified Shaders and fully supports DirectX 10 and Shader Model 4.0.

In addition, the R600 is the first GPU that integrates the Universal Video Decoding (UVD) engine for hardware video decoding, plus a 3D engine for rendering. According to AMD, this configuration offers significantly higher performance, since the CPU workload during playback of HD video is transferred to the GPU, and specifically to the UVD engine itself. In addition, the shaders are off-loaded, since video decoding does not depend on the 3D engine.

Compared to previous GPU configurations that did not take advantage of the UVD engine, the R600 is able to decode 1080p Blu-ray and HD DVD video (MPEG-4 AVC) at 40Mbps, significantly higher than the previous 25Mbps, which was mainly limited by CPU speed. Dual-stream decoding is also supported, allowing the Universal Video Decoder to decode two video streams at the same time for the picture-in-picture features found in next-generation movie titles.

Furthermore, Bergman gave details about the HDMI digital output interface supported by the R600. AMD has chosen to integrate an HD-Audio controller into the GPU, and not into the south bridge chipset as rival Nvidia has announced so far. The HDCP encryption keys essential for playback of commercial HD video titles are stored in the GPU, allowing both the HD audio and video signals to be driven to the HDMI output after the necessary synchronization. Note that video and audio signals will be merged only when the HDMI interface is in use.

Bergman also said that AMD is preparing a 65nm version of the R600, which will feature lower power consumption and will be used in future DirectX 10-enabled notebooks, possibly available later this year.

However, power consumption of the R600 series of GPUs is high. The consumption of the high-end model is expected to be higher than 250W, while for the mainstream version (RV610), it will be around 100W. This could be an issue for OEM PC manufacturers. A low-end model (RV630) for notebooks will consume approximately 25-35W.

Bergman did not give any details about the availability of the R600 GPUs or the AMD OEM partners that will offer it. According to industry sources, the first products should be expected in April.
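A trivial check of the decode figures quoted in that article: 40Mbps with the UVD engine against the earlier CPU-limited 25Mbps is a 60% bitrate headroom gain (these are the article's claims, not independent measurements):

```python
# Decode-bitrate figures quoted in the CeBIT article (claims, not measurements).
cpu_limited_mbps = 25   # previous, CPU-bound ceiling
uvd_mbps = 40           # with the R600's UVD engine

gain = uvd_mbps / cpu_limited_mbps - 1
print(f"UVD bitrate headroom over CPU-limited decode: {gain:.0%}")
```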

I know the article *probably* means xenos & not R600 when it mentions 48 unified shaders -- i'm just tormenting (at least until we hear something more - there had *better be* more than 48US in r600 if AMD's graphics division wants any chance of survival).
 

Matt2

Diamond Member
Jul 28, 2001
4,762
0
0
Bergman also said that AMD is preparing a 65nm version of the R600, which will feature lower power consumption and will be used in future DirectX 10-enabled notebooks, possibly available later this year.

So pretty much the R600 we'll see in April-May is going to be 80nm and then R600's refresh will be 65nm.

I'd be wary of buying an R600 before I know that the 65nm version isn't going to come out 2 months later.

EDIT: Now that I read that article again, it makes NO mention of a 65nm high end R600 part. It only mentions DX10 enabled notebooks.
 

Matt2

Diamond Member
Jul 28, 2001
4,762
0
0
The article doesn't even make it sound like RV610 and RV630 are going to be 65nm.

Someone is lying out of their butts. I'm not shy to say that it is probably the Inq.

I can't believe that AMD would stand up on stage at CeBIT and say that R600 would not be 65nm and then give us 65nm.
 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
Speaking of the Inquirer

R600 launch to be biggest in GPU history

CeBIT 007 65nm GPUs launch with almost 200 hacks

By Theo Valich in Hanover: Saturday 17 March 2007, 19:09
WITH SO MANY HACKS constantly talking to AMD's partners about the R600 and tapping in the dark about the launch, everything is at fever pitch in the press office.

But the details are as follows. AMD is preparing the biggest launch in history of graphics chips.

Since, we understand, the company pulled an unexpected move with a complete switch to a 65 nm process at TSMC, it seems that a new manufacturing alliance is emerging, since AMD is currently in various partnership deals with IBM as well as Sony, Toshiba, Chartered, UMC and, of course, TSMC.

AMD will probably ship well over 100 million 65 nanometre chips in 2007, and decided to start their own graphics adventure with a bang. We learned that the firm is inviting a ton of partners for a huge 3D-wide PR briefing - close to 200 journos will arrive at the press event and the plan is for each and every member of the press to get a taste of 65 nano chippery. µ

AMD will probably ship well over 100 million 65 nanometre chips in 2007, and decided to start their own graphics adventure with a bang.
Well, I guess you could see things in that sort of light if you considered AMD's graphics marketshare as a balloon and nvidia's market share as a giant pin popping AMD's balloon :D

AMD is preparing the biggest launch in history of graphics chips.
nv30 & family had an expensive & glamorous launch too (though to be fair nv40's launch was pretty flash also).
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: Gstanfor
Speaking of the Inquirer

R600 launch to be biggest in GPU history

CeBIT 007 65nm GPUs launch with almost 200 hacks

By Theo Valich in Hanover: Saturday 17 March 2007, 19:09
WITH SO MANY HACKS constantly talking to AMD's partners about the R600 and tapping in the dark about the launch, everything is at fever pitch in the press office.

But the details are as follows. AMD is preparing the biggest launch in history of graphics chips.

Since, we understand, the company pulled an unexpected move with a complete switch to a 65 nm process at TSMC, it seems that a new manufacturing alliance is emerging, since AMD is currently in various partnership deals with IBM as well as Sony, Toshiba, Chartered, UMC and, of course, TSMC.

AMD will probably ship well over 100 million 65 nanometre chips in 2007, and decided to start their own graphics adventure with a bang. We learned that the firm is inviting a ton of partners for a huge 3D-wide PR briefing - close to 200 journos will arrive at the press event and the plan is for each and every member of the press to get a taste of 65 nano chippery. µ

AMD will probably ship well over 100 million 65 nanometre chips in 2007, and decided to start their own graphics adventure with a bang.
Well, I guess you could see things in that sort of light if you considered AMD's graphics marketshare as a balloon and nvidia's market share as a giant pin popping AMD's balloon :D

nv30 & family had an expensive & glamorous launch too (though to be fair nv40's launch was pretty flash also).

why does this launch seem like deja vu?
:confused:

wasn't there already a hard launch scheduled for r600 - to be the largest in GPU history ... blah blah blah ... this month? :p

the inq is running off at the BS mouth again ... with their 65nm speculation
:thumbsdown:
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
Originally posted by: Gstanfor
CeBIT: AMD Presents R600 DirectX10 GPU
AMD gave details about its upcoming R600 Graphics Processor Unit (GPU) at CeBIT, in a very interesting presentation for hardware enthusiasts.

Rick Bergman, Senior Vice President and General Manager of Graphics Product Group AMD, talked about AMD's first DirectX 10 GPU, which features an HDMI output, as well as a hardware decoder engine for MPEG-4 AVC/VC-1.

The R600 could be considered as the second generation of the ATI Xenos GPU, which powers Microsoft's Xbox 360 game console. It features 48 Unified Shaders and fully supports DirectX 10 and Shader Model 4.0.

In addition, the R600 is the first GPU that integrates the Universal Video Decoding (UVD) engine for hardware video decoding, plus a 3D engine for rendering. According to AMD, this configuration offers significantly higher performance, since the CPU workload during playback of HD video is transferred to the GPU, and specifically to the UVD engine itself. In addition, the shaders are off-loaded, since video decoding does not depend on the 3D engine.

Compared to previous GPU configurations that did not take advantage of the UVD engine, the R600 is able to decode 1080p Blu-ray and HD DVD video (MPEG-4 AVC) at 40Mbps, significantly higher than the previous 25Mbps, which was mainly limited by CPU speed. Dual-stream decoding is also supported, allowing the Universal Video Decoder to decode two video streams at the same time for the picture-in-picture features found in next-generation movie titles.

Furthermore, Bergman gave details about the HDMI digital output interface supported by the R600. AMD has chosen to integrate an HD-Audio controller into the GPU, and not into the south bridge chipset as rival Nvidia has announced so far. The HDCP encryption keys essential for playback of commercial HD video titles are stored in the GPU, allowing both the HD audio and video signals to be driven to the HDMI output after the necessary synchronization. Note that video and audio signals will be merged only when the HDMI interface is in use.

Bergman also said that AMD is preparing a 65nm version of the R600, which will feature lower power consumption and will be used in future DirectX 10-enabled notebooks, possibly available later this year.

However, power consumption of the R600 series of GPUs is high. The consumption of the high-end model is expected to be higher than 250W, while for the mainstream version (RV610), it will be around 100W. This could be an issue for OEM PC manufacturers. A low-end model (RV630) for notebooks will consume approximately 25-35W.

Bergman did not give any details about the availability of the R600 GPUs or the AMD OEM partners that will offer it. According to industry sources, the first products should be expected in April.

I know the article *probably* means xenos & not R600 when it mentions 48 unified shaders -- i'm just tormenting (at least until we hear something more - there had *better be* more than 48US in r600 if AMD's graphics division wants any chance of survival).

Maybe this is why they need to attain 1GHz core clocks? Or so I heard? I doubt it is only 48US if Xenos was "only" 48US. Minimum will be 64US and max would be 160US. Too bad they just will not let out the specs at least. What harm can it do?
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
the inq is running off at the BS mouth again ...
Can't believe I'm hearing apoppin complain about the Inq.'s BS. Normally he's supporting them and explaining how they don't get that many things wrong.

If they made the move to 65nm, great. If not, oh well. Meantime nVidia will just tweak their drivers and sell cards. ATi can dance in their pants for however long they feel like.
 

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
Originally posted by: Matt2
So was that Ruby movie rendered in real time on an R600 using DX10?

Yes it is and it beats Nvidia's Demo by far!
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: josh6079
the inq is running off at the BS mouth again ...
Can't believe I'm hearing apoppin complain about the Inq.'s BS. Normally he's supporting them and explaining how they don't get that many things wrong.

If they made the move to 65nm, great. If not, oh well. Meantime nVidia will just tweak their drivers and sell cards. ATi can dance in their pants for however long they feel like.

you heard me complain about the Inq before :p

when the article contradicts itself and known rules of logic, it is BS

-- and the 'hint' is --they use a LOT of words and say nothing
;)

 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Ok apoppin, if you don't believe this 65nm rumor, then what do you think the 'reason' for the R600's lateness is? :Q

The rumor makes sense to me. It coincides with the power consumption rumors, which were 300w a few months ago, and now ATI is 'officially' saying 180-200w depending on the card.

It makes more sense to me than any other 'reason' I can think of.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: SickBeast
Ok apoppin, if you don't believe this 65nm rumor, then what do you think the 'reason' for the R600's lateness is? :Q

The rumor makes sense to me. It coincides with the power consumption rumors, which were 300w a few months ago, and now ATI is 'officially' saying 180-200w depending on the card.

It makes more sense to me than any other 'reason' I can think of.

-ok ... this is *my opinion* -



i think r600 is about a year late ... they had a *serious* HW bug similar to x1800

in the *meantime* they have been working on r660 and mainstream 65nm products

well, they announced a Hard launch, thinking they "fixed" r600, but they didn't and cancelled it

BUT they respun r600 the 15th time and - voila - it *now works* ... at still high power consumption - 240w for the flagship; less for the lesser cards

and COINCIDENTALLY - *everything else* is ready - mainstream - and even the added crap they were planning for r660 [the HW decoding, etc] ... so AMD announces "we *planned* it this way"

so they release the working r600 finally ... next month

and we'll see the smaller and less power-hungry r660 when it's 'needed' to counter nvidia's 8950

^^ my take ^^
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
If the mainstream rv630 cards are ready, it would be stupid for Ati to not release them by now. Also, the performance target for r600 must be about 2x the performance of r580, so in that regard I don't care if they use 96 shaders or 48, as long as the performance is there.
 

Soccerman06

Diamond Member
Jul 29, 2004
5,830
5
81
Originally posted by: Cookie Monster
65nm? What, so the INQ thinks you can just jump to 65nm in 5 months?

What people don't understand is that going from 90nm to 80nm is possible because it's just an optical shrink of the GPU architecture. However, 80nm to 65nm is another story. 65nm is a totally different process, and the fact is that you cannot just shrink to 65nm. The architecture itself has to be revised, tweaked and changed to accommodate the new process.

Unless they had two teams working on both 80nm and 65nm from a long time ago, I just don't see this as possible. Scrapping the 80nm design is financially illogical. The time period for such a move is too short to be actually believable.

What's your basis for saying that? You don't know how much one chip rework costs compared to another, given their specific arch. It may be financially smart to do something like this because they don't have to burn all that money on 80nm and then rework it for 65nm. Until you see specific numbers, you can't say either way if it was a good move or a bad one.
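For what it's worth, the "optical shrink" point can be put in rough numbers: an ideal shrink scales die area with the square of the feature-size ratio. A quick Python sketch (the 420 mm² starting area is a made-up illustrative figure, not an actual R600 die size):

```python
# Ideal-scaling estimate: die area shrinks with the square of the
# feature-size ratio. Real shrinks recover less than this, and a jump
# like 80nm to 65nm needs a re-laid-out design, not just scaling.

def scaled_area(area_mm2: float, old_nm: float, new_nm: float) -> float:
    """Best-case die area after an optical shrink."""
    return area_mm2 * (new_nm / old_nm) ** 2

# Hypothetical 420 mm^2 die at 80nm (illustrative, not a real R600 figure)
area_80 = 420.0
area_65 = scaled_area(area_80, 80, 65)
print(f"80nm to 65nm ideal area: {area_65:.0f} mm^2 "
      f"({area_65 / area_80:.0%} of original)")
```

Even in this best case you only get about two thirds of the original area back, and 80nm to 65nm isn't an optical shrink at all, which is Cookie Monster's whole point.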
 

allies

Platinum Member
Jun 18, 2002
2,572
0
71
Originally posted by: munky
If the mainstream rv630 cards are ready, it would be stupid for Ati to not release them by now. Also, the performance target for r600 must be about 2x the performance of r580, so in that regard I don't care if they use 96 shaders or 48, as long as the performance is there.

Isn't the 8800GTX 2x the power of an R580...?

Actually it's as fast as them Crossfired, right? And the overhead of Xfire is like... 10%? So the R600 will be ~5-15% faster than the 8800GTX?
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: allies
Originally posted by: munky
If the mainstream rv630 cards are ready, it would be stupid for Ati to not release them by now. Also, the performance target for r600 must be about 2x the performance of r580, so in that regard I don't care if they use 96 shaders or 48, as long as the performance is there.

Isn't the 8800GTX 2x the power of an R580...?

Actually it's as fast as them Crossfired, right? And the overhead of Xfire is like... 10%? So the R600 will be ~5-15% faster than the 8800GTX?

I've looked at the shadermark results between r580, r580 CF and the g80. In a few cases the g80 is as fast as r580 CF, but in most cases it's about 1.7x as fast as the r580, and r580 CF is 1.95-2x the performance of r580. These are synthetic shader tests, and in actual games the results may vary, but it gives a good idea of how a gpu performs under heavy shader load.
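Taking those ratios at face value, the arithmetic behind the "~5-15% faster" guess works out roughly like this (munky's quoted shadermark ratios, not measured data):

```python
# Quick arithmetic on the ratios quoted above (relative to a single r580).
r580 = 1.0
g80 = 1.7                    # "about 1.7x as fast as the r580"
cf_low, cf_high = 1.95, 2.0  # "r580 CF is 1.95-2x the performance of r580"

# If R600 merely matched r580 Crossfire, its lead over g80 would be:
lead_low = cf_low / g80 - 1
lead_high = cf_high / g80 - 1
print(f"R600-at-CF-level vs g80: {lead_low:.0%} to {lead_high:.0%} faster")
```

i.e. an R600 that only matched r580 Crossfire throughput would land about 15-18% ahead of g80 on these synthetic numbers.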
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: Gstanfor
Speaking of the Inquirer

R600 launch to be biggest in GPU history

CeBIT 007 65nm GPUs launch with almost 200 hacks

By Theo Valich in Hanover: Saturday 17 March 2007, 19:09
WITH SO MANY HACKS constantly talking to AMD's partners about the R600 and tapping in the dark about the launch, everything is at fever pitch in the press office.

But the details are as follows. AMD is preparing the biggest launch in history of graphics chips.

Since, we understand, the company pulled an unexpected move with a complete switch to a 65 nm process at TSMC, it seems that a new manufacturing alliance is emerging, since AMD is currently in various partnership deals with IBM as well as Sony, Toshiba, Chartered, UMC and, of course, TSMC.

AMD will probably ship well over 100 million 65 nanometre chips in 2007, and decided to start their own graphics adventure with a bang. We learned that the firm is inviting a ton of partners for a huge 3D-wide PR briefing - close to 200 journos will arrive at the press event and the plan is for each and every member of the press to get a taste of 65 nano chippery. µ

Could the Inq climb up AMD's butt any further? Not only will it be the biggest launch in history but Godzilla will wrestle King Kong in a cage match with Elvis as the referee.