Now that ATI/AMD Lost the war....


coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: golem
Well, the HD 2900 has ~700 million transistors and the G80 has ~690 million, and since the 2900 is on a smaller (80nm) process it should, at least in theory, be cheaper for ATI to produce.

Mid-range and low-end R600 parts will be cheaper for ATI to produce because they are on a 65nm process, while nVidia's 8500/8600 are on an 80nm process.

Not sure of this, but don't fabs charge more if you use a more advanced process?

Typically, the smaller the node, the more expensive it gets if all other factors are held constant. For example, if both die sizes are the same and each process has the same set of advanced features, then the smaller node is more expensive.
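To put that in numbers, here is a minimal back-of-the-envelope sketch in Python. The wafer prices and defect density are invented for illustration, and the dies-per-wafer and yield formulas are just the usual textbook approximations, not anyone's actual cost model; the only thing that differs between the two runs is the assumed wafer price on the newer node.

Code:
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Common gross-die approximation for a round wafer.
    radius = wafer_diameter_mm / 2
    return (math.pi * radius**2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_mm2):
    # Simple Poisson defect model: bigger dies and dirtier processes yield worse.
    return math.exp(-defects_per_mm2 * die_area_mm2)

def cost_per_good_die(wafer_cost, die_area_mm2, defects_per_mm2):
    good_dies = dies_per_wafer(die_area_mm2) * poisson_yield(die_area_mm2, defects_per_mm2)
    return wafer_cost / good_dies

# Same hypothetical 420 mm^2 die, same yield; only the wafer price differs.
print(cost_per_good_die(wafer_cost=5000, die_area_mm2=420, defects_per_mm2=0.002))  # mature node
print(cost_per_good_die(wafer_cost=6500, die_area_mm2=420, defects_per_mm2=0.002))  # newer node, ~30% pricier per die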
 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
Originally posted by: Nelsieus
Originally posted by: Extelleron
Originally posted by: AmdInside
I am really curious to know what ATI's margin is for the Radeon 2900XT. Since the X800 series, ATI's profit margins have been terrible, worse than the GeForce FX days for NVIDIA. I sure wish someone could interview AMD and find out why they had so much trouble taping out the Radeon 2900XT. Wasn't it supposed to originally launch in November of last year?

Well, the HD 2900 has ~700 million transistors and the G80 has ~690 million, and since the 2900 is on a smaller (80nm) process it should, at least in theory, be cheaper for ATI to produce.

Mid-range and low-end R600 parts will be cheaper for ATI to produce because they are on a 65nm process, while nVidia's 8500/8600 are on an 80nm process.

Actually, I believe the 2900XT is roughly 420mm^2 on 80nm, compared to 484mm^2 for G80 on 90nm. In other words, G80 is only about 15% bigger than R600, yet retails for about 27% more (with roughly similar performance). Therefore, it's quite likely that despite the die size advantage of R600, G80 is driving higher margins (which nVidia even stated in their latest conference call).

ATI has traditionally had lower margins in its GPU business over the past few years. I believe their corporate margin last fall (prior to AMD's buyout) was in the low 30s. Even though R600's margins are most likely higher than this, they aren't high enough to bring up an already low corporate margin average. For ATI, that might not seem like a big deal. But for AMD, it's a little more problematic.

But I think there is some hope in regards to RV630. If ATI can get it out by the end of June with competitive performance against G84 (which shouldn't be hard), then that will definitely be a turn-around for them. Although I'm not sure how quickly nVidia might counter with their own die-shrink plans. At the moment, I haven't seen any news of such a launch spoiler.

Nelsieus

Not all G80 parts are selling for more than the HD 2900XT. 8800GTS 320/640 cards are still G80, just with units disabled... it doesn't cost nVidia any less to produce an 8800GTS core than an 8800GTX core. With nVidia selling 8800GTS cards with 320MB of memory for as low as ~$250, their profits can't be that great, especially considering the majority of G80s sold are likely the GTS models.
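Just to sanity-check the figures being quoted back and forth, a quick bit of Python arithmetic. The die areas are the ones cited above; the prices are hypothetical round numbers chosen only to roughly match the ~27% retail gap and the ~$250 GTS 320 mentioned in this thread, not actual ASPs.

Code:
# Die areas quoted above; prices below are hypothetical illustrations.
r600_area, g80_area = 420.0, 484.0                    # mm^2, 80nm vs 90nm
hd2900xt, gtx, gts320 = 430.0, 545.0, 250.0           # USD, made up for the example

print(f"G80 die is {100 * (g80_area / r600_area - 1):.0f}% bigger than R600")     # ~15%
print(f"GTX sells for {100 * (gtx / hd2900xt - 1):.0f}% more than the 2900XT")    # ~27%

# Extelleron's point: a GTS 320 uses the same 484 mm^2 die as a GTX,
# so the silicon cost is the same while the revenue per die is far lower.
print(f"A GTS 320 brings in about {100 * gts320 / gtx:.0f}% of a GTX's revenue per die")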
 

Nelsieus

Senior member
Mar 11, 2006
330
0
0
Originally posted by: Extelleron
Originally posted by: Nelsieus
Originally posted by: Extelleron
Originally posted by: AmdInside
I am really curious to know what ATI's margin is for the Radeon 2900XT. Since the X800 series, ATI's profit margins have been terrible, worse than the GeForce FX days for NVIDIA. I sure wish someone could interview AMD and find out why they had so much trouble taping out the Radeon 2900XT. Wasn't it supposed to originally launch in November of last year?

Well, the HD 2900 has ~700 million transistors and the G80 has ~690 million, and since the 2900 is on a smaller (80nm) process it should, at least in theory, be cheaper for ATI to produce.

Mid-range and low-end R600 parts will be cheaper for ATI to produce because they are on a 65nm process, while nVidia's 8500/8600 are on an 80nm process.

Actually, I believe the 2900XT is roughly 420mm^2 on 80nm, compared to 484mm^2 for G80 on 90nm. In other words, G80 is only about 15% bigger than R600, yet retails for about 27% more (with roughly similar performance). Therefore, it's quite likely that despite the die size advantage of R600, G80 is driving higher margins (which nVidia even stated in their latest conference call).

ATI has traditionally had lower margins in its GPU business over the past few years. I believe their corporate margin last fall (prior to AMD's buyout) was in the low 30s. Even though R600's margins are most likely higher than this, they aren't high enough to bring up an already low corporate margin average. For ATI, that might not seem like a big deal. But for AMD, it's a little more problematic.

But I think there is some hope in regards to RV630. If ATI can get it out by the end of June with competitive performance against G84 (which shouldn't be hard), then that will definitely be a turn-around for them. Although I'm not sure how quickly nVidia might counter with their own die-shrink plans. At the moment, I haven't seen any news of such a launch spoiler.

Nelsieus

Not all G80 parts are selling for more than the HD 2900XT. 8800GTS 320/640 cards are still G80, just with units disabled... it doesn't cost nVidia any less to produce an 8800GTS core than an 8800GTX core. With nVidia selling 8800GTS cards with 320MB of memory for as low as ~$250, their profits can't be that great, especially considering the majority of G80s sold are likely the GTS models.

Well, first off, you have to realize the GeForce 8800GTS has a 320-bit memory interface compared to the 384-bit of the GTX. Furthermore, its VRAM is cut down (especially the 320MB of the GTS vs. the 768MB of the GTX). Volume is much higher in this segment (sub-$300), as well.

According to their conference call last week, corporate margins are at all-time-high levels for nVidia (45%), and they also stated that GF8800 margins were high.

As mentioned above, the more mature the process and the higher the yields, the cheaper it is to produce. I've seen no indication that either of these factors is less than optimal right now for the G80 line.
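To make the bill-of-materials point concrete, a small sketch; every cost figure here is invented purely for illustration and is not nVidia's actual BOM. It only shows the mechanics: the GTS 320 carries the same G80 die but fewer and smaller memory chips and a cheaper board, so the cost gap between the boards is much smaller than the price gap. How the margins actually compare depends entirely on the real numbers, which nobody in this thread knows.

Code:
def gross_margin(price, cost):
    # Standard gross-margin arithmetic: (revenue - cost of goods) / revenue.
    return (price - cost) / price

# Hypothetical per-board costs: die + memory chips + board/cooler/etc.
gtx_cost    = 120 + 12 * 8 + 60    # 12x 512Mbit GDDR3 on a 384-bit board
gts320_cost = 120 + 10 * 5 + 50    # same die, 10x 256Mbit GDDR3 on a 320-bit board

for name, price, cost in [("8800GTX", 545, gtx_cost), ("8800GTS 320MB", 250, gts320_cost)]:
    print(f"{name}: cost ~${cost}, gross margin ~{100 * gross_margin(price, cost):.0f}%")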

Nelsieus
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: Fox5
Originally posted by: vaccarjm
What could we do as consumers to keep Nvidia producing faster cards every 6 months? They now have control over when they wanna release the next-gen card, and hence can milk the market with the 8800GTX. All those peeps waiting on the ATI card will now be buying the 8800GTX that came out last year.

You can buy their newest card in droves when it first comes out, then everyone needs to stop buying 6 months afterwards, then buy again when a new product comes out, but only if it meets your criteria.

Originally posted by: Pugnate
Originally posted by: Amuro
Originally posted by: OneOfTheseDays
ATI and Nvidia have been neck-and-neck ever since 3D graphics became a reality on the PC.
I thought nVidia had been kicking ATi's ass for years until ATi released the 9700Pro?

ATi started with all the ass kicking until 3Dfx and Nvidia were born.

My favorite ATi card though was the 8500. It provided the same performance as the competition at $100 less. I loved them for it.

The time they truly owned was the 9700. I can remember the PR guy actually being excited at the unveiling.

He said something along the lines of, "It is my job to tell you we are offering a revolutionary product. But this time I am not lying."

ATI hadn't really been in the 3D space for very long before 3dfx and nvidia showed up, and I don't think they were particularly kickass.
ATI always had overpromised and underperforming parts (overly ambitious engineering) until the 9700 Pro, which was an outside development anyway.
Their Voodoo killers were all duds. Their GeForce killer couldn't compete. Their GeForce 2 killer fell by the wayside very quickly. Their GeForce 3 killer ended up trying to take on the GeForce 4. They always released 6 months too late, until the 9700 Pro. It didn't take long for ATI's management to get ArtX's engineers onto the traditional ATI path.

You forgot that ATi kept its technology leadership for quite a while. The Radeon 7500 was the first chip with hybrid DX8.0 support (albeit useless), and ATi was also the first company to implement two GPUs on the same PCB. Then the 8500 was the first card with DX8.1 support, and no GeForce 3 or GeForce 4 was more technically advanced than it. Even though its performance was lacking, it was able to outperform the GeForce 3 by a small margin and barely stay competitive with the GeForce 4 Ti 4200; their driver support at the time was laughable.

Even though the R300 was developed by ArtX and finished by ATi, ATi went on to develop the R520, which is far more advanced than any GeForce 6/7 series card, and now the R600, which is a bit more advanced still, with some nice features like tessellation (it looks like TruForm is back, eh? But it isn't) and more complex shader units that can execute up to 5 operations each, something that is a bit hard for the software to exploit. But thanks to both companies for staying competitive for so long, unlike some others (XGI, anyone?).
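On the "5 operations per shader unit" point: here is a toy sketch of why filling a 5-wide (VLIW-style) unit leans heavily on the compiler. The ops, the greedy scheduler and the 5-slot bundle width are purely illustrative and have nothing to do with ATI's real shader compiler; the point is just that dependent operations can't share a bundle, so slots often sit idle.

Code:
# Toy greedy packing of scalar ops into 5-wide bundles.
# Each op is (name, *inputs); an op may only issue once its inputs
# were produced in an earlier bundle.
ops = [("a",), ("b",), ("c", "a", "b"), ("d", "a"), ("e", "c"), ("f", "d"), ("g",)]

bundles, done, pending = [], set(), list(ops)
while pending:
    bundle, produced = [], set()
    for op in list(pending):
        name, *deps = op
        if len(bundle) < 5 and all(d in done for d in deps):
            bundle.append(name)
            produced.add(name)
            pending.remove(op)
    done |= produced
    bundles.append(bundle)

print(bundles)   # [['a', 'b', 'g'], ['c', 'd'], ['e', 'f']] -- more than half of the 15 slots go unused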
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
Originally posted by: evolucion8
Originally posted by: Fox5
Originally posted by: vaccarjm
What could we do as consumers to keep Nvidia producing faster cards every 6 months? They now have control over when they wanna release the next-gen card, and hence can milk the market with the 8800GTX. All those peeps waiting on the ATI card will now be buying the 8800GTX that came out last year.

You can buy their newest card in droves when it first comes out, then everyone needs to stop buying 6 months afterwards, then buy again when a new product comes out, but only if it meets your criteria.

Originally posted by: Pugnate
Originally posted by: Amuro
Originally posted by: OneOfTheseDays
ATI and Nvidia have been neck-and-neck ever since 3D graphics became a reality on the PC.
I thought nVidia had been kicking ATi's ass for years until ATi released the 9700Pro?

ATi started with all the ass kicking until 3Dfx and Nvidia were born.

My favorite ATi card though was the 8500. It provided the same performance as the competition at $100 less. I loved them for it.

The time they truly owned was the 9700. I can remember the PR guy actually being excited at the unveiling.

He said something along the lines of, "It is my job to tell you we are offering a revolutionary product. But this time I am not lying."

ATI hadn't really been in the 3D space for very long before 3dfx and nvidia showed up, and I don't think they were particularly kickass.
ATI always had overpromised and underperforming parts (overly ambitious engineering) until the 9700 Pro, which was an outside development anyway.
Their Voodoo killers were all duds. Their GeForce killer couldn't compete. Their GeForce 2 killer fell by the wayside very quickly. Their GeForce 3 killer ended up trying to take on the GeForce 4. They always released 6 months too late, until the 9700 Pro. It didn't take long for ATI's management to get ArtX's engineers onto the traditional ATI path.

You forgot that ATi kept its technology leadership for quite a while. The Radeon 7500 was the first chip with hybrid DX8.0 support (albeit useless), and ATi was also the first company to implement two GPUs on the same PCB. Then the 8500 was the first card with DX8.1 support, and no GeForce 3 or GeForce 4 was more technically advanced than it. Even though its performance was lacking, it was able to outperform the GeForce 3 by a small margin and barely stay competitive with the GeForce 4 Ti 4200; their driver support at the time was laughable.

Even though the R300 was developed by ArtX and finished by ATi, ATi went on to develop the R520, which is far more advanced than any GeForce 6/7 series card, and now the R600, which is a bit more advanced still, with some nice features like tessellation (it looks like TruForm is back, eh? But it isn't) and more complex shader units that can execute up to 5 operations each, something that is a bit hard for the software to exploit. But thanks to both companies for staying competitive for so long, unlike some others (XGI, anyone?).

I think it's still going to come down to which card is faster with actual DX10 games. When we see Crysis or UT3 running in DX10, with benchmarks, then I will decide which card is better. By then both sides should have had ample time to put together a good driver.
 

40sTheme

Golden Member
Sep 24, 2006
1,607
0
0
Originally posted by: Cookie Monster
They haven't lost the war.

They may have lost the battle, but they haven't lost the war just yet. Look at how nVIDIA survived the FX era. (although they lost an arm and a leg in doing so)

I thought the OP was on vacation..

Exactly. Wait until games actually take advantage of the processing power of the 2900XT. There's a lot of potential there, but nothing is utilizing it thus far.
I'm holding on to my 7950GT until I see how things progress with nV/ATi.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: 40sTheme
Originally posted by: Cookie Monster
They haven't lost the war.

They may have lost the battle, but they haven't lost the war just yet. Look at how nVIDIA survived the FX era. (although they lost an arm and a leg in doing so)

I thought the OP was on vacation..

Exactly. Wait until games actually take advantage of the processing power of the 2900XT. There's a lot of potential there, but nothing is utilizing it thus far.
I'm holding on to my 7950GT until I see how things progress with nV/ATi.

I will do the same with my X1950XT; both of our cards should last us a while, maybe even longer than before, because when DX9 debuted, the only things needed to use it were a card and the DX9 runtime. DX10, on the other hand, requires the card, the operating system, and a good machine that can run Vista properly with a whopping 2GB of RAM.

So DX10 adoption will be slower than DX9 adoption. After all, DX9 brought brand new things to the table like floating-point data, pixel shaders and HDR, while DX10 also uses floating-point data and doubles the HDR precision, which is of questionable use considering that 64-bit FP blending can already offer more than 65,536 values, of which only about 30,000 can be distinguished by the human eye. http://www.digit-life.com/articles2/video/r520-part6.html

"Are 16 bits sufficient? Here is a comparison for non-believers: the dynamic range of the 16-bit integer format is 65535, while the dynamic range of a human eye, adapted to the environment, is about 30000. A camera's or monitor's dynamic range is much narrower. In those rare cases, when the dynamic range of the integer format is insufficient, FP16 filtering can be emulated in a shader. But that's an exception to the rule."

DX10 will mainly bring the performance to make titles look better; in simple terms, DX10 will make games playable with the graphics of DX9 tech demos like Ruby, something very hard to do with DX9, which struggles even with titles like S.T.A.L.K.E.R. that run slowly and still don't come close to a DX9 tech demo.
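As a quick check on the dynamic-range numbers in that quote, a small numpy snippet comparing the 16-bit integer range against the FP16 format used for 64-bit (16 bits per channel) floating-point blending; the ~30,000 figure for the eye is simply the one cited in the article above.

Code:
import numpy as np

int16_range = 2**16 - 1                 # 65535, the integer dynamic range quoted above
fp16 = np.finfo(np.float16)

print(int16_range)                                  # 65535
print(float(fp16.max), float(fp16.tiny))            # ~65504 and ~6.1e-05 (smallest normal FP16)
print(float(fp16.max) / float(fp16.tiny))           # ~1.07e9: FP16 trades step count for range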
 

yacoub

Golden Member
May 24, 2005
1,991
14
81
Too late guys AMD lost the war, didn't you hear? It's over. They're giving their technology to Matrox and packing it in. Severance pay is an R600 with beta drivers and a kiss from Hector Ruiz.
 

Matt2

Diamond Member
Jul 28, 2001
4,762
0
0
Originally posted by: Nelsieus
Originally posted by: Extelleron
Originally posted by: Nelsieus
Originally posted by: Extelleron
Originally posted by: AmdInside
I am really curious to know what ATI's margin is for the Radeon 2900XT. Since the X800 series, ATI's profit margins have been terrible, worse than the GeForce FX days for NVIDIA. I sure wish someone could interview AMD and find out why they had so much trouble taping out the Radeon 2900XT. Wasn't it supposed to originally launch in November of last year?

Well, the HD 2900 has ~700 million transistors and the G80 has ~690 million, and since the 2900 is on a smaller (80nm) process it should, at least in theory, be cheaper for ATI to produce.

Mid-range and low-end R600 parts will be cheaper for ATI to produce because they are on a 65nm process, while nVidia's 8500/8600 are on an 80nm process.

Actually, I believe the 2900XT is roughly 420mm^2 on 80nm, compared to 484mm^2 for G80 on 90nm. In other words, G80 is only about 15% bigger than R600, yet retails for about 27% more (with roughly similar performance). Therefore, it's quite likely that despite the die size advantage of R600, G80 is driving higher margins (which nVidia even stated in their latest conference call).

ATI has traditionally had lower margins in its GPU business over the past few years. I believe their corporate margin last fall (prior to AMD's buyout) was in the low 30s. Even though R600's margins are most likely higher than this, they aren't high enough to bring up an already low corporate margin average. For ATI, that might not seem like a big deal. But for AMD, it's a little more problematic.

But I think there is some hope in regards to RV630. If ATI can get it out by the end of June with competitive performance against G84 (which shouldn't be hard), then that will definitely be a turn-around for them. Although I'm not sure how quickly nVidia might counter with their own die-shrink plans. At the moment, I haven't seen any news of such a launch spoiler.

Nelsieus

Not all G80 parts are selling for more than the HD 2900XT. 8800GTS 320/640 cards are still G80, just with units disabled... it doesn't cost nVidia any less to produce an 8800GTS core than an 8800GTX core. With nVidia selling 8800GTS cards with 320MB of memory for as low as ~$250, their profits can't be that great, especially considering the majority of G80s sold are likely the GTS models.

Well, first off, you have to realize the GeForce 8800GTS has a 320-bit memory interface compared to the 384-bit of the GTX. Furthermore, its VRAM is cut down (especially the 320MB of the GTS vs. the 768MB of the GTX). Volume is much higher in this segment (sub-$300), as well.

According to their conference call last week, corporate margins are at all-time-high levels for nVidia (45%), and they also stated that GF8800 margins were high.

As mentioned above, the more mature the process and the higher the yields, the cheaper it is to produce. I've seen no indication that either of these factors is less than optimal right now for the G80 line.

Nelsieus

Not to mention that the GTS cores are 8800GTX cores that couldn't make the cut. They're essentially making money off otherwise defective cores.
 

notanotheracct

Senior member
Aug 2, 2005
299
0
0
Originally posted by: yacoub
Too late guys AMD lost the war, didn't you hear? It's over. They're giving their technology to Matrox and packing it in. Severance pay is an R600 with beta drivers and a kiss from Hector Ruiz.

open mouth, with tongue, or cheek?