[SemiAccurate] Nvidia's Fermi GTX480 is broken and unfixable


Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
www.facebook.com
The fact that it has 48 ROPs and could be clocked at 750 MHz means it will be too expensive for the consumer and for nvidia to stay in business. It's also probably going to be hot as hell. I used to be an nvidia fanboy, but ATi is getting better all the time.
 

Mr. Pedantic

Diamond Member
Feb 14, 2010
5,027
0
76
I don't think AMD is getting progressively better; they still have a lot of issues to work on with driver support. It's just that NVidia's doing such a bad job lately that it feels like AMD is getting better because there's nobody else to compare them to.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Mr. Pedantic said:
I don't think AMD is getting progressively better; they still have a lot of issues to work on with driver support. It's just that NVidia's doing such a bad job lately that it feels like AMD is getting better because there's nobody else to compare them to.

While I disagree with you on the driver comment, I do agree that a big part of the reason AMD looks so good is Nvidia's poor showing as of late. It seems AMD has their foot firmly on the gas whereas Nvidia has stalled out. AMD has gotten their entire announced Radeon 5xxx lineup launched before Nvidia has even gotten out a benchmark of a Fermi-based part. When you consider that AMD and Nvidia both launched their first last-gen cards within a week or two of each other, it's kind of amazing how far ahead AMD has gotten.

Really, you have great execution on AMD's part coupled with delays and what appears to be a lack of execution on Nvidia's part. I have a feeling Nvidia aimed too high. If I remember correctly, Nvidia didn't like introducing a new architecture on a new process technology for exactly this reason... they had problems with that in the past. I guess they figured they could pull it off. But if Fermi comes out and ends up being a monster, then I think most of us will forget about the delay compared to the 58xx cards. Then again, by the time it launches it should be faster, given the larger amount of silicon and the fact that it's launching so much later. Depending on when Fermi and AMD's refresh come out, you could almost consider Fermi not even in the same gen as the 58xx parts.
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Sadly, I'm starting to think Charlie is on to something here.

Even if they couldn't launch until March, CES would have been the time to show off some actual benchmarks. I'm thinking the reason they didn't is that they didn't have final clocks yet. The lack of any specific clocks/benches this late in the game does not bode well for Fermi, IMO.
 

ravensharpless

Junior Member
Feb 17, 2010
1
0
0
I really hope that article isn't accurate.

even if it is accurate it wont stop me from buying a gtx480.
Im used to gmod were when i make something it has to break to work. And every once and a wile it crashes the game. If anything i would rather have a graphics card that pw0ns everything with a horable crunching squiling enging sound (so to speak) then have a clean running ATI card with crapy drivers.(no offence to ATI fans. No rely, i respect ATI. I would never want to see them go and i would never want to see nvidia owed by intel)
 

ShadowOfMyself

Diamond Member
Jun 22, 2006
4,227
2
0
I don't get the Charlie bashing... Sure, he is biased and spells too much doom and gloom for Nvidia when, really, it would take a whole lot more for Nvidia to go under.

But that is a well-written article with factual points; you can't just call him a troll and ignore it... To any of the people flaming Charlie: has anyone said exactly why you think he's pulling stuff out of his ass?

I will keep my wait-and-see mindset instead of just bashing the guy when he might be completely right.
 

extra

Golden Member
Dec 18, 1999
1,947
7
81
Ugh, hopefully not true. Take it with a grain of salt. As someone who buys whatever card I perceive to be the best for my needs, it's going to be really annoying if prices continue to suck (I have a 5770 now).
 

blanketyblank

Golden Member
Jan 23, 2007
1,149
0
0
If it's true and they really can only release 10,000 cards before moving on to a different design, I imagine support is going to suck for a while, since there won't be much motivation to improve an EOL model. Hopefully he's completely wrong, but you'd think something would have been heard by now if they are releasing in March.

Then again, is March even certain? They may be able to delay a bit longer to get decent numbers of units.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106

That was a good read. Thanks.

I thought the following part was interesting (even though it has nothing to do with pooled memory):

From the article said:
Low price for 4850x2 must be achieved thru some other cuts in production process besides usage of cheaper GDDR memory. We expected that GPUs for 4850x2 cards are taken from edges wafer while those from center of wafer are used for faster 4870x2 series. Mr. Marinkovic denied this and pointed us to some other ways of cutting production cost: “Most of GPUs used for both cards are the same so final work frequency depends on rest of elements used on circuit board. In production process of 4870x2 series we use top quality (pricier) components that allow higher GPU frequencies. Thanks to cheaper components used in manufacturing process of 4850x2 cards ATi achieved lower price and because of this GPU on these cards cannot achieve working frequencies of 4870x2. Beside GPU frequency, bandwidth and memory frequency are key factors in determination of final performances for graphics card. When you sum up all this you get cheaper but slower product”.

So it sounds like "bin quality" between HD4850x2 and HD4870x2 didn't differ. It was differences in circuit board quality/components that allowed the higher clock speeds.
 

Mr. Pedantic

Diamond Member
Feb 14, 2010
5,027
0
76
SlowSpyder said:
While I disagree with you on the driver comment

AMD was let off the hook somewhat with the poor drivers because of TSMC's botch-up of 40nm. I read the articles about Catalyst 10.2 and 10.3; basically everything in there (which, by the way, is being hailed as an awesome move on AMD's part and as having justifiably taken this long to implement) should have been in place from the first day consumers could get their hands on Evergreen chips, or at the latest, in the Catalyst update the month after. People spending over $800 on graphics cards should not be 'rewarded' by not being able to run Eyefinity with their 5870s in CF. It is unacceptable for it to have taken this long. And that's not to mention the gray-screen issues and flickering encountered by a lot of people.

Yes, their execution with regards to hardware was superb; what was recently revealed in the Anandtech RV870 article highlights that. However, the hardware is really nothing without the software, and I wish that AMD would spend just a little more on making sure that everything they say will be ready on launch day is ready.

SlowSpyder said:
Really, you have great execution on AMD's part coupled with delays and what appears to be a lack of execution on Nvidia's part. I have a feeling Nvidia aimed too high. If I remember correctly, Nvidia didn't like introducing a new architecture on a new process technology for exactly this reason... they had problems with that in the past. I guess they figured they could pull it off. But if Fermi comes out and ends up being a monster, then I think most of us will forget about the delay compared to the 58xx cards. Then again, by the time it launches it should be faster, given the larger amount of silicon and the fact that it's launching so much later. Depending on when Fermi and AMD's refresh come out, you could almost consider Fermi not even in the same gen as the 58xx parts.
I have a feeling that NVidia has not aimed too high. Rather, they've gotten it completely wrong. As AMD has effectively shown, it is no longer enough for a graphics card manufacturer to say they have the fastest card, no matter how they get there, and expect to reap the substantial and disproportionate rewards of that position. Now that Crossfire and SLI scaling is so close to theoretical perfection, the single-GPU halo does not matter as much as performance/watt and performance/dollar, because people can always just switch to two, three, or even four (admittedly rather uncommon) smaller, cooler, more conservative GPUs rather than relying on one monolithic monster for the same performance.
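To put rough numbers on that trade-off (everything below is hypothetical, a sketch of the perf/dollar and perf/watt argument rather than real card figures):

```python
# Illustrative only: hypothetical performance, price, and power figures
# showing why near-linear CrossFire/SLI scaling undercuts one big GPU.

def effective_perf(single_gpu_perf, n_gpus, scaling=0.9):
    """Total performance of n GPUs, assuming each extra GPU adds
    `scaling` of its standalone performance."""
    return single_gpu_perf * (1 + scaling * (n_gpus - 1))

# Made-up cards: perf in arbitrary units, price in $, power in W.
big_gpu   = {"perf": 100, "price": 600, "power": 280}
small_gpu = {"perf": 65,  "price": 300, "power": 160}

duo_perf  = effective_perf(small_gpu["perf"], 2)   # 123.5 units
duo_price = 2 * small_gpu["price"]                 # $600
duo_power = 2 * small_gpu["power"]                 # 320 W

print(f"big GPU : {big_gpu['perf'] / big_gpu['price']:.3f} perf/$, "
      f"{big_gpu['perf'] / big_gpu['power']:.3f} perf/W")
print(f"2x small: {duo_perf / duo_price:.3f} perf/$, "
      f"{duo_perf / duo_power:.3f} perf/W")
```

Even at 90% scaling, the made-up pair wins on both metrics at the same total price, which is exactly why the halo part has to be dramatically faster, not just somewhat faster.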
 

MagickMan

Diamond Member
Aug 11, 2008
7,460
3
76
So NVidia won't answer them just because Charlie asked them? What about if Anand asked them? Editors from Tom's Hardware, Bit-tech, [H]ard|OCP, B3D, etc.? What about if I asked them? Just because Charlie asked them does not mean they are any less worthy questions, nor does it mean that the public doesn't need to know the answers.

They won't answer them because he's an asshat.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Mr. Pedantic said:
AMD was let off the hook somewhat with the poor drivers because of TSMC's botch-up of 40nm. I read the articles about Catalyst 10.2 and 10.3; basically everything in there (which, by the way, is being hailed as an awesome move on AMD's part and as having justifiably taken this long to implement) should have been in place from the first day consumers could get their hands on Evergreen chips, or at the latest, in the Catalyst update the month after. People spending over $800 on graphics cards should not be 'rewarded' by not being able to run Eyefinity with their 5870s in CF. It is unacceptable for it to have taken this long.

If you read the AnandTech RV870 article, it explains why this happened.

"Sunspot", the internal name for Eyefinity, was kept a complete secret up until a few weeks before release. The software guys at ATI didn't know about it till the last minute.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Mr. Pedantic said:
I have a feeling that NVidia has not aimed too high. Rather, they've gotten it completely wrong. As AMD has effectively shown, it is no longer enough for a graphics card manufacturer to say they have the fastest card, no matter how they get there, and expect to reap the substantial and disproportionate rewards of that position. Now that Crossfire and SLI scaling is so close to theoretical perfection, the single-GPU halo does not matter as much as performance/watt and performance/dollar, because people can always just switch to two, three, or even four (admittedly rather uncommon) smaller, cooler, more conservative GPUs rather than relying on one monolithic monster for the same performance.

You make a good point about Crossfire/SLI scaling, but lots of experienced people say they shy away from these solutions because of "microstutter".
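For anyone who hasn't run into the term: microstutter is uneven frame pacing under alternate frame rendering (AFR), where two GPUs take turns on frames. A minimal sketch with made-up frame times shows why average FPS hides it:

```python
# Illustrative AFR microstutter: frames arrive in short/long pairs,
# so the average FPS matches a single GPU while perceived smoothness
# tracks the longest gaps between frames.

steady_ms = [16.7] * 8          # one fast GPU: evenly spaced ~60 FPS
afr_ms    = [8.0, 25.4] * 4     # two GPUs in AFR: same ~60 FPS average

for name, times in [("steady", steady_ms), ("AFR", afr_ms)]:
    avg_fps = 1000 * len(times) / sum(times)
    print(f"{name:6s}: avg {avg_fps:4.1f} FPS, worst gap {max(times):.1f} ms")
```

Both sequences average about 60 FPS, but the AFR one repeatedly shows a 25 ms gap, which is what people perceive as stutter.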
 

sandorski

No Lifer
Oct 10, 1999
70,874
6,409
126
cbn said:
If you read the AnandTech RV870 article, it explains why this happened.

"Sunspot", the internal name for Eyefinity, was kept a complete secret up until a few weeks before release. The software guys at ATI didn't know about it till the last minute.

I'd also add that certain features (like Eyefinity in CF) were announced as future implementations right from the start. Anyone should/could have known that certain features were yet to come.
 
Dec 30, 2004
12,553
2
76
As usual, Charlie probably vastly overstates the problems, but he points to some valid sources, and Anand said similar things in his RV870 article, so I fear there could be some valid points in his rambling :(

We better all hope for the best if we want cheaper chips in the near future..

Eh, this article looks like a complete rehash of Anand's. All over. It's like he read it and then decided, "OK, let me spin this in the worst possible light for Nvidia."

Do I think he is wrong? No, for two main reasons: die size, and jumping onto 40nm with a brand new architecture (a brand new architecture is bad enough on its own). I think he's right, but that doesn't change that this article is a complete rehash of Anand's with a terribly doomsday slant.

When Nvidia does get their card fixed, they're going to have one heck of an efficient, scalable design. Out-of-order execution on a GPU? Unheard of. From an engineering perspective it's a beauty. I'm confident they've got the design [logic] right, but they've probably got tons of problems thanks to TSMC's 40nm issues.

I'm betting on a paper launch. If the benches are lacking, I'm probably going to jump on a 5870 so that they can't raise the price on me. If the cards have great availability, stay in stock, and AMD's prices drop, I'll return the card and re-buy.
 
Dec 30, 2004
12,553
2
76
TSMC's process might not be the best (OK, it's certainly far from the best), but some blame must be laid on Nvidia.

Both ATI and Nvidia knew the process wasn't great. ATI engineered around it. Nvidia just complained about it.

http://www.semiconductor.net/article/438968-Nvidia_s_Chen_Calls_for_Zero_Via_Defects-full.php

You can't engineer around random defects, other than by making your chip smaller.
As SA pointed out, they can't exactly section off fewer than 32 shaders. With their design, it's not like Nvidia had a choice. And their design is great.
It's just a b**** to manufacture.
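That "smaller chip" point is just the textbook Poisson yield model. A minimal sketch, with die areas taken roughly from public figures and a defect density that is purely my assumption:

```python
import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Fraction of dies with zero random defects: Y = exp(-A * D0)."""
    return math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)

D0 = 0.5  # assumed defects/cm^2 for an immature 40nm process (illustrative)

for name, area_mm2 in [("Fermi-class (~530 mm^2)", 530),
                       ("Cypress-class (~334 mm^2)", 334)]:
    print(f"{name}: {poisson_yield(area_mm2, D0):.1%} defect-free dies")
```

At that assumed defect density the big die yields roughly 7% clean parts against roughly 19% for the smaller one, and salvage SKUs can only claw that back in the coarse 32-shader steps mentioned above.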
 
Dec 30, 2004
12,553
2
76
ShadowOfMyself said:
I don't get the Charlie bashing... Sure, he is biased and spells too much doom and gloom for Nvidia when, really, it would take a whole lot more for Nvidia to go under.

But that is a well-written article with factual points; you can't just call him a troll and ignore it... To any of the people flaming Charlie: has anyone said exactly why you think he's pulling stuff out of his ass?

I will keep my wait-and-see mindset instead of just bashing the guy when he might be completely right.

I get the Charlie bashing, but for completely different reasons. It reads like a complete rehash of Anand's article in several places, yet second-rate (when he started referring to things like "specifically, channel width")... I mean, LOL, he copied that RIGHT from Anand's piece.

Disgusting.

Like I said two posts ago, I think he's right, but this article sucks.
 
Dec 30, 2004
12,553
2
76
cbn said:
That was a good read. Thanks.

I thought the following part was interesting (even though it has nothing to do with pooled memory):



So it sounds like "bin quality" between HD4850x2 and HD4870x2 didn't differ. It was differences in circuit board quality/components that allowed the higher clock speeds.

Yeah, I'm thinking more expensive capacitors (more expensive dielectric) to filter the signals better. Better filtering = cleaner clocks/cleaner edges and less noise, so you can clock faster.
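A back-of-the-envelope version of that filtering argument, using the standard first-order buck-converter output ripple formula (all component values invented for illustration):

```python
# First-order buck output ripple: V ~= dI*ESR + dI / (8 * f_sw * C).
# Pricier, lower-ESR dielectrics leave less noise on the rail,
# which is part of what lets a board hold higher clocks stably.

def ripple_mv(di_a, esr_ohm, f_sw_hz, cap_f):
    return 1000 * (di_a * esr_ohm + di_a / (8 * f_sw_hz * cap_f))

dI, f_sw, C = 10.0, 300e3, 1e-3   # 10 A ripple, 300 kHz VRM, 1000 uF bank
for name, esr in [("cheap electrolytic, 30 mOhm", 0.030),
                  ("polymer low-ESR,     5 mOhm", 0.005)]:
    print(f"{name}: ~{ripple_mv(dI, esr, f_sw, C):.0f} mV ripple")
```

Same capacitance, but the low-ESR parts leave a small fraction of the ripple on the rail, purely from the better (more expensive) dielectric and construction.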
 
Dec 30, 2004
12,553
2
76
My last thought on all this: we haven't said a word about the driver behind this. Maybe because it's been delayed so long it won't be a problem (the software guys have had plenty of time, haha), but when you re-engineer your product you have to redesign the driver model as well.

Nvidia's software group generally seems pretty competent, but that doesn't mean we should completely ignore the possibility that they won't get all the performance possible out of it immediately.
 

shangshang

Senior member
May 17, 2008
830
0
0
ravensharpless said:
even if it is accurate it wont stop me from buying a gtx480.
Im used to gmod were when i make something it has to break to work. And every once and a wile it crashes the game. If anything i would rather have a graphics card that pw0ns everything with a horable crunching squiling enging sound (so to speak) then have a clean running ATI card with crapy drivers.(no offence to ATI fans. No rely, i respect ATI. I would never want to see them go and i would never want to see nvidia owed by intel)

You're interesting.
Many spelling mistakes.
Many grammar mistakes.
Recently joined AT, first post, jumped right into an "AMD vs NV" debate.
Me think you could be a lurker, but prolly a banned NV fanboy now reappearing with a new nick.
 

OneOfTheseDays

Diamond Member
Jan 15, 2000
7,052
0
0
From what I've heard about Nvidia from ex-employees the disaster known as Fermi shouldn't be a surprise to anyone.

Management @ Nvidia apparently thinks yelling at their employees and working them longer hours will fix their problems. There is no doubt there are bright engineers at Nvidia, but without the right management and proper execution strategy it's all for naught.

ATI now has a sizable advantage over Nvidia. They can plan their next-gen and have it ready by the time Nvidia actually comes out with Fermi.
 
Dec 30, 2004
12,553
2
76
OneOfTheseDays said:
From what I've heard about Nvidia from ex-employees the disaster known as Fermi shouldn't be a surprise to anyone.

Management @ Nvidia apparently thinks yelling at their employees and working them longer hours will fix their problems. There is no doubt there are bright engineers at Nvidia, but without the right management and proper execution strategy it's all for naught.

ATI now has a sizable advantage over Nvidia. They can plan their next-gen and have it ready by the time Nvidia actually comes out with Fermi.

Can you expound on the poor treatment of employees? I haven't heard a thing about that.

AMD works their employees hard too.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
So NVidia won't answer them just because Charlie asked them? What about if Anand asked them? Editors from Tom's Hardware, Bit-tech, [H]ard|OCP, B3D, etc.? What about if I asked them? Just because Charlie asked them does not mean they are any less worthy questions, nor does it mean that the public doesn't need to know the answers.

They will answer them because those are legitimate reporters. Charlie got blacklisted because he is a crazy, vindictive hack. Why should Nvidia have to deal with such a person? You get a restraining order. Just because the nutcase can come up with a valid question once in a blue moon doesn't mean you have to talk to him. If it is such a good question, someone more cordial will ask it, and you answer that person.

Nvidia employees are people, and they shouldn't have to deal with jerks like Charlie.