
NV: Everything under control. 512-Fermi may appear someday. Yields aren't under 20%

To say nV learned something about 40nm, while at the same time implying that somebody is wrong for saying the opposite, is a bit off.

Say what now?

The AT article backs up the claim that AMD learned something that nV didn't, unless there is another article that can tell us otherwise.

What you might mean to say is, "despite" what Nvidia learned, they went ahead and kept going with what they needed/wanted to do. Despite the challenges, despite the perceived difficulties. And I must say at the price of more heat and power consumption, they did extremely well on a very problematic 40nm process.
 
You are correct; the issue is with your understanding. You are clearly not qualified to discuss Fermi in regard to "design screw ups"

I know for a fact that Nvidia - including their engineers - is quite proud of GTX 480/470

GTX 480 absolutely kicks ass at 8xAA over the HD 5870 at high resolutions; it also has better overclocking headroom and scaling than the 5870 does.

What makes that a "disaster" - except perhaps to a noise princess on a strict energy budget?
😀

I just read your article about 8xAA performance. It looks like the performance gap stays about the same to me.
 
Nice fallacy.. but how does that relate to DIE SIZE?
(hint: it doesn't)

While this wasn't directed at me, anomalies in a cache location are easily fixed by avoiding that memory location in the cache (which is why chip designs have more cache memory than spec, to account for process anomalies). Anomalies in the logic are much harder to fix, since you lose far more by fusing off logical units (each is much larger than a memory cell in cache).

I am surprised by your comment, but I do enjoy the new functions in this forum to ignore users. I will have to see how inflammatory your future comments are to see if this function will be useful.
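
For what it's worth, here is a minimal, hypothetical sketch of that cache-versus-logic point. The area split, cluster count, and spare-row count are invented assumptions, not any real GPU's figures, but it shows why a defect that lands in cache can be absorbed by a spare row while a defect in logic costs a whole shader cluster:

import random

LOGIC_FRACTION = 0.6   # share of die area that is logic (assumption)
CLUSTERS = 16          # logic can only be disabled one whole cluster at a time
SPARE_ROWS = 4         # redundant cache rows available per die (assumption)

def classify_die(defects):
    """Return 'full', 'salvaged' (clusters fused off), or 'dead'."""
    spare = SPARE_ROWS
    dead_clusters = set()
    for _ in range(defects):
        if random.random() < LOGIC_FRACTION:
            # Defect hit logic: fuse off the whole cluster containing it.
            dead_clusters.add(random.randrange(CLUSTERS))
        else:
            # Defect hit cache: remap to a spare row if any are left.
            if spare > 0:
                spare -= 1
            else:
                return "dead"
    if not dead_clusters:
        return "full"
    # Salvageable as a cut-down SKU only if few clusters were lost.
    return "salvaged" if len(dead_clusters) <= 2 else "dead"

random.seed(0)
tallies = {"full": 0, "salvaged": 0, "dead": 0}
for _ in range(100_000):
    tallies[classify_die(defects=2)] += 1
print(tallies)   # cache hits are repaired for free; every logic hit costs a cluster

In this toy model a die only stays full-spec when every hit lands in cache; any hit in logic forces a cut-down part, which is exactly the asymmetry described above.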
 
What you might mean to say is, "despite" what Nvidia learned, they went ahead and kept going with what they needed/wanted to do. Despite the challenges, despite the perceived difficulties. And I must say at the price of more heat and power consumption, they did extremely well on a very problematic 40nm process.

I said what I meant to say and that wasn't it.

You seem to be pretty certain that Nv learned what AMD learned without anything to back you up.
 
While this wasn't directed at me, anomalies in a cache location are easily fixed by avoiding that memory location in the cache (which is why chip designs have more cache memory than spec, to account for process anomalies). Anomalies in the logic are much harder to fix, since you lose far more by fusing off logical units (each is much larger than a memory cell in cache).

I am surprised by your comment, but I do enjoy the new functions in this forum to ignore users. I will have to see how inflammatory your future comments are to see if this function will be useful.

Why don't you go right ahead?
Meanwhile, you can stop spinning my comment.
Because if you can't figure out the reticle limit in chip lithography, it won't be a loss for me... so go ahead :thumbsdown:
 
I just read your article about 8xAA performance. It looks like the performance gap stays about the same to me.

Look at the 2560x1600 resolution; page 20 is very convenient, as there is a summary chart.

Look at UT3, BattleForge, Dirt 2, Just Cause 2, Crysis, FC2 ..

... look at how what someone called a "disaster" has sure fixed the weakness of the GT200b series 😛

As to "what AMD learned", that article is now out of date and it only presented one side. You will have to look for an update on another site with Nvidia's story when it is time to tell it.
 
I said what I meant to say and that wasn't it.

You seem to be pretty certain that Nv learned what AMD learned without anything to back you up.
I'd sure hope not, otherwise it just makes them look even dumber for still shooting themselves in the foot.
 
Look at the 2560x1600 resolution; page 20 is very convenient, as there is a summary chart.

Look at UT3, BattleForge, Dirt 2, Just Cause 2, Crysis, FC2 ..

... look at how what someone called a "disaster" has sure fixed the weakness of the GT200b series 😛

As to "what AMD learned", that article is now out of date and it only presented one side. You will have to look for an update on another site with Nvidia's story when it is time to tell it.

Agreed that Fermi is far from a disaster, but until nV discloses their design decisions (I highly doubt that will happen) we really don't have any other point of view.

Keys is assuming a lot by saying nV learned the same stuff AMD learned, without much reason to think so.
 
You're confusing being "dumb" with being aggressive. AMD took the easy way out.

I wouldn't call it easy. I don't think you are in a position to say what they did with Evergreen was "easy".

Smart, maybe... they didn't have any delays, they launched with the specs they promised, and they have the fastest card atm. But that's not the discussion here.
 
All the arguing over who knew what when and what they did with it, is kind of dumb.
Because "here we are" with a very nice offering from NV despite all their difficulties and challenges.

You won't hear anything from me about it being a bad chip, but I would like to know what makes you think nV knew what AMD knew. I'm not really arguing with you.
 
While this wasn't directed at me, anomalies in a cache location are easily fixed by avoiding that memory location in the cache (which is why chip designs have more cache memory than spec, to account for process anomalies). Anomalies in the logic are much harder to fix, since you lose far more by fusing off logical units (each is much larger than a memory cell in cache).

I am surprised by your comment, but I do enjoy the new functions in this forum to ignore users. I will have to see how inflammatory your future comments are to see if this function will be useful.

Yeah, most of us are having a decent discussion, but his posts are always on the argumentative side.

But whatever, Fermi can grow even larger because Intel did it with a chip that is mostly cache. :/
 
While this wasn't directed at me, anomalies in a cache location are easily fixed by avoiding that memory location in the cache (which is why chip designs have more cache memory than spec, to account for process anomalies). Anomalies in the logic are much harder to fix, since you lose far more by fusing off logical units (each is much larger than a memory cell in cache).

I am surprised by your comment, but I do enjoy the new functions in this forum to ignore users. I will have to see how inflammatory your future comments are to see if this function will be useful.

There's a function to ignore users?
 
I couldn't resist replying; are you guys discussing ljonberg?

I have him tagged as a troll, just look at his post history. Can't see what he writes, so I assume the new functions work.
 
Agreed that fermi is far from a disaster. but until nV disclose their design decisions (Highly doubt that will happen) we really don't have any other point of view.

Keys is assuming alot by saying nV learned the same stuff AMD learned without much reason to think so.


Well, if you really look very carefully, you can see their design decisions
- you can see Nvidia's very public disappointment with the vias at TSMC and later on with the yields (when Jensen went to Taiwan)

You can see that they are heading for a GPU that can *also* do amazing graphics - they are not interested in "just video cards" 😛

We DO know their design choices; I learned beginning with Nvision08, continuing on through their GTC last year and also at CES. It is very public if you pay careful attention.

AMD is heading in another direction but staying for now with traditional video cards. Next year is Fusion; Anand covered that pretty well in his article.

Nvidia's "Fermi story" won't be fully written until they bring out their entire GF100 lineup and even then it will only cover "graphics performance in gaming"
- we have yet to see their Fermi Quadro and Tesla lineup
 
You are correct; the issue is with your understanding. You are clearly not qualified to discuss Fermi in regard to "design screw ups"

Says who...? While I'm certainly not an ASIC design lead, it seems I already know more about it than a lot of you funny NV-defenders. 🙂

I know for a fact that Nvidia - including their engineers - is quite proud of GTX 480/470
You mean all those overworked foreign guys with heavy accents (FWIW, like me)? They will always be proud, I virtually guarantee you - sans me, mind you - as Cali is pretty much Shangri-La compared to Bangalore... until they can leave feedback anonymously, of course.
And the higher-ups a reviewer like you likely gets in touch with will never admit any defeat. Never ever. It simply does not exist in Nvidia's culture, which is purely based on arrogance and silly, childish chest-thumping (see GD feedback about incompetent mid- and high-level management).

GTX 480 absolutely kicks ass at 8xAA over the HD 5870 at high resolutions; it also has better overclocking headroom and scaling than the 5870 does.
Better OC headroom? 😱
Scaling? 😱

What did you just say, who's not qualified to speak about what...?

What makes that a "disaster" - except perhaps to a noise princess on a strict energy budget?
😀
As someone whose words I think you absolutely trust put it eloquently a little bit earlier: "the issue is with your understanding." 🙂
 
*Where* is the ignore feature for this forum?
-- there is some utter nonsense that I do not wish to see or reply to.
 
And if AMD tried to make a 400+ mm^2 part, it would be plagued by the same issues Nvidia has, since they use the same process... only Nvidia has experience working with such issues, having built monolithic dies for years.

This has come up a couple of times.

ATI produced the mid-range 4770 on 40nm first and ran into exactly the same issues Nvidia ran into. The 58xx series just wasn't ATI's first time at the 40nm TSMC rodeo.
 
Well, if you really look very carefully, you can see their design decisions
- you can see Nvidia's very public disappointment with the vias at TSMC and later on with the yields (when Jensen went to Taiwan)

You can see that they are heading for a GPU that can *also* do amazing graphics - they are not interested in "just video cards" 😛

We DO know their design choices; I learned beginning with Nvision08, continuing on through their GTC last year and also at CES. It is very public if you pay careful attention.

AMD is heading in another direction but staying for now with traditional video cards. Next year is Fusion; Anand covered that pretty well in his article.

Nvidia's "Fermi story" won't be fully written until they bring out their entire GF100 lineup and even then it will only cover "graphics performance in gaming"
- we have yet to see their Fermi Quadro and Tesla lineup

You guys can do all the pep talk you want, but facts are facts: it's a very expensive 3B-transistor monster chip with literally ZERO OC HEADROOM in its current form, and yet it's only ~5% faster than its competitor, which OCs like crazy and costs $100 less.
In anyone's book outside of NV, it's a failure.
 
Better OC headroom? 😱
Scaling? 😱

What did you just say, who's not qualified to speak about what...?
Every review I have read says that Fermi scaling is pretty damn good and it has headroom as long as you're willing to cool it.
 
This has come up a couple of times.

ATI produced the mid-range 4770 on 40nm first and ran into exactly the same issues Nvidia ran into. The 58xx series just wasn't ATI's first time at the 40nm TSMC rodeo.

Exactly. Anand wrote an article about it, Charlie wrote a piece about it etc.
Yet people still spin it like NV didn't do anything dumb and AMD wasn't much smarter... I thought Anand was quite clear on this: http://www.anandtech.com/show/2937/9

For me at least, it sounds pretty clear: NV simply ran into the same problem ATI did with the 4770, but for NV it was the new flagship, an order of magnitude more advanced, 3B-transistor giant chip, so the problems were also much bigger and so is the backlash.
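
To put a rough number on "the problems were also much bigger": a simple Poisson yield model, Y = exp(-D * A), shows how the same defect density hits a large die far harder than a small one. The defect density and die areas below are illustrative guesses on my part, not published TSMC or vendor figures:

from math import exp

defect_density = 0.5   # defects per cm^2 (assumed, for illustration only)
dies = {
    "small mid-range die (~140 mm^2)": 1.40,   # area in cm^2
    "large flagship die (~530 mm^2)": 5.30,
}

for name, area_cm2 in dies.items():
    yield_frac = exp(-defect_density * area_cm2)
    print(f"{name}: ~{yield_frac:.0%} of dies come out defect-free")

With those assumed inputs the small die comes out around 50% defect-free and the big one around 7%, which is why redundancy and fusing off units matter so much more for a flagship-sized part.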
 
Every review I have read says that Fermi scaling is pretty damn good and it has headroom as long as you're willing to cool it.

He said compared to the 5870, or at least I took it as such based on the conversation.
Guys, we are talking about a competitive product; let's stop pretending it exists in its own reality. People won't put liquid nitrogen on their VGAs to OC them when the other card can be OC'd with the stock cooler. When compared to Cypress, Fermi has very little OC room, and the sheer size of the chip is probably close to what TSMC is willing to manufacture (sans the case where NV is willing to sell chips at huge losses 🙂).
 