
Charlie's thoughts on the fermi derivative parts

Dark4ng3l

Diamond Member
http://www.semiaccurate.com/2010/04/05/new-nvidia-code-names-pop/

Considering how close everything he said about Fermi ended up being, I think this at least warrants some discussion.

That fermi derivatives would be too big to compete effectively with Juniper is something I pointed out in my thread that was locked for no reason a couple of days ago.

GT200 derivatives never materialized and nvidia just used G92 as their midrange part. This time they don't really have a choice, as they need DX11 to be feature-competitive in the midrange.

Considering the difference in size between Fermi and Cypress versus the performance difference, you have to think that the more you cut it down, the harder it would be for Fermi to be competitive. But a half Fermi would at least have a fully enabled shader part to sell.

One other interesting thing that I want to note is that when he stated that GT200 was AWOL, everyone laughed at him, but then the parts pretty much disappeared from the channels. It remains to be seen, but I seriously doubt that nvidia would just ship a few tens of thousands of fermi cards just to save face like that. If anything they would probably take a hit on cards sold just to keep their market share of the high-end segment.
 
Interesting read. It's likely IMO that the midrange DX11 parts from nVidia are not going to be very compelling. They're gonna need a sub-$200 part that outpaces the 5770 without grotesque power usage, and it's gonna need to get here fast.
 
Interesting read. It's likely IMO that the midrange DX11 parts from nVidia are not going to be very compelling. They're gonna need a sub-$200 part that outpaces the 5770 without grotesque power usage, and it's gonna need to get here fast.

And considering the 5770 is 170mm^2 on a 128bit bus, that doesn't seem likely given the rumored GF104 @ 324mm^2 w/ 256bit bus.

nVidia needs to cut the fat in order to compete with AMD's gaming products. PhysX and CUDA are nice, but not necessary. If nVidia would approach those features as something a second card would handle instead of an "all-in-one" monster GPU solution, then maybe they could right their ship.

There's no doubt that nVidia can produce performance-competitive parts; the problem is their parts aren't cost effective.
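To put rough numbers on that die-size gap, here's a back-of-the-envelope dies-per-wafer sketch. The 300mm wafer size and the classic edge-loss approximation are standard assumptions on my part; TSMC's actual numbers aren't public.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Rough gross dies per wafer: wafer area divided by die area,
    minus the usual correction for partial dies lost at the wafer edge."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

juniper = dies_per_wafer(170)   # 5770-class die  -> ~364 candidate dies
gf104   = dies_per_wafer(324)   # rumored GF104   -> ~181 candidate dies
print(juniper, gf104)           # 364 181
```

Before yield even enters the picture, AMD gets roughly twice as many Juniper candidates per wafer as nVidia would get GF104 candidates, which is the whole cost-effectiveness argument in one number.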
 
And considering the 5770 is 170mm^2 on a 128bit bus, that doesn't seem likely given the rumored GF104 @ 324mm^2 w/ 256bit bus.

nVidia needs to cut the fat in order to compete with AMD's gaming products. PhysX and CUDA are nice, but not necessary. If nVidia would approach those features as something a second card would handle instead of an "all-in-one" monster GPU solution, then maybe they could right their ship.

There's no doubt that nVidia can produce performance-competitive parts; the problem is their parts aren't cost effective.

I think you make a compelling argument here.
 
nVidia needs to cut the fat in order to compete with AMD's gaming products. PhysX and CUDA are nice, but not necessary. If nVidia would approach those features as something a second card would handle instead of an "all-in-one" monster GPU solution, then maybe they could right their ship.
I am completely blind as to the technical difficulties this may or may not entail, but I do see a reason for nVidia to actually go ahead and make the mid-range parts leaner, that is if only their high-end parts will be "converted" to Tesla cards. Maybe they needed a compute card on a gaming card for the high end, so they had to take this hit for GF100, but maybe the rest of the lineup, the smaller brothers, may not have to take the same hit.

Now that I think about it though, would that not be like having a completely different architecture? Is that doable at this stage of the game?

We'll see, I guess. nVidia still has quite some time left for the mid-range parts, perhaps they can pull a pleasant surprise for all of us. I'm always hoping for the positive.
 
I think they should chalk this round up to the red team and actually put a good card out next cycle.

They aren't going to pull a profit in this market, especially given the price/performance ratio of the Cypress cards.
 
IMO, if the GF104 chips have a better memory controller and it's a re-spun die, I see no reason why they can't compete with midrange Cypress, which is probably exactly what nv engineers believe also, otherwise they wouldn't have bothered to waste the money on it....
 
If NV slices Fermi in half for a midrange part, then assuming the power draw is also cut by 50%, that's still a hefty 125W, which requires a slot cooler at least.
 
IMO, if the GF104 chips have a better memory controller and it's a re-spun die, I see no reason why they can't compete with midrange Cypress, which is probably exactly what nv engineers believe also, otherwise they wouldn't have bothered to waste the money on it....

Well since the 448-core 470 is no better than a 5850 in most situations, I don't see how a 256-core chip can compete with any Cypress-based card.

These things should be targeted to compete against Juniper: probably have the top-end part land in the 5770-5830 range and the lower-end part be comparable to a 5770, to try and fill the gap in ATI's lineup.
 
Well since the 448-core 470 is no better than a 5850 in most situations, I don't see how a 256-core chip can compete with any Cypress-based card.

These things should be targeted to compete against Juniper: probably have the top-end part land in the 5770-5830 range and the lower-end part be comparable to a 5770, to try and fill the gap in ATI's lineup.

Ah, I may be getting confused between midrange Cypress and Juniper; however, I was under the impression that the Fermi memory controller was naff and accounts for a certain % performance drop, not to mention a respin may very well fix the power leakage.....
 
More ignorance of the market....

Which market?

[attached image: nvidiaresults.png]
(NV results for their last 2 financial years according to their SEC filing, available from the NV website. Numbers in brackets indicate losses.)
 
Nvidia will compete just fine. According to the internet CEOs, there was no way a 470 could be priced at a $349 MSRP. It is, and Nvidia will do fine at that price point. Likewise their derivative parts will do fine in the mid and lower end markets.
 
Interesting read. It's likely IMO that the midrange DX11 parts from nVidia are not going to be very compelling. They're gonna need a sub-$200 part that outpaces the 5770 without grotesque power usage, and it's gonna need to get here fast.

The 4850 was 260mm^2, on a 256bit bus, and sold for ~$100. I think nV can build a chip with comparable build costs, sell it for twice as much, and do OK with it. Another thing to keep in mind that most people seem to be forgetting: the 5830, 5770 and 5750 all suck. They are grossly inferior to the parts they replace at each given price point- it isn't like nVidia has to come close to hitting one out of the park to be competitive.

One Charlie comment I have to point out-

This means that the silicon for the GPU, before packaging and testing, costs at least $250 for each part. Once you add all the components and make the card, there is no way the GTX470 and GTX480 can make a profit, given what they are being sold for.

The 480 is selling for $500 and he's saying they can't make a profit? That is so utterly moronic it has to be called out. If a magic fairy were delivering 5830 chips to ATi, they would be taking a loss on them using that same logic. A $250 gap between GPU cost and retail price is huge.
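For what it's worth, you can back out what Charlie's $250/die figure would have to imply. A minimal sketch, assuming a ~$5,000 40nm wafer price and a ~529mm^2 GF100 die (both assumed numbers, not anything NV or TSMC has confirmed):

```python
import math

WAFER_DIAMETER_MM = 300  # assumed standard 300mm wafer

def gross_dies(die_area_mm2, wafer_d=WAFER_DIAMETER_MM):
    """Classic gross dies-per-wafer estimate with an edge-loss correction."""
    r = wafer_d / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_d / math.sqrt(2 * die_area_mm2))

wafer_cost = 5000.0      # assumed per-wafer price in dollars
gf100_area = 529         # approximate GF100 die area in mm^2

dpw = gross_dies(gf100_area)               # ~104 candidate dies per wafer
implied_yield = wafer_cost / (250 * dpw)   # yield a $250/die cost would imply
print(dpw, round(implied_yield, 2))        # 104 0.19
```

A ~19% implied yield is the kind of number his claim rests on; if yields are meaningfully better than that, the per-die cost drops well below $250 and the math changes completely.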

Which market?
(NV results for their last 2 financial years according to their SEC filing available from the NV website. Numbers in brackets indicate losses)

The release of the 5xxx parts seemed to boost nV a lot on the financial end.

http://finance.yahoo.com/q/is?s=nvda

Having a price premium during a global recession may not be ideal, but they rebounded from it pretty quickly.
 
http://www.semiaccurate.com/2010/04/05/new-nvidia-code-names-pop/

Considering how close everything he said about Fermi ended up being, I think this at least warrants some discussion.

*LOL*

First Fermi had an NVIO chip...then it didn't...all bases covered...CHECK!

First Fermi had software tessellation...then it had a fixed-function tessellation part...then it used its shaders for tessellation...all bases covered...CHECK!

Then the top bin part had 512 cores...then 448...then 480...all bases covered...CHECK!

I bet you that you could guess just as "accurately" without any inside sources...by covering all the bases 🙄

Char-lie is not the problem though, it's people like you (DarkFourngThreel)...
Not only do you think numbers can be spoken as letters, you also try and make Char-lie into something he is not...like he's not just "semiaccurate"...but actually accurate *ROFL*
 
The 480 is selling for $500 and he's saying they can't make a profit? That is so utterly moronic it has to be called out. If a magic fairy were delivering 5830 chips to ATi, they would be taking a loss on them using that same logic. A $250 gap between GPU cost and retail price is huge.


But nvidia's not getting paid the retail price, is it? No, some profit is being taken by the partners who sell the cards, so the real question is: what is nvidia selling the cards to the partners for? That is the metric that determines whether the 480/470 are profitable for nvidia.

But I wouldn't be surprised if they're not making a penny on them, given that the upper/high-end cards are never really there to make huge profits...that's what the midrange cards are for. Make the name with the high end, sell the midrange to the vast buying public. That, and nvidia's Tesla workstation GPUs are probably the bigger profit center versus the 470/480 cards.

High end cards are so niche that it's almost irrelevant except to a tiny minority of enthusiasts that game on PC, a very, very small market, esp. compared to consoles.

So, I don't have a problem with the notion of neither ATI nor nvidia making huge profits (or any profits at all) on their top-end products, which is probably close to the truth.
 
I would honestly be surprised if Nvidia's avg cost per die is $250. But even if it were, that would still leave some room for a small profit or break-even on the consumer cards, and a healthy profit on the Quadro and Tesla brands.
 
I generally sway toward ATI; however, I think that the GF104 might be a really good card, especially since they now have experience with the GT240 (etc.) and GF100. Let's hope they can bring out some killer cards and drop prices for all of us 🙂
 
Oh, so the 480 is 500 bones?

That is actually good considering I got my ubbber ATI X800 XT PE for 600 bones.

I do, however, think CPU and GPU prices are whack. IMO, a CPU should cost more than your GPU... take it from there and see what you can get. Don't buy budget cards, 57xx blah blah... they're budget cards. Get a 5870, or 5970 I believe is the model..

nVidia is drunk; they're trying to compete with a dual-GPU ATI card with their single-GPU Furbi. Wait til the GTX 580 comes out, dual GPU and will be 500 bones probably. Waiting is the key to success lol
 
Nvidia will compete just fine. According to the internet CEOs, there was no way a 470 could be priced at a $349 MSRP. It is, and Nvidia will do fine at that price point. Likewise their derivative parts will do fine in the mid and lower end markets.
Except the 5850 is like $300~ and performs about the same as the 470, making it irrelevant.
 
IMO, if the GF104 chips have a better memory controller and it's a re-spun die, I see no reason why they can't compete with midrange Cypress, which is probably exactly what nv engineers believe also, otherwise they wouldn't have bothered to waste the money on it....

With the abandonment of GT200 and G92 wayyy too long in the tooth, nVidia almost has no choice but to soldier on and produce a midrange part.

As I said before, there's no doubt that nVidia can produce a part with competitive performance; the problem is that it's requiring more resources for them to do so than it is for ATI.

The 4850 was 260mm^2, on a 256bit bus, and sold for ~$100. I think nV can build a chip with comparable build costs, sell it for twice as much, and do OK with it. Another thing to keep in mind that most people seem to be forgetting: the 5830, 5770 and 5750 all suck. They are grossly inferior to the parts they replace at each given price point- it isn't like nVidia has to come close to hitting one out of the park to be competitive.

The 4850 debuted at a $199 MSRP, and ATI was pricing their 4800s very aggressively to put a major thorn in nVidia's side, considering their chosen $399 and $649 price points.

If nVidia can price their ~320mm^2 + 256bit GF104 parts at and below $200, AMD could do the same with the 5800s @ 334mm^2 + 256bit and have the significant performance advantage.

The 5700s suck only when considering their original MSRP. The fact that the 5770 can already be found for as low as $120 paints a much better picture of the potential situation.
 