
GF104 is 366mm2


Janooo

Golden Member
Aug 22, 2005
1,067
13
81
So I guess you did not read the whole review? Just call me names and hope that will change the facts. Out of 6 benchmarks, the 5870 wins 2, breaks even on 1, and loses 3.

So unless you are calling Xbit a troll site, maybe something should be done about your insults.
They are becoming one by reviewing AMD SSAA against NV MSAA.
 

HurleyBird

Platinum Member
Apr 22, 2003
2,812
1,550
136
Wreckage has been a troll in the past, may still be a troll for all I know, but right now he's arguing with data, not trolling. Why punish good behavior?
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
You all miss the point of the Xbitlabs review: they are trying to show that GF1xx performs much better with DX11 and TESSELLATION enabled.

At these settings, the GTX 460 matches or overcomes the HD 5870 except in Metro 2033, and I believe that in DX11 games with tessellation enabled the GTX 460 is very close, if not in front.

Without tessellation, the HD 5870 is clearly too much for the GTX 460.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
You all miss the point of the Xbitlabs review: they are trying to show that GF1xx performs much better with DX11 and TESSELLATION enabled.

At these settings, the GTX 460 matches or overcomes the HD 5870 except in Metro 2033, and I believe that in DX11 games with tessellation enabled the GTX 460 is very close, if not in front.

Without tessellation, the HD 5870 is clearly too much for the GTX 460.

But the main issue is that the links I posted show that the difference isn't like in the Xbit Labs review. Fermi is better suited for tessellation, but even their tessellation article explains the following:

"In terms of pure performance drops in regards of AMD and Nvidia architecture it is really hard to judge which architecture tessellation algorithms prefer at the moment. In case of S.T.A.L.K.E.R.: Call of Pripyat, Aliens vs. Predator and Colin McRae: Dirt 2 the overall drop in fps is either very small or remains the same across the board. At the same time, when tessellation workload increases, Nvidia’s Fermi products demonstrate a slightly smaller tendency to lose frames. Stone Giant and Heaven benchmarks, as well as Metro 2033 video game clearly prefer Santa Clara designed products. On average, Nvidia based products are from 5 to 10% more efficient in terms of hardware tessellation calculations. But you have to remember that both Stone Giant and Heaven are synthetic benchmarks without any video game implementation at the moment, and Metro 2033 is just one game, which is hardly a determinative factor."

Source: http://www.xbitlabs.com/articles/video/display/hardware-tesselation_11.html#sect0

It is far from the doubled tessellation power that nVidia once claimed over AMD's Cypress architecture.
 

Paratus

Lifer
Jun 4, 2004
17,650
15,846
146
So back on topic:

Question: why can AMD cram ~10 percent more transistors into ~10 percent less space? Is it the design of Cypress vs. Fermi, or is it engineering skill?
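(For concreteness, here is roughly where those ~10 percent figures come from, assuming the spec numbers cited later in this thread: about 2.15B transistors at 334mm2 for Cypress and about 1.95B at 366mm2 for GF104. A quick back-of-the-envelope sketch in Python; all figures approximate:)

```python
# Back-of-the-envelope density comparison. Assumed figures (cited
# later in this thread, approximate): Cypress ~2.15B transistors
# at 334 mm^2, GF104 ~1.95B transistors at 366 mm^2.
cypress_transistors, cypress_mm2 = 2.15e9, 334.0
gf104_transistors, gf104_mm2 = 1.95e9, 366.0

more_transistors = cypress_transistors / gf104_transistors - 1   # ~0.10
less_area = 1 - cypress_mm2 / gf104_mm2                          # ~0.09
density_ratio = (cypress_transistors / cypress_mm2) / (gf104_transistors / gf104_mm2)

print(f"Cypress packs {more_transistors:.0%} more transistors")      # 10%
print(f"into {less_area:.0%} less die area,")                        # 9%
print(f"for ~{density_ratio:.2f}x the transistor density of GF104")  # 1.21x
```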
 

DrPizza

Administrator Elite Member Goat Whisperer
Mar 5, 2001
49,601
167
111
www.slatebrookfarm.com
Thanks, Paratus - Everyone else, please get back on topic. And, please stop calling people trolls who, at least in this thread, weren't trolling. Thanks. -Admin DrPizza
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
So back on topic:

Question: why can AMD cram ~10 percent more transistors into ~10 percent less space? Is it the design of Cypress vs. Fermi, or is it engineering skill?

Fermi is not just for gaming, that's the answer. It does more, or was designed to do more.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
So back on topic:

Question: why can AMD cram ~10 percent more transistors into ~10 percent less space? Is it the design of Cypress vs. Fermi, or is it engineering skill?

ass kisser. um, I mean, yeah happy medium is right. fermi was designed to take over the entire computer world, wash your dishes, do cpu calculations, perform scientific calculations, all while also being able to act as a gpu. cypress was designed to, uh, well, render video images.
 
Last edited:

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Fermi was created as a GPGPU processor with videocard elements, hehe; Cypress was the opposite, a videocard with GPGPU elements. That's why, per mm2, Cypress offers more performance. The GF104, even at its reduced size, is still bigger than Cypress and not faster than the HD 5870. Hopefully a fully enabled GF104 will perform better. It's interesting that a fully enabled GF104, with fewer stream processors, can come so close to the GF100.
 
Last edited:

happy medium

Lifer
Jun 8, 2003
14,387
480
126
Will the GTX 475 (GF104 with all cores enabled) still be a 336mm2 chip, or does it get bigger? That should do rather well vs. the 5870, right?
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
ass kisser. um, I mean, yeah happy medium is right. fermi was designed to take over the entire computer world, wash your dishes, do cpu calculations, perform scientific calculations, all while also being able to act as a gpu. cypress was designed to, uh, well, render video images.
That doesn't make sense. It makes sense in terms of performance/watt, performance/transistor, and performance/mm^2.

However, I don't get how fitting transistors into tighter spaces would follow from that. Are simpler execution units also easier to physically cram into tighter spaces than more robust ones? If so, will AMD face issues trying to cram them all into small dice as time goes by? Or did nVidia just care that much, since they were having such a hard time getting anything 40nm out in volume?
 
Last edited:

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
That doesn't make sense. It makes sense in terms of performance/watt, performance/transistor, and performance/mm^2.

However, I don't get how fitting transistors into tighter spaces would follow from that. Are simpler execution units also easier to physically cram into tighter spaces than more robust ones? If so, will AMD face issues trying to cram them all into small dice as time goes by? Or did nVidia just care that much, since they were having such a hard time getting anything 40nm out in volume?

For some reason, AMD has always been able to fit more transistors into a given die space than nVidia (Intel is the leader here, though). They doubled everything with the RV770 within the same 55nm process and the chip only grew about 30 percent; then they did the same thing again on the new 40nm process, and the chip grew from 266mm2 to 334mm2 while the transistor budget went from 956M to 2.15B, which is a feat because the transistor budget more than doubled but the chip size didn't. The GF104, on the same 40nm process, only crammed in 1.95B, and its die is bigger, despite big transistor savings from the lower stream processor count and FP64 support being cut from 2 of the 3 CUDA blocks. The execution resources of the current AMD architecture take considerably less space than nVidia's approach.
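To put that RV770-to-Cypress scaling in perspective, here's a quick sketch using the approximate figures above (same caveat: these are the rough published numbers, not exact):

```python
# Generational scaling sketch, using the approximate figures above:
# RV770: ~956M transistors at 266 mm^2 (55nm)
# Cypress: ~2.15B transistors at 334 mm^2 (40nm)
rv770_transistors, rv770_mm2 = 956e6, 266.0
cypress_transistors, cypress_mm2 = 2.15e9, 334.0

budget_growth = cypress_transistors / rv770_transistors   # ~2.25x
die_growth = cypress_mm2 / rv770_mm2                      # ~1.26x
density_growth = budget_growth / die_growth               # ~1.79x

print(f"Transistor budget grew {budget_growth:.2f}x,")
print(f"while the die grew only {die_growth:.2f}x,")
print(f"so density improved ~{density_growth:.2f}x in one node shrink")
```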

Will the gtx 475 (gf104 with all cores enabled) still be a 336mm2 chip? Or does it get bigger? That should do rather well vs the 5870 right?

The size stays the same since it's the same chip; it's like the GTX 470, which is basically a fused-off GTX 480: both share an identical chip and die size. The same goes for the GTX 460 and whatever new SKU comes from the GF104. And by the way, nVidia's chip is 366mm2, still slightly bigger than AMD's Cypress at 334mm2, but it seems that adding the superscalar stuff to the GF104 paid off somewhat, as it comes quite close to Cypress in performance per mm2.
 
Last edited:

happy medium

Lifer
Jun 8, 2003
14,387
480
126
The size stays the same since it's the same chip; it's like the GTX 470, which is basically a fused-off GTX 480: both share an identical chip and die size. The same goes for the GTX 460 and whatever new SKU comes from the GF104. And by the way, nVidia's chip is 366mm2, still slightly bigger than AMD's Cypress at 334mm2, but it seems that adding the superscalar stuff to the GF104 paid off somewhat, as it comes quite close to Cypress in performance per mm2.

Thanks, group hug? :)
 

Scali

Banned
Dec 3, 2004
2,495
0
0
It's interesting that a fully enabled GF104, with fewer stream processors, can come so close to the GF100.

I have been saying that all along though.
GF100 is pretty much like the Pentium 4. It was designed to reach a certain goal, but limitations in the manufacturing process prevented the chip from ever reaching that goal.

nVidia quickly learned from their mistakes and applied some very effective tweaks to the architecture in the GF104.

I'd also like to say: while ATi crams more graphics power into fewer transistors, nVidia's designs use less power per transistor. nVidia's past few generations have been extremely large chips (G80, G92, GT200, GF100/GF104), but only the GF100 is extremely power-hungry.
The others generally did quite well in terms of performance/watt despite the disadvantage of having more transistors/a larger die than their competitors.

But if you add it all up, Fermi really can't be compared to Cypress. It has so many features that ATi doesn't have (full C++ support in GPGPU, proper double precision floating point, error detection and correction, parallel tessellator, etc.) that it's not a fair comparison at all. It's apples and oranges. You can argue that these things don't make it a better gaming card until you're blue in the face, but that doesn't take away the fact that these features are implemented in the silicon.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
But if you add it all up, Fermi really can't be compared to Cypress. It has so many features that ATi doesn't have (full C++ support in GPGPU, proper double precision floating point, error detection and correction, parallel tessellator, etc.) that it's not a fair comparison at all. It's apples and oranges. You can argue that these things don't make it a better gaming card until you're blue in the face, but that doesn't take away the fact that these features are implemented in the silicon.

The problem is that Fermi was indeed competing in the same market, and so was being compared to Cypress, and those extra features don't appeal to the majority of buyers in that market.

GF104 is clearly an advance in terms of performance/watt, performance/die size and especially performance/dollar over the GF100.

The big question is: why aren't the prices of the 5850 and 5830 dropping more, and across the board (as opposed to at some e-tailers in the US)?
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
But if you add it all up, Fermi really can't be compared to Cypress. It has so many features that ATi doesn't have (full C++ support in GPGPU, proper double precision floating point, error detection and correction, parallel tessellator, etc.) that it's not a fair comparison at all. It's apples and oranges. You can argue that these things don't make it a better gaming card until you're blue in the face, but that doesn't take away the fact that these features are implemented in the silicon.
So just what are we supposed to compare Fermi to, if not Cypress? It's a totally fair comparison. Those items you listed do add value to Fermi and may sway some people to purchase one instead of an equivalent ATi card. But just because Nvidia chose to add those extra features doesn't change the fact that its main purpose is still to render frames for playing video games. The rest is just window dressing. Useful? Yes. Necessary? No.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
The problem is that Fermi was indeed competing in the same market, and so was being compared to Cypress, and those extra features don't appeal to the majority of buyers in that market.

No, the problem is that people are trying to compare transistor count and such, while the chips are massively different designs with massively different features and capabilities.

I don't see Fermi competing with Cypress as a problem. Do you?
The thing that people don't understand is that it's not like Intel vs AMD, where both make x86 processors, which are nearly identical. They are drop-in replacements for each other.

This is more like OS X vs Windows. They compete, and to a certain extent one can be used instead of the other... but there are also massive architectural differences, and very specific features for either.
You can't make a direct comparison.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
So just what are we supposed to compare Fermi to, if not Cypress? It's a totally fair comparison. Those items you listed do add value to Fermi and may sway some people to purchase one instead of an equivalent ATi card. But just because Nvidia chose to add those extra features doesn't change the fact that its main purpose is still to render frames for playing video games. The rest is just window dressing. Useful? Yes. Necessary? No.

I'm just saying that you can't compare things like transistor count and such when the two are so obviously different.
It's like comparing the line count of the sources of OS X vs Windows.
Whichever has the fewest lines is 'better'? That's nonsense, as the OSes are very distinct.
Obviously you don't waste resources on features you don't implement.
In such a case, line count or transistor count is completely meaningless.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
I don't see Fermi competing with Cypress as a problem. Do you?
The thing that people don't understand is that it's not like Intel vs AMD, where both make x86 processors, which are nearly identical. They are drop-in replacements for each other.

This is more like OS X vs Windows. They compete, and to a certain extent one can be used instead of the other... but there are also massive architectural differences, and very specific features for either.
You can't make a direct comparison.

I don't agree.

The prime market for a GeForce or Radeon family card at the performance level of a Juniper/Cypress/GF100/GF104 is playing games that mostly use the DX9/10/11 APIs, at enjoyable frame rates and image quality.

So I can really drop a GTX 460 or a 5850 in exactly the same slot, in exactly the same machine and play exactly the same games.

How AMD and NVIDIA comply with DX and OpenGL requirements is quite immaterial.

It is obvious that stuff like transistors per mm^2, die size, and number of SPs is only relevant to consumers if it translates into things like price and power consumption.

But we can clearly see what frame rate a card achieves, what power it consumes (and heat it generates) to do so, and what level of IQ is playable.

I know you are interested in other features, but you represent a minority market compared to the market that buys Radeons and GeForces (especially >$100 cards) to play games.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
How AMD and NVIDIA comply with DX and OpenGL requirements is quite immaterial.

That's not the point.
The point was transistor count. And THEN these things DO matter.

I know you are interested in other features, but you represent a minority market compared to the market that buys Radeons and GeForces (especially >$100 cards) to play games.

I disagree.
Things like CUDA/DirectCompute/OpenCL/PhysX and tessellation are very much a part of the gaming experience, which you claim is the primary market.
Frankly I'm through with discussing this. If people don't want to see it, that's their loss. More and more games are using these features, and that's a simple fact.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
That's not the point.
The point was transistor count. And THEN these things DO matter.



I disagree.
Things like CUDA/DirectCompute/OpenCL/PhysX and tessellation are very much a part of the gaming experience, which you claim is the primary market.
Frankly I'm through with discussing this. If people don't want to see it, that's their loss. More and more games are using these features, and that's a simple fact.

Which games? This year has been far less exciting for PhysX than anything else...
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
I disagree.
Things like CUDA/DirectCompute/OpenCL/PhysX and tessellation are very much a part of the gaming experience, which you claim is the primary market.
Frankly I'm through with discussing this. If people don't want to see it, that's their loss. More and more games are using these features, and that's a simple fact.

Still more and more systems are using a Radeon card according to the latest market share numbers.

All the rest is the same story we've been listening to for the last 3 years or so.

When/if those features become important, people will pay for them; paying for them before they're needed, in a market where every few months you can get faster products for about the same price, is insane.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
Which games? This year has been far less exciting for PhysX than anything else...

I'm going with this. I've been hearing one iteration or another of the physx pile for three years now.

Every year it's the same steaming pile about the great new physx title coming out and how groundbreaking it is. It has been three years, and I've seen one game that had some decent effects due to physx... in three years. I can count on both hands the number of games that use gpu physx, three years later.

This year we're hearing about Mafia 2, next year it will be some other title nvidia does a song and dance about.

Physx is a non-starter, it's proprietary, does not add anything meaningful that has not already been seen on a CPU, and at its heart is just a method to try and sell more video cards. Just look at the supposed physx requirements of Mafia 2.

It's a fog clouding the forest that is actual framerate performance. That is what's tangible. Physx has had three years now to establish itself, and it hasn't. When are we going to stop hearing about it?

Nvidia makes some solid performing cards, the difference between my 5870CF and 480SLI setup at my resolution was respectable and tangible when pushing max settings with AA. They need to focus on selling cards on more of the same, solid numbers. The whole physx thing is starting to border on offensive to me as a consumer.

I'm not an idiot, I'm not buying anything because of the vapor that is physx, and I'm tired of nvidia trying to sell me hardware with unfounded claims around the greatness of physx.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Wow, PhysX is such a touchy subject to some.
It's just one of many GPGPU applications. I also mentioned OpenCL/DirectCompute. Some games use DirectCompute in DX11 for post-processing effects. That's GPGPU as well (and nVidia seems to be doing quite okay in such titles).
Did anyone get that? Or is PhysX just blinding you completely?