Anyone care to revisit an old discussion?


SlowSpyder

Lifer
Jan 12, 2005
17,305
1,001
126
Originally posted by: bryanW1995
Originally posted by: Idontcare
Originally posted by: SlowSpyder
Cliffs:
AMD GPU's are smaller than Nvidia GPU's but performance is similar.

It seems to me that you are assuming both architectures have been equally optimized in their respective implementations when making comparisons that involve things like die-size.

Let me use an absurd example to show what I mean.

Suppose NV's decision makers decided they were going to fund GT200 development but gave the project manager the following constraints: (1) development budget is $1m, (2) timeline budget is 3 months, and (3) performance requirements were that it be on-par with anticipated competition at time of release.

Now suppose AMD's decision makers decided they were going to fund RV770 development but gave the project manager the following constraints: (1) development budget is $10m, (2) timeline budget is 30 months, (3) performance requirements were that it be on-par with anticipated competition at time of release, and (4) make it fit into a small die so as to reduce production costs.

Now in this absurd example the AMD decision makers are expecting a product that meets the stated objectives, and having resourced it 10x more so than NV did their comparable project, one would expect the final product to be more optimized (fewer xtors, higher xtor density, smaller die, etc) than NV's.

In industry jargon the concepts I am referring to here are called R&D Efficiency and Entitlement.

Now of course we don't know whether NV resourced the GT200 any less than AMD resourced the RV770, and likewise for Fermi vs. Cypress. But what we can't do is conclude from die-size and xtor-density comparisons that one should be superior to the other in those metrics without having access to the budgetary information that factored into the project-management decision making and tradeoff downselection.

This is no different than comparing, say, AMD's PhII X4 against Bloomfield, which is nearly identical in die size. You could argue that Bloomfield shows AMD should/could have implemented PhII X4 as a smaller die, or should/could have made PhII X4's performance higher (given that Intel did)...or you could argue that AMD managed to deliver 90% of the performance while only spending 25% of the coin.

It's all how you want to evaluate the metrics of success in terms of entitlement or R&D efficiency (spend 25% the budget and you aren't entitled to expect your engineers to deliver 100% the performance, 90% the performance is pretty damn good).

So we will never know how much of GT200's die size is attributable to GPGPU constraints, versus simply being the result of timeline and budgetary tradeoffs made at NV's project management level, versus how similar tradeoff decisions were made at AMD's project management level.

good point, but if anything here nvidia is the one with 4x the R&D budget. how bad would it be to come in 6 months late, be larger, AND cost 4x the R&D budget? that could be the ultimate gpu trifecta.

Right.... I completely understand your post IDC, but it's been tossed around here that Nvidia spends more on R&D than AMD is worth as a company. With Nvidia being under the impression before the GTX2x0/48x0 launch that they were going to own the high end, I would think they knew they'd be selling a LOT of chips, so any savings per chip would add up to a very significant amount.

Selling GTX280's by the tens of thousands, maybe hundreds of thousands, it would make much more sense for them to try and make it as small as they can, say to save an average of $20/die. Just pulling numbers out of my ass. But a few bucks times thousands and thousands of GPU's, that adds to the bottom line.
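Just to put that in concrete terms, here's a quick back-of-the-envelope sketch (Python); every number in it is made up for illustration, same as my $20 figure above:

[code]
# Back-of-the-envelope: a hypothetical per-die saving multiplied over a few volumes.
# All numbers are illustrative, not real NVIDIA figures.
per_die_saving = 20.0  # USD, the made-up "$20/die" figure from above

for units_sold in (10_000, 100_000, 1_000_000):
    total_saving = per_die_saving * units_sold
    print(f"{units_sold:>9,} GPUs x ${per_die_saving:.0f}/die = ${total_saving:,.0f} saved")
[/code]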

I can't find it right now, but do we know how many transistors are in an RV770? Isn't it about 250mm2? We know the GTX285 is about 1.4billion transistors, and 460mm2 (sorry for using both internal code names and product names, I don't know what the 55nm shrink of the GT200 is... GT200b?). Do their sizes scale as we'd expect as the transistor count increases?
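For the density question, here's a rough sketch of the math, assuming the commonly cited RV770 figures (roughly 956 million transistors on about 256mm2 - treat those as ballpark numbers, I'm going from memory) alongside the GTX285 numbers above:

[code]
# Rough transistor-density comparison. The RV770 figures (~956M xtors, ~256 mm^2)
# are commonly cited ballpark numbers, not official data; GT200b uses the figures above.
chips = {
    "RV770 (HD 4870)": (956e6, 256.0),   # (transistors, die area in mm^2)
    "GT200b (GTX 285)": (1.4e9, 460.0),
}

for name, (transistors, area_mm2) in chips.items():
    density = transistors / area_mm2 / 1e6  # millions of transistors per mm^2
    print(f"{name}: {density:.2f} Mxtors/mm^2")

# How big would GT200b be if it were packed at RV770's density?
rv770_density = chips["RV770 (HD 4870)"][0] / chips["RV770 (HD 4870)"][1]
print(f"GT200b at RV770 density: ~{chips['GT200b (GTX 285)'][0] / rv770_density:.0f} mm^2")
[/code]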

I guess what I'm getting at is that we hear that the GT200 was aimed for GPGPU since the beginning, but we still don't know how, or if it really even was and it was just marketing by Nvidia to try and explain why their large chip performs near a much smaller chip. To the end user it doesn't matter, but if they are trying to explain to shareholders why their cost is higher it might.

Thanks for the 20% number Keys, I just wish there were more details than that I guess.


Originally posted by: Keysplayr
Originally posted by: HurleyBird
Rule 1:

Don't feed the troll.

No matter how stupid his comment, or how brilliant your response to him, Wreckage wins the moment you decide to reply to one of his inflammatory posts.
Ignoring a troll has never caused a thread to derail, guys.

Seconded. Ignore from this point on?

Agreed by me. I will completely ignore his posts in this thread unless they are relevant to the conversation.
 

SSChevy2001

Senior member
Jul 9, 2008
774
0
0
Originally posted by: Genx87
Originally posted by: SSChevy2001
The Nvidia card is just doing PhysX calculations, not rendering.

That's like AMD saying well we don't support Nvidia GPUs with our CPUs, because it might cause problems.

Funny how it works just fine once the limitations are removed by a patch.
http://www.youtube.com/watch?v=Fgp1mYRYLS0

Except what could a CPU calculate that would cause a rendering issue? So it really isn't the same now is it?
What could a PhysX card calculate that would cause a rendering issue?

Again, the only thing a PhysX PPU or dedicated PhysX GPU does is offload PhysX calculations from the CPU; they're not used for rendering.

So yes it is the same.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
Originally posted by: SlowSpyder
Right.... I completely understand your post IDC, but it's been tossed around here that Nvidia spends more on R&D than AMD is worth as a company. With Nvidia being under the impression before the GTX2x0/48x0 launch that they were going to own the high end, I would think they knew they'd be selling a LOT of chips, so any savings per chip would add up to a very significant amount.

As Intel does over AMD. R&D money is invested to accomplish different objectives, depending on the decision makers (who decide which projects to fund) and the project managers (who decide what features and timeline the project's milestones will entail).

Intel spends lots of money making sure that their products will be able to command an ASP that entitles them to expect >50% gross margins. AMD does not, but they invest less to begin with so they aren't entitled to 50% gross margins either.

(I'm assuming folks here understand what the term entitlement means in the industry)

Originally posted by: SlowSpyder
Selling GTX280's by the tens of thousands, maybe hundreds of thousands, it would make much more sense for them to try and make it as small as they can, say to save an average of $20/die. Just pulling numbers out of my ass. But a few bucks times thousands and thousands of GPU's, that adds to the bottom line.

It is all about tradeoffs. Would you rather spend $1m of R&D budget on developing a way to reduce the cost per chip by $2 or would you rather spend $1m on developing an architecture feature that is expected to improve the ASP by $5? If you are Intel and you have the budget you are likely inclined to invest in both, but if you are AMD or NV your operating margins don't really afford you the luxury of doing that, you are going to have to choose based on priorities.
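A minimal sketch of that tradeoff using the hypothetical numbers above (ignoring yields, gross margin and the time value of money entirely):

[code]
# Hypothetical tradeoff: $1M of R&D spent on a $2/chip cost reduction
# vs. $1M spent on a feature expected to lift ASP by $5/chip.
rnd_budget = 1_000_000.0  # USD, hypothetical

options = {
    "cost reduction ($2/chip)": 2.0,
    "ASP uplift ($5/chip)": 5.0,
}

for name, per_chip_gain in options.items():
    breakeven_units = rnd_budget / per_chip_gain
    print(f"{name}: breaks even at {breakeven_units:,.0f} chips")

# Net payoff of each option at an assumed lifetime volume.
expected_volume = 2_000_000  # hypothetical unit volume
for name, per_chip_gain in options.items():
    net = per_chip_gain * expected_volume - rnd_budget
    print(f"{name}: net ${net:,.0f} at {expected_volume:,} units")
[/code]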

Given NV's die size we can guess that they chose to prioritize elevating GPGPU performance over cost reduction...but at best we can only guess at what the actual decision making process was at the project management level.

Originally posted by: SlowSpyder
I can't find it right now, but do we know how many transistors are in an RV770? Isn't it about 250mm2? We know the GTX285 is about 1.4billion transistors, and 460mm2 (sorry for using both internal code names and product names, I don't know what the 55nm shrink of the GT200 is... GT200b?). Do their sizes scale as we'd expect as the transistor count increases?

I guess what I'm getting at is that we hear that the GT200 was aimed for GPGPU since the beginning, but we still don't know how, or if it really even was and it was just marketing by Nvidia to try and explain why their large chip performs near a much smaller chip. To the end user it doesn't matter, but if they are trying to explain to shareholders why their cost is higher it might.

An alternative approach to this gedankenexperiment is rather than attempting to prove the assertion (hypothesis), we do the opposite and attempt to rationalize the ramifications of the assertion being untrue.

For example rather than trying to locate proof that the GT200 is larger than RV770 solely due to GPGPU architecture requirements (and by extension making the assumption that lack of proof implies GT200 is larger for no good reason) you could instead start with the assumption that the GT200 is NOT larger than RV770 for any reason related to GPGPU requirements and proceed to list the ramifications of this assumption.

For instance this assumption would then carry with it the requirement that the GT200 is needlessly large for effecting strictly GPU purposes, as RV770 provides proof of that, and so we must also conclude Nvidia created a needlessly large die that was a waste of resources and R&D budget.

If we are to believe that NV spent 4x more developing GT200 than AMD spent developing the RV770 we must then conclude that the engineers and project management at NV are significantly incompetent at effecting their duties. While I have no doubt such a view is easily adoptable by folks who have never worked in a professional setting, I doubt many people who have worked in a professional setting would really accept the proposition that the people working at NV are that different than the people working at AMD in terms of skillset and capability.

Versus say resigning themselves to the more likely conclusion that if the products appear to have discrepancies in their attributes then perhaps those differences are merely attributable to nothing more than differing priorities at the project management level when normalized against timeline and budgetary considerations as well as risk appetite of executive management.

So what are you more inclined to believe? That NV really is just poorly managed and is stocked with IQ deficient project managers and engineers who could do no better than develop a bloated GT200 chip that performed at best to that of an RV770, or that perhaps the GT200 really was developed with a set of priorities and program objectives that perhaps exceeded that of the RV770 and as such resulted in a larger xtor budget and production cost allowance?

If this were a conversation regarding Istanbul versus Dunnington how would you view the rationalizations of xtor count differences, performance differences, release timeline differences, R&D budgetary differences for these two 6-core cpu's? Would you seek to attribute to incompetence that which could be explained simply by presuming differing priorities were at play during the project definition phase years and years ago for each chip?
 

HurleyBird

Platinum Member
Apr 22, 2003
2,684
1,268
136
Originally posted by: Idontcare
If we are to believe that NV spent 4x more developing GT200 than AMD spent developing the RV770 we must then conclude that the engineers and project management at NV are significantly incompetent at effecting their duties. While I have no doubt such a view is easily adoptable by folks who have never worked in a professional setting, I doubt many people who have worked in a professional setting would really accept the proposition that the people working at NV are that different than the people working at AMD in terms of skillset and capability.

I think you're vastly oversimplifying things. One disappointing product (ex. NV30) doesn't prove that an engineering team is incompetent, and one great product (ex. G80, RV770) doesn't prove the opposite. There are always outliers, and if you want to judge an engineering team you have to look at the trend, as well as their resources.

However, I think the difference in size is at least partially explained looking at the design philosophies for each architecture:

NV's shaders run at a much higher clockspeed than AMD's, and in general the higher the target clockspeed the further apart you want your transistors in order to limit signal interference, which obviously lowers density. Nvidia's 1 ALU/thread design also has a higher transistor cost than AMD's 5 ALU/thread design for the same theoretical performance (as AMD can cut down on some redundant elements in each ALU); the downside is that AMD's design is very reliant on the driver to attain good practical performance. And of course, when your target is a monster 512-bit interface you may not be thinking as much about transistor efficiency as when you're targeting a much narrower bus.

Lastly, AMD needs fewer ROPs/TMUs since theirs are clocked higher, possibly because much of NVIDIA's energy budget might be being spent on getting their shaders running at such a high speed (an easy test would be clocking the shaders down to the level of the core, then overclocking both by the same amount at the same time to see how much further the core clock can go). In general NVIDIA has superior TMUs, while AMD has superior ROPs, and testing performance hits with increasing levels of AA/Aniso will show that to be true.
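To put the 1 ALU/thread vs. 5 ALU/thread point in numbers, here's a rough sketch of theoretical shader throughput using the commonly quoted specs for GTX 285 and HD 4870 (assumed paper specs and the usual marketing FLOP counting, not measured results):

[code]
# Theoretical shader throughput from commonly quoted specs (assumed, not measured):
# GTX 285: 240 scalar SPs at a 1476 MHz shader clock; HD 4870: 160 x 5-wide ALUs at 750 MHz.
def gflops(alus, clock_mhz, flops_per_alu_per_clock):
    return alus * clock_mhz * flops_per_alu_per_clock / 1e3

gtx285_mad = gflops(240, 1476, 2)        # MAD = 2 flops per clock per SP
gtx285_mad_mul = gflops(240, 1476, 3)    # the marketing figure also counts a co-issued MUL
hd4870 = gflops(160 * 5, 750, 2)         # each of the 800 ALUs can issue a MAD

print(f"GTX 285 (MAD only):  {gtx285_mad:6.0f} GFLOPS")
print(f"GTX 285 (MAD + MUL): {gtx285_mad_mul:6.0f} GFLOPS")
print(f"HD 4870 (800 ALUs):  {hd4870:6.0f} GFLOPS")
# The 4870's bigger paper number only shows up when the compiler keeps all 5 lanes
# of each VLIW unit busy, which is exactly the "reliant on the driver" caveat above.
[/code]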
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: HurleyBird
Lastly, AMD needs fewer ROPs/TMUs since theirs are clocked higher, possibly because much of NVIDIA's energy budget might be being spent on getting their shaders running at such a high speed (an easy test would be clocking the shaders down to the level of the core, then overclocking both by the same amount at the same time to see how much further the core clock can go). In general NVIDIA has superior TMUs, while AMD has superior ROPs, and testing performance hits with increasing levels of AA/Aniso will show that to be true.

ATi's TMUs are also superior. Even though nVidia's TMUs are great performers in theoretical benchmarks, outperforming ATi, in real life ATi's TMU performance is much closer to its theoretical numbers than nVidia's is.

The GT200's 80 TMUs don't make it run twice as fast as the RV770 with its 40 TMUs (setting aside the other factors in a game, like shaders, etc.).
 

HurleyBird

Platinum Member
Apr 22, 2003
2,684
1,268
136
Originally posted by: evolucion8
Originally posted by: HurleyBird
Lastly, AMD needs fewer ROPs/TMUs since theirs are clocked higher, possibly because much of NVIDIA's energy budget might be being spent on getting their shaders running at such a high speed (an easy test would be clocking the shaders down to the level of the core, then overclocking both by the same amount at the same time to see how much further the core clock can go). In general NVIDIA has superior TMUs, while AMD has superior ROPs, and testing performance hits with increasing levels of AA/Aniso will show that to be true.

ATi's TMUs are also superior. Even though nVidia's TMUs are great performers in theoretical benchmarks, outperforming ATi, in real life ATi's TMU performance is much closer to its theoretical numbers than nVidia's is.

I think you have it kind of backwards.

Mind you that RV770 does less AF work than G200, which in turn does less than RV870.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: HurleyBird
I think you have it kind of backwards.

Mind you that RV770 does less AF work than G200, which in turn does less than RV870.

Yeah, but what I mean is that in synthetic benchmarks like pixel/texture fillrate the GT200 scores considerably higher than the RV770, but in real life the performance impact of using AF on both cards is minimal.
 

HurleyBird

Platinum Member
Apr 22, 2003
2,684
1,268
136
Did you read the link? A ~15% performance hit for RV870 running Crysis at 16x AF is hardly minimal. I wouldn't describe it as a 'crippling' loss in performance, but it is substantial. G200's <5% performance hit, on the other hand, is what I envision when I think of a 'minimal' performance hit.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Originally posted by: Idontcare
So what are you more inclined to believe? That NV really is just poorly managed and is stocked with IQ deficient project managers and engineers who could do no better than develop a bloated GT200 chip that performed at best to that of an RV770, or that perhaps the GT200 really was developed with a set of priorities and program objectives that perhaps exceeded that of the RV770 and as such resulted in a larger xtor budget and production cost allowance?

Well, from everything that both companies have bandied about since the 4xxx and gt200 releases, ati chose the small ball strategy with 3xxx and stuck with it, while nvidia chose the high end strategy with g80 and has also stuck with it. Many ati veterans left over disputes about that strategy and it nearly ripped the company apart, but they appear to be reaping huge dividends from it now. Nvidia has not exactly hidden their desire to branch out into other areas besides mid to high end gpus, and their strategy could very well end up being the best one in the long term. However, their strategy is very complicated because they have multiple objectives (gpu performance AND gpgpu performance) while ati only has to focus on their "one thing" of straight gpu performance.

Btw, I don't really know if 4x the R&D budget is accurate, benskywalker should be able to refute/prove that postulate, but I do know that he has stated on many recent occasions that nvidia's r&d budget was higher than ati's total revenue last year. Part of ati's r&d is probably charged to the cpu division as they work on fusion, possibly a VERY large part.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
Originally posted by: HurleyBird
Originally posted by: Idontcare
If we are to believe that NV spent 4x more developing GT200 than AMD spent developing the RV770 we must then conclude that the engineers and project management at NV are significantly incompetent at effecting their duties. While I have no doubt such a view is easily adoptable by folks who have never worked in a professional setting, I doubt many people who have worked in a professional setting would really accept the proposition that the people working at NV are that different than the people working at AMD in terms of skillset and capability.

I think you're vastly oversimplifying things.

Gee, ya think?

I gave a couple hypothetical examples of those tradeoffs, intentionally simplified so people wouldn't get lost in trying to follow the train of thought in understanding how entitlement and R&D efficiency permeate the decision making process at the executive and project management level.

I'm writing a sub-single-page post in a public forum...ya think it might require one to make a few simplifications given the breadth of the subject matter? :laugh:

I thought I had gone to sufficiently great lengths to make it obvious that the point of my post is that project management of these sorts of products entails so many tradeoffs (development cost, DFM, risk to timeline, risk to deliverables, etc.) aimed at maximizing so many non-publicly stated program objectives (production cost, performance targets, ASP entitlement, etc.) that we simply cannot reduce the assessment of such products to a few arbitrary metrics of evaluation like die size or xtor counts and expect to extract anything truly meaningful (i.e. right for the right reasons) from such analyses.
 

HurleyBird

Platinum Member
Apr 22, 2003
2,684
1,268
136
No, I mean your deductive process was oversimplified, and that caused you to make logical errors. According to your post as I understand it, your reasoning is such:

1. If G200's huge die size is not caused by an increase in GPGPU, then G200 is 'needlessly large' for the performance it gives.
2. If Nvidia also spent significantly more on R&D than AMD (as Nvidia bragged of doing before the G200 release) then the Nvidia engineering team is incompetent.
3. Either Nvidia's team is incompetent or they had different design priorities (like GPGPU) that increased xtor budget and decreased density.
4. Because Nvidia's engineering team cannot possibly be that incompetent, the latter is most likely true.

All of your reasoning from point 2 onward is flawed because you are ignoring the existence of outliers that are exceptionally bad or good such as NV30 or G80. You can't just take budget, performance, and size for one product and decide whether an engineering team is competent or not, that's like taking two years out of the temperature record to argue whether global warming is a threat or not. In order to make such a conclusion you need to look at the trend, not one or two isolated incidents.
 

crazylegs

Senior member
Sep 30, 2005
779
0
71
Originally posted by: Wreckage
Originally posted by: SlowSpyder
AMD GPU's are smaller than Nvidia GPU's but performance is similar.

Are Nvidia GPU's really more aimed towards GPGPU?
Not really. NVIDIA was faster in games than ATI, in fact the GTX295 is still the fastest card available. The HD4xxx series was always behind.

The extra GPGPU capability is just icing on top of the gaming cake. Look how well Batman AA plays when PhysX is enabled.

With many games playable on even mid range cards, you need to offer your customers something more. I think this is why NVIDIA outsells ATI 2 to 1.


HOW is he allowed to get away with this?

:(

This looked like a really interesting thread, for about 3 posts. The 4th completely ruined it, derailing with total FUD.

Person A claims:

''AMD GPU's are smaller than Nvidia GPU's but performance is similar.'' FACT. Note the key facts 'SMALLER' and 'SIMILAR PERFORMANCE'

how the hell is:

''Not really. NVIDIA was faster in games than ATI, in fact the GTX295 is still the fastest card available. The HD4xxx series was always behind.''

A legitimate response?????

Then to round it off:

''I think this is why NVIDIA outsells ATI 2 to 1. ''

Where is the relevance to the thread? There is none.
 

alyarb

Platinum Member
Jan 25, 2009
2,444
0
76
haha i know.

it's like saying "AMD's cards are only about 3% slower than nvidia's, yet way less costly."

"not really. nvidia is the fastest. in fact they still have the fastest card today...."

i used to have a voicebox where you press buttons and it will automatically play quotes from the austin powers movies. i think that's what this is.

there will be a new thread soon with OP "Looking for low-profile card for slimline HTPC. I see the 4350 is only $30, is that good enough?"
wreckage will come in....: "Not really. NVIDIA is faster in games than ATI. In fact they still have the fastest card available. these days, you have to offer your customers more. nvidia does that. check out batman with physx."
 

yh125d

Diamond Member
Dec 23, 2006
6,907
0
76
Originally posted by: Wreckage
Originally posted by: v8envy


I'm assuming you are talking about anti-aliasing, while I am talking about PhysX as it relates to GPGPU and how that can apply to gaming.

Look at Folding@home for example. NVIDIA out performs ATI by a large margin.

There's definitely no refuting that; my old 9800gt got more PPD than my 4890. My question is, was the difference so huge because of the underlying architectural design of nvidia's chips as of late being more optimized for GPGPU than ATI's, or because of nvidia's better driver optimization for popular GPGPU apps, or is it some combo?
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
Originally posted by: HurleyBird
No, I mean your deductive process was oversimplified, and that caused you to make logical errors. According to your post as I understand it, your reasoning is such:

1. If G200's huge die size is not caused by an increase in GPGPU, then G200 is 'needlessly large' for the performance it gives.
2. If Nvidia also spent significantly more on R&D than AMD (as Nvidia bragged of doing before the G200 release) then the Nvidia engineering team is incompetent.
3. Either Nvidia's team is incompetent or they had different design priorities (like GPGPU) that increased xtor budget and decreased density.
4. Because Nvidia's engineering team cannot possibly be that incompetent, the latter is most likely true.

All of your reasoning from point 2 onward is flawed because you are ignoring the existence of outliers that are exceptionally bad or good such as NV30 or G80. You can't just take budget, performance, and size for one product and decide whether an engineering team is competent or not, that's like taking two years out of the temperature record to argue whether global warming is a threat or not. In order to make such a conclusion you need to look at the trend, not one or two isolated incidents.

Wow, that's what you took away from reading my post? I learn something every day. Thanks for sharing.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Originally posted by: yh125d
Originally posted by: Wreckage
Originally posted by: v8envy


I'm assuming you are talking about anti-aliasing, while I am talking about PhysX as it relates to GPGPU and how that can apply to gaming.

Look at Folding@home for example. NVIDIA out performs ATI by a large margin.

There's definitely no refuting that; my old 9800gt got more PPD than my 4890. My question is, was the difference so huge because of the underlying architectural design of nvidia's chips as of late being more optimized for GPGPU than ATI's, or because of nvidia's better driver optimization for popular GPGPU apps, or is it some combo?

my understanding of the situation is that nvidia's cards are easier to adapt to programs like f@h, and they had a LOT more cards sold for the past couple of years, so the f@h team focused on them recently. AMD's 1 "big" and 4 "little" sp's are supposedly more difficult to optimize for as well, so it is very hard to tell which card is inherently superior. If nvidia lags too far behind with gt300 we might see the f@h team spend more time optimizing for amd cards and a turn of the tables, but never fear, it is unlikely to happen until the NEXT gen comes out...
 

alyarb

Platinum Member
Jan 25, 2009
2,444
0
76
Protein folding is a great example of heavily data-dependent work. you can vectorize it until the cows come home, but there is still going to be lots of recursion to deal with. i'm going to take a little snapshot of a good chart/paragraph from anand's and derek's first RV770 paper.

http://dl.getdropbox.com/u/594924/rv770_threading.PNG

The phrase anand uses, "has the potential to be slower...or faster," is exactly how to sum up the differences in geometric thread encapsulation between AMD and nVIDIA: a few big shaders vs. more numerous, more independent smaller shaders. nvidia's layout is just plain superior here. each shader handles fewer threads and can be smaller. it's better to have a bigger TPC sharing a common cache/thread manager than to have 160 5-way shaders all taking instructions from the same thread manager. nvidia has better compartmentalization and encapsulation, and it affords them a good compromise no matter what the logical structure of your program may be. nVIDIA has two layers of clustering. The groups of SPs form an SM and each SM has a thread manager and cache. Three SMs form a TPC, and each TPC has a bigger thread manager and a bigger cache. These are not execution units, but it takes transistors for them to function. Simply dividing your shaders into distinct logical clusters, with each hierarchical transition moderated by a traffic cop, also costs transistors, and that logic is less xtor-dense than your functional units, so the overall density decreases.

AMD on the other hand comes with nothing but vast execution resources. The radeon has one layer of logical clustering: each SIMD core of 16 shaders gets a cache and thread manager, and ten of those cores get instructions from the global manager. Density is higher because there is much less compartmentalization and vastly more execution resources, and in the unlikely case that you need to execute 800 totally independent scalar threads, it'll blow nvidia away. no such useful program exists, however, and the more data dependencies you have, the more nvidia's architecture will match and eventually surpass that of the radeon. Therefore nvidia's chip-bloating thread management strategy is indeed inspired by realistic performance goals.
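to put some rough numbers on the two layouts, here's a quick tally using the usual published GT200/RV770 breakdowns (the "scheduler block" counts are just bookkeeping for the argument, not an exact inventory of the hardware):

[code]
# Rough tally of the two clustering schemes, using the usual published breakdowns
# (assumed for illustration): GT200 = 10 TPCs x 3 SMs x 8 SPs, RV770 = 10 SIMDs x 16 x 5-wide.
gt200_tpcs, sms_per_tpc, sps_per_sm = 10, 3, 8
gt200_sps = gt200_tpcs * sms_per_tpc * sps_per_sm
gt200_sched = gt200_tpcs + gt200_tpcs * sms_per_tpc   # one block per TPC plus one per SM

rv770_simds, units_per_simd, lanes_per_unit = 10, 16, 5
rv770_alus = rv770_simds * units_per_simd * lanes_per_unit
rv770_sched = rv770_simds                              # one sequencer/cache block per SIMD core

print(f"GT200 : {gt200_sps} scalar SPs, ~{gt200_sched} scheduler/cache blocks (two levels)")
print(f"RV770 : {rv770_alus} ALUs in {rv770_simds * units_per_simd} VLIW units, "
      f"~{rv770_sched} scheduler/cache blocks (one level)")
# more control logic per ALU on GT200 = lower density but better behavior on branchy,
# data-dependent work; RV770 spends the transistors on raw lanes instead.
[/code]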

Geometry setup and pixel shading are not good examples of data dependent work, which is why the smaller, less versatile radeons can hold their own in Direct3D. nVIDIA is trying to leverage their architecture to expand into computing markets that are incredibly more diverse than 3D rendering and that means they need to implement architectural strategies several generations ahead of time.

it was good that AMD split RV870 into hemispheres with two independent thread setups, but i'm not sure the 5-way shader can last forever since data dependencies are more often the norm than the exception. feel free to shoot my logic down if something is wrong with it.
 

HurleyBird

Platinum Member
Apr 22, 2003
2,684
1,268
136
Originally posted by: Idontcare
Wow, that's what you took away from reading my post? I learn something every day. Thanks for sharing.

Well, instead of playing the asshole card you could tell me what you were really trying to say, because your post reads exactly like the points I listed. Allow me to demonstrate:

Originally posted by: Idontcare
you could instead start with the assumption that the GT200 is NOT larger than RV770 for any reason related to GPGPU requirements and proceed to list the ramifications of this assumption.

For instance this assumption would then carry with it the requirement that the GT200 is needlessly large for effecting strictly GPU purposes, as RV770 provides proof of that, and so we must also conclude Nvidia created a needlessly large die that was a waste of resources and R&D budget.

1. If G200's huge die size is not caused by an increase in GPGPU, then G200 is 'needlessly large' for the performance it gives.

Originally posted by: Idontcare
If we are to believe that NV spent 4x more developing GT200 than AMD spent developing the RV770 we must then conclude that the engineers and project management at NV are significantly incompetent at effecting their duties.

2. If Nvidia also spent significantly more on R&D than AMD (as Nvidia bragged of doing before the G200 release) then the Nvidia engineering team is incompetent.

Again, this is the part where it looks like you're making a wild assumption. One underperforming product does not give you enough evidence to conclude the engineering staff is incompetent.

Originally posted by: Idontcare
Versus say resigning themselves to the more likely conclusion that if the products appear to have discrepancies in their attributes then perhaps those differences are merely attributable to nothing more than differing priorities at the project management level when normalized against timeline and budgetary considerations as well as risk appetite of executive management.

So what are you more inclined to believe? That NV really is just poorly managed and is stocked with IQ deficient project managers and engineers who could do no better than develop a bloated GT200 chip that performed at best to that of an RV770, or that perhaps the GT200 really was developed with a set of priorities and program objectives that perhaps exceeded that of the RV770 and as such resulted in a larger xtor budget and production cost allowance?

3. Either Nvidia's team is incompetent or they had different design priorities (like GPGPU) that increased xtor budget and decreased density.
4. Because Nvidia's engineering team cannot possibly be that incompetent, the latter is most likely true.

Seems pretty straightforward to me, but go ahead and tell me what you really meant to say.
 

Vertibird

Member
Oct 13, 2009
43
0
0
I have a simple question.

Is there something subjectively better about having Physx enabled in game? The reason I mention this is because lots of times people just compare frame rates @ X and Y graphics settings.
 

alyarb

Platinum Member
Jan 25, 2009
2,444
0
76
it's just a way to make things appear more real even though they are not from an interactive standpoint. frankly i think physx is just another form of eye candy, and it requires a series of developments in order to mature. sure, fragmentation from some explosions looks better, cloth in motion looks better; that kind of thing. but you still don't have widespread implementation of NPCs interacting with physicalized objects. PhysX has done nothing to incorporate physicalized objects into gameplay devices any more than proprietary physics engines have. it's just another light show.