"AMD's GPU's are smaller than Nvidia's GPU's but performance is similar." - 100% true, not at all meant to be a 'zoner' comment, it's simply true.Originally posted by: Wreckage
I answered both his questions. Why not address that instead of chasing after me?

Originally posted by: alyarb
i'm just coming in to say i seriously laughed out loud (i'm by myself) when I read wreckage's response to the OP. he didn't respond to a single question the OP presented.
The GTX 295 is still the fastest. Check out Batman with PhysX. Radeon 4000 was always behind, and they are outsold 2:1.
heh, thank you wreckage. everyone appreciates it.
My statements were in direct response to his summary. I think he and the other "zoners" just wanted a one sided discussion.
"AMD GPU's are smaller than Nvidia GPU's but performance is similar.
Are Nvidia GPU's really more aimed towards GPGPU?
"
Both statements are false as any gaming benchmark will show you.
The GTX 285 was the fastest GPU, and the GTX 295 is still the fastest card. "Similar performance" is not true, sorry.

Originally posted by: SlowSpyder
- 100% true, not at all meant to be a 'zoner' comment, it's simply true.
Ah, now you add "than AMD GPUs". It was more implying that it hampered their gaming performance.

"Are Nvidia GPUs really more aimed toward GPGPU" (than AMD GPUs).
That's why I pointed to PhysX, Folding@home, video transcoding (without the CPU), the hundreds of CUDA apps, etc, etc. Enough proof to fill a galaxy really.

I'd like to know what makes people say that Nvidia GPUs are more GPGPU capable than AMD GPUs. Nvidia's GT200 uses something close to 400 million more transistors for what amounts to very similar performance in the gaming world. Is this extra silicon used for GPGPU? If so, how? How do the changes both companies made to their next-gen GPUs add to the abilities of those GPUs to act as GPGPUs?
By wreck, you mean give an opposing viewpoint. That's what an open discussion is all about.

Thanks again for going out of your way to wreck a thread.
Actually you have it backwards about the video encoding. Nvidia's solution depends more on the CPU.

Originally posted by: Wreckage
That's why I pointed to PhysX, Folding@home, video transcoding (without the cpu), the hundreds of CUDA apps, etc, etc. Enough proof to fill a galaxy really.
http://www.legitreviews.com/article/978/2/

Since the ATI Radeon HD 4770 was faster in the benchmarks, it goes to show more work is being offloaded and done on the GPU than the CPU. It seems that on the NVIDIA solution more of the work is being dumped to the processor, and the dual-core AMD Athlon X2 4850e isn't that quick.
Hence, his nickname, Wreckage aka nVckage aka Crapckage aka derailckage aka FUDckage etc etc etc...

Originally posted by: SlowSpyder
Thanks again for going out of your way to wreck a thread.
good point, but if anything here nvidia is the one with 4x the R&D budget. how bad would it be to come in 6 months late, be larger, AND cost 4x the R&D budget? that could be the ultimate gpu trifecta.

Originally posted by: Idontcare
It seems to me that you are assuming both architectures have been equally optimized in their respective implementations when making comparisons that involve things like die-size.

Originally posted by: SlowSpyder
Cliffs:
AMD GPUs are smaller than Nvidia GPUs but performance is similar.
Let me use an absurd example to show what I mean.
Suppose NV's decision makers decided they were going to fund GT200 development but gave the project manager the following constraints: (1) development budget is $1m, (2) timeline budget is 3 months, and (3) performance requirements were that it be on-par with anticipated competition at time of release.
Now suppose AMD's decision makers decided they were going to fund RV770 development but gave the project manager the following constraints: (1) development budget is $10m, (2) timeline budget is 30 months, (3) performance requirements were that it be on-par with anticipated competition at time of release, and (4) make it fit into a small die so as to reduce production costs.
Now in this absurd example the AMD decision makers are expecting a product that meets the stated objectives, and having resourced it 10x more so than NV did their comparable project, one would expect the final product to be more optimized (fewer xtors, higher xtor density, smaller die, etc) than NV's.
In industry jargon the concepts I am referring to here are called R&D Efficiency and Entitlement.
Now of course we don't know whether NV resourced the GT200 any less than AMD resourced the RV770, and likewise for Fermi vs. Cypress. But what we can't conclude from die-size and xtor-density comparisons is that one should be superior to the other in those metrics, without access to the budgetary information that factored into the project-management decisions and tradeoff downselection.
This is no different than comparing, say, AMD's PhII X4 versus the nearly identical-in-die-size Bloomfield. You could argue that Bloomfield shows AMD should/could have implemented PhII X4 as a smaller die, or should/could have made PhII X4's performance higher (given that Intel did)... or you could argue that AMD managed to deliver 90% of the performance while spending only 25% of the coin.
It's all how you want to evaluate the metrics of success in terms of entitlement or R&D efficiency (spend 25% the budget and you aren't entitled to expect your engineers to deliver 100% the performance, 90% the performance is pretty damn good).
So we will never know how much of GT200's die size is attributable to GPGPU constraints, versus simply being the result of timeline and budgetary tradeoffs made at NV's project-management level, versus how similar tradeoffs were made at AMD's.
that's why he's called "wreckage"

Originally posted by: SlowSpyder
Thanks again for going out of your way to wreck a thread.
I can think of support issues being a real pita. Especially trying to fix an issue with a competitor's card that doesn't support PhysX.

Originally posted by: Vertibird
If PhysX is so great why don't they let customers mix cards on mobos? (ie, we buy ATI cards for the superior gaming value, but then mix it with a Nvidia card for PhysX)

Originally posted by: Wreckage
Not really. NVIDIA was faster in games than ATI; in fact the GTX 295 is still the fastest card available. The HD4xxx series was always behind.

Originally posted by: SlowSpyder
AMD GPUs are smaller than Nvidia GPUs but performance is similar.
Are Nvidia GPUs really more aimed towards GPGPU?
The extra GPGPU capability is just icing on top of the gaming cake. Look how well Batman AA plays when PhysX is enabled.
With many games playable on even mid-range cards, you need to offer your customers something more. I think this is why NVIDIA outsells ATI 2 to 1.
Or maybe Nvidia doesn't want us to use Lucid hydra? But why would they care since they don't even want to make chipsets anymore?
The Nvidia card is just doing PhysX calculations, not rendering.

Originally posted by: Genx87
I can think of support issues being a real pita. Especially trying to fix an issue with a competitor's card that doesn't support PhysX.

Originally posted by: Vertibird
If PhysX is so great why don't they let customers mix cards on mobos? (ie, we buy ATI cards for the superior gaming value, but then mix it with a Nvidia card for PhysX)

Originally posted by: Wreckage
Not really. NVIDIA was faster in games than ATI; in fact the GTX 295 is still the fastest card available. The HD4xxx series was always behind.

Originally posted by: SlowSpyder
AMD GPUs are smaller than Nvidia GPUs but performance is similar.
Are Nvidia GPUs really more aimed towards GPGPU?
The extra GPGPU capability is just icing on top of the gaming cake. Look how well Batman AA plays when PhysX is enabled.
With many games playable on even mid-range cards, you need to offer your customers something more. I think this is why NVIDIA outsells ATI 2 to 1.
Or maybe Nvidia doesn't want us to use Lucid hydra? But why would they care since they don't even want to make chipsets anymore?
Theoretical performance has always favored AMD if you believe the marketing slides. However, there are plenty of examples where this theoretical performance still hasn't been realized - care to ask F@H 4xxx card owners? Or how about where high performance (read: speed) is realized but output was unacceptable (Google AVIVO Transcoder reviews)? Sadly, this transcoding library was what AMD supplied many third parties, so their "stream" features end up with similar results.

Originally posted by: SlowSpyder
Cliffs:
AMD GPUs are smaller than Nvidia GPUs but performance is similar.
Are Nvidia GPUs really more aimed towards GPGPU?
Filling the execution units of each to capacity is a challenge but looks to be more consistent on NVIDIA hardware, while in the cases where AMD hardware is used effectively (like Bioshock) we see that RV770 surpasses GTX 280 in not only performance but power efficiency as well. Area efficiency is completely owned by AMD, which means that their cost for performance delivered is lower than NVIDIA's (in terms of manufacturing -- R&D is a whole other story) since smaller ICs mean cheaper to produce parts.
How is this article irrelevant? It's not entirely so. One has to understand what has happened and what is happening before attempting to predict what will happen.

While shader/kernel length isn't as important on GT200 (except that the ratio of FP and especially multiply-add operations to other code needs to be high to extract high levels of performance), longer programs are easier for AMD's compiler to extract ILP from. Both RV770 and GT200 must balance thread issue with resource usage, but RV770 can leverage higher performance in situations where ILP can be extracted from shader/kernel code, which could also help in situations where GT200 would not be able to hide latency well.
We believe based on information found on the CUDA forums and from some of our readers that G80's SPs have about a 22 stage pipeline and that GT200 is also likely deeply piped, and while AMD has told us that their pipeline is significantly shorter than this they wouldn't tell us how long it actually is. Regardless, a shorter pipeline and the ability to execute one wavefront over multiple scheduling cycles means massive amounts of TLP isn't needed just to cover instruction latency. Yes massive amounts of TLP are needed to cover memory latency, but shader programs with lots of internal compute can also help to do this on RV770.
All of this adds up to the fact that, despite the advent of DX10 and the fact that both of these architectures are very good at executing large numbers of independent threads very quickly, getting the most out of GT200 and RV770 requires vastly different approaches in some cases. Long shaders can benefit RV770 due to increased ILP that can be extracted, while the increased resource use of long shaders may mean less threads can be issued on GT200 causing lowered performance. Of course going the other direction would have the opposite effect. Caches and resource availability/management are different, meaning that tradeoffs and choices must be made in when and how data is fetched and used. Fixed function resources are different and optimization of the usage of things like texture filters and the impact of the different setup engines can have a large (and differing with architecture) impact on performance.
We still haven't gotten to the point where we can write simple shader code that just does what we want it to do and expect it to perform perfectly everywhere. Right now it seems like typical usage models favor GT200, while relative performance can vary wildly on RV770 depending on how well the code fits the hardware. G80 (and thus NVIDIA's architecture) did have a lead in the industry for months before R600 hit the scene, and it wasn't until RV670 that AMD had a real competitor in the market place. This could be part of the reason we are seeing fewer titles benefiting from the massive amount of compute available on AMD hardware. But with this launch, AMD has solidified their place in the market (as we will see the 4800 series offers a lot of value), and it will be very interesting to see what happens going forward.
I'm not arguing on the relevance of the article at the time it was written, but even then, the author concedes that 4xxx hardware varies wildly in real world applications. I'm just pointing out that there hasn't been a flurry of situations since then that have contradicted that point.

Originally posted by: cusideabelincoln
Right now it seems like typical usage models favor GT200, while relative performance can vary wildly on RV770 depending on how well the code fits the hardware. G80 (and thus NVIDIA's architecture) did have a lead in the industry for months before R600 hit the scene, and it wasn't until RV670 that AMD had a real competitor in the market place. This could be part of the reason we are seeing fewer titles benefiting from the massive amount of compute available on AMD hardware. But with this launch, AMD has solidified their place in the market (as we will see the 4800 series offers a lot of value), and it will be very interesting to see what happens going forward.
He isn't a focus group member. I often believe that he is part of an AMD campaign for reverse viral marketing though. Makes more sense to me as he consistently goes overboard in an obvious fashion. So, the more you "dislike" him and his posts, the more you may associate Nvidia with him. Might be reverse psychological warfare.

Originally posted by: Astrallite
Is Wreckage an Nvidia Focus Group member or is he just happy to be here?
Just like what you're doing right now, RIGHT!?

Originally posted by: Keysplayr
He isn't a focus group member. I often believe that he is part of an AMD campaign for reverse viral marketing though. Makes more sense to me as he consistently goes overboard in an obvious fashion. So, the more you "dislike" him and his posts, the more you may associate Nvidia with him. Might be reverse psychological warfare.

Originally posted by: Astrallite

Is Wreckage an Nvidia Focus Group member or is he just happy to be here?
What am I doing now?

Originally posted by: cusideabelincoln
Just like what you're doing right now, RIGHT!?

Originally posted by: Keysplayr
He isn't a focus group member. I often believe that he is part of an AMD campaign for reverse viral marketing though. Makes more sense to me as he consistently goes overboard in an obvious fashion. So, the more you "dislike" him and his posts, the more you may associate Nvidia with him. Might be reverse psychological warfare.

Originally posted by: Astrallite

Is Wreckage an Nvidia Focus Group member or is he just happy to be here?
Seconded. Ignore from this point on?

Originally posted by: HurleyBird
Rule 1:
Don't feed the troll.
No matter how stupid his comment, or how brilliant your response to him, Wreckage wins the moment you decide to reply to one of his inflammatory posts.
Ignoring a troll has never caused a thread to derail, guys.
Except what could a CPU calculate that would cause a rendering issue? So it really isn't the same now, is it?

Originally posted by: SSChevy2001
The Nvidia card is just doing PhysX calculations, not rendering.

Originally posted by: Genx87
I can think of support issues being a real pita. Especially trying to fix an issue with a competitor's card that doesn't support PhysX.

Originally posted by: Vertibird
If PhysX is so great why don't they let customers mix cards on mobos? (ie, we buy ATI cards for the superior gaming value, but then mix it with a Nvidia card for PhysX)

Originally posted by: Wreckage
Not really. NVIDIA was faster in games than ATI; in fact the GTX 295 is still the fastest card available. The HD4xxx series was always behind.

Originally posted by: SlowSpyder
AMD GPUs are smaller than Nvidia GPUs but performance is similar.
Are Nvidia GPUs really more aimed towards GPGPU?
The extra GPGPU capability is just icing on top of the gaming cake. Look how well Batman AA plays when PhysX is enabled.
With many games playable on even mid-range cards, you need to offer your customers something more. I think this is why NVIDIA outsells ATI 2 to 1.
Or maybe Nvidia doesn't want us to use Lucid hydra? But why would they care since they don't even want to make chipsets anymore?
That's like AMD saying well we don't support Nvidia GPUs with our CPUs, because it might cause problems.
Funny how it works just fine once the limitations are removed by a patch.
http://www.youtube.com/watch?v=Fgp1mYRYLS0