
Nvidia's Future GTX 580 Graphics Card Gets Pictured (Rumours)

"Programmable shaders" refers to the GeForce 3 (and, if you like, the register combiners in the GeForce 256/2).
Those were programmable, but NOT in the GPGPU sense. You could NOT do Cuda or PhysX on them. That wasn't until many years later.
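To make that distinction concrete, here is a minimal, purely illustrative Cuda sketch (assuming any Cuda-capable GPU with global atomics, i.e. compute capability 1.1 or later, and not taken from any real SDK sample): a kernel that scatter-writes to arbitrary, data-dependent addresses to build a histogram. A GeForce 3-era pixel shader could only write its own output pixel, so a program like this simply could not be expressed on it.

#include <cuda_runtime.h>
#include <stdio.h>

// Illustrative only: build a histogram with scatter writes to
// data-dependent addresses, something a fixed pixel pipeline cannot do.
__global__ void histogram(const unsigned char *data, int n, unsigned int *bins)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        atomicAdd(&bins[data[i]], 1u);  // global atomics need compute capability 1.1+
}

int main(void)
{
    const int n = 1 << 20;
    unsigned char *d_data;
    unsigned int *d_bins;
    cudaMalloc((void **)&d_data, n);
    cudaMalloc((void **)&d_bins, 256 * sizeof(unsigned int));
    cudaMemset(d_data, 0, n);
    cudaMemset(d_bins, 0, 256 * sizeof(unsigned int));

    histogram<<<(n + 255) / 256, 256>>>(d_data, n, d_bins);
    cudaDeviceSynchronize();

    unsigned int bin0;
    cudaMemcpy(&bin0, d_bins, sizeof(bin0), cudaMemcpyDeviceToHost);
    printf("bin[0] = %u\n", bin0);  // all 2^20 zeroed samples land in bin 0

    cudaFree(d_data);
    cudaFree(d_bins);
    return 0;
}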
But I already said that earlier.
I also already explained why PhysX and Cuda are not the same thing.
Stop trolling.

If you did I missed it. PhysX could work on CPUs and other things (even ATI GPUs probably) so I'll give you that it's not necessarily in the same credit line as CUDA. Stop calling other people trolls just because of your occasional ambiguity.
 
I have no idea what you're getting at.

The CUDA parts are what make their cards so big and inefficient in a die size/perf, heat/perf, and temps/perf sense. I say it's one of their biggest achievements because it really is, but it's in the wrong place because they put it in a gaming card.
 
DirectX 11 was shown about the same time as CUDA, and was likely in development for a while.

DirectX 10 was earlier than CUDA and probably provided the impetus for unified shaders to evolve into fully programmable GPGPU shaders.

Also, DX10 was going to have tessellation but was changed. Seeing as ATI had DX10 tessellators, but Nvidia didn't, it seems somewhat obvious who decided not to advance that.

Also, older nvidia chips only support older cuda revisions.
 
If you did I missed it.

It was in response to you, and you even responded to that post:
http://forums.anandtech.com/showpost.php?p=30726651&postcount=459
So I don't think you missed it.

PhysX could work on CPUs and other things (even ATI GPUs probably) so I'll give you that it's not necessarily in the same credit line as CUDA.

That's the thing with software. In theory most things *could* work. But if they don't work in practice, what is that worth, really?
I mean, even if Bullet or some other physics API would work on AMD's hardware through OpenCL, then that still is not really AMD's achievement. Neither Bullet nor OpenCL are AMD's technology.
Cuda on the other hand is. And while nVidia acquired PhysX, it is none other than nVidia that ported it to their GPU architecture.
So both Cuda and GPU PhysX are developed by nVidia. Innovated, if you will.

Stop calling other people trolls just because of your occasional ambiguity.

No, I'm calling you a troll because you basically bring up the same issues twice. First you agreed with my explanation, by saying 'fair enough', then you bring it back up again, a few posts later. Hence: troll.


Personal attacks are not acceptable.

Re: "I'm calling you a troll"

Labeling a poster anything that is negative is not acceptable. You are allowed to ask if a particular post was intended to be interpreted as a trolling post (affords the original poster the opportunity to clarify on an otherwise misinterpreted post) but we do not allow folks to label the poster themselves as a troll, nor do we allow the labeling of posts as "troll posts".

Moderator Idontcare
 
Last edited by a moderator:
The CUDA parts are what make their cards so big and inefficient in a die size/perf, heat/perf, and temps/perf sense. I say it's one of their biggest achievements because it really is, but it's in the wrong place because they put it in a gaming card.

I think that's a load of nonsense.
nVidia completely dominated the DX10 market for years, with their Cuda-enabled GPUs.
Just because they aren't as competitive at the moment doesn't mean that you can just blame it all on Cuda. That's too easy.
Also it's a HUGE understatement if I say I disagree that Cuda is in the wrong place on a gaming card. But I won't use the actual words that I'm thinking of...
 
It was in response to you, and you even responded to that post:
http://forums.anandtech.com/showpost.php?p=30726651&postcount=459
So I don't think you missed it.

That's the thing with software. In theory most things *could* work. But if they don't work in practice, what is that worth, really?
I mean, even if Bullet or some other physics API would work on AMD's hardware through OpenCL, then that still is not really AMD's achievement. Neither Bullet nor OpenCL are AMD's technology.
Cuda on the other hand is. And while nVidia acquired PhysX, it is none other than nVidia that ported it to their GPU architecture.
So both Cuda and GPU PhysX are developed by nVidia. Innovated, if you will.

No, I'm calling you a troll because you basically bring up the same issues twice. First you agreed with my explanation, by saying 'fair enough', then you bring it back up again, a few posts later. Hence: troll.

I was saying fair enough to the first point. As for the second point, even you said it was debatable, re: PhysX vs. CUDA and whether that merited two credits instead of one. So I said fair enough because you admitted it was debatable (and to the third point, about evolutionary advances). So--sorry for *my* ambiguity when I said "fair enough."

As for triple counting, I said other people might have been upset at what they see as triple counting. As for me, I accept your first point but even so I'm still unconvinced CUDA and PhysX merit 2 credits, maybe 1.5. 😉
 
Last edited:
GPU PhysX is in no way an innovation, though this is probably a difference in our definitions. For me, innovation would be something completely new: AMD or Nvidia using optical computing, complete x86 emulation at reasonable speeds, modular GPUs, etc. Anything else, Eyefinity, CUDA, PhysX, is just an evolution to me. Nothing wrong with evolution, as most innovation requires lengthy evolution to be consumer-ready.

Why are we even discussing which companies innovated? The 580 and 6970 look to be good competitors so we can see what adjustments have been made under the hood.

Also, isn't calling someone a troll, trolling? Just warn him and move on.

Finally, what adjustments have been made to the actual shaders/core of the 580? Just faster clocks, slightly more shaders, better thermals?

Edit: G80 doesn't have as many CUDA features or as much performance as newer GPUs, which have far more GPGPU elements. The new GPGPU elements have supposedly increased die size and complexity, and decreased performance per transistor.
 
Last edited:
DirectX 11 was shown about the same time as CUDA, and was likely in development for a while.

No, Cuda was shown way before that.
DirectX 11 wasn't finalized until Cuda was long on the market.

DirectX 10 was earlier than CUDA and probably provided the impetus for unified shaders to evolve into fully programmable GPGPU shaders.

I don't think you understand how DirectX works:
Microsoft and the IHVs get together to decide on the standard.
nVidia probably had a huge hand in the DX10 standard. The DX10 standard has G80 written all over it (except for GPGPU, probably because AMD couldn't provide it in that timeframe, and voted the technology out).
It was the other way around with DX9, that one has ATi's R300 written all over it.

Also, DX10 was going to have tessellation but was changed. Seeing as ATI had DX10 tessellators, but Nvidia didn't, it seems somewhat obvious who decided not to advance that.

I wonder how much of that is just nomenclature. They could just have called the geometry shader the tessellator unit. After all, it can perform tessellation.
I wonder why they chose not to name it that, and then introduce that name in DX11.
Probably because they knew it wasn't going to be any good at tessellation... and as I said before, AMD fell for the same trap in DX11: triangle throughput.

Also, older nvidia chips only support older cuda revisions.

So? Current OpenCL and DirectCompute standards are barely on par with the first Cuda revision. The latest version of Cuda can do a LOT more.
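For what it's worth, here is a minimal, hypothetical sketch (assuming a Fermi-class card, compute capability 2.0) of two conveniences Cuda C offered at the time that plain DirectCompute/OpenCL kernels did not: C++ templates and device-side printf. It is only meant to illustrate the feature gap, not any real workload.

#include <cuda_runtime.h>
#include <cstdio>

// Templated kernel: the same source works for float, double, int, ...
template <typename T>
__global__ void scale(T *data, T factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        data[i] *= factor;
        if (i == 0)
            printf("first element is now %f\n", (double)data[i]);  // device-side printf: cc 2.0+
    }
}

int main(void)
{
    const int n = 256;
    float *d;
    cudaMalloc((void **)&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));
    scale<<<1, n>>>(d, 2.0f, n);   // instantiates scale<float>
    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}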
 
Cuda was shown a year before, my mistake. Also, DX10 had a lot of ATI in it because ATI had already designed the first pseudo-DX10 card for Microsoft. They probably had more pull because of that.

Nice job not actually commenting on how nvidia had no such tessellator. Your wonderings are interesting, but just that, wonderings.

Cuda is useful for some people, but not as much for gamers.

Let's get this back on topic. What improvements does the 580 have, or what is it rumored to have?
 
As for triple counting, I said other people might have been upset at what they see as triple counting.

But you said the exact same thing in the post that I was replying to:
"Numbers 2, 4, and 5 are all aspects of the same thing."
I see triple counting there.

As for me, I accept your first point but even so I'm still unconvinced CUDA and PhysX merit 2 credits, maybe 1.5. 😉

As I say, it's debatable.
It'd be a ridiculous debate from where I'm standing, but still...
I mean, that's like saying that internet search engines weren't innovative because the internet already existed.
You have hardware technology, and then you have software that builds on that, enabling completely new things. The way you go on, no software could ever be innovative, because it always requires hardware to run on.
Clearly I hope you can see how ridiculous that notion is. Or at least, you can see how I would find that ridiculous, being a software engineer myself?
 
I think that's a load of nonsense.
nVidia completely dominated the DX10 market for years, with their Cuda-enabled GPUs.
Just because they aren't as competitive at the moment doesn't mean that you can just blame it all on Cuda. That's too easy.
Also it's a HUGE understatement if I say I disagree that Cuda is in the wrong place on a gaming card. But I won't use the actual words that I'm thinking of...

You're calling other people trolls? Anyway, woo hoo, Nvidia did dominate the DX10 market, and some spots before that. AMD dominated, and is still dominating, the DX11 market. Why, you might ask? GF100 is a whole lot of CUDA, beefy tessellators, and such. That made it a massive chip. Larger chips = higher production costs, lower yields, and more problems, and a 6-month delay really showed the problems. Why was GF104 so successful? Because it cut down on those things and was more of a gaming card. Much more. Nvidia took the best of the workstation world and the best of the gaming world and made, well, a hot mess. If you disagree, grab a seat, here's a cookie, but lose the green shades. I think it was pretty obvious GF100 and Fermi weren't successful by almost any standard. I indirectly blame CUDA, or Nvidia going for another big-die home run every time. It worked with G80, but it sure won't work every time.
 
But you said the exact same thing in the post that I was replying to:
"Numbers 2, 4, and 5 are all aspects of the same thing."
I see triple counting there.

Yeah but you then explained what you meant about 2 vs. 4, which I agree with.

But I don't know if other people agree with it--I can't speak for others--I also saw other people pointing out your multiple-counting. So I said that maybe other people don't like your 5-credit menu because they see some redundancies there like I had.

P.S. We are actually a lot closer to full agreement than disagreement, 4.5+ credits vs. 5.0 credits. (I will have to think about it some more, hence the "+" after 4.5.)

And that no matter how you slice it, NV has historically been the more innovative company. Which was what got us down this fork in the road in the first place--someone talking about relative innovation rates or something.
 
Last edited:
Nice job not actually commenting on how nvidia had no such tessellator.

They had tessellation support in the GeForce3, DX8's RT-patches.
And obviously they supported the geometry shader.
Given the relevance (or lack thereof) of ATi's geometry shader, I don't see why it's even worth mentioning.

Cuda is useful for some people, but not as much for gamers.

I would disagree... More and more games use PhysX and DirectCompute, both enabled by Cuda. Anyone who cannot see this is just... well, nevermind.
 
Yeah but you then explained what you meant about 2 vs. 4, which I agree with.

You can't speak for others, perhaps others agreed with it as well (nobody brought the issue up again, anyway).

But I don't know if other people agree with it--I can't speak for others--I also saw other people pointing out your multiple-counting.

You only have to mention it once. I think it's pretty obvious that if you bring such an objection forward, that others may have that same objection.

So I said that maybe other people don't like your 5-credit menu because they see some redundancies there like I had.

Well, I already explained that you only see redundancies if you don't have a clue.
 
13 games use GPU PhysX, and most barely use it (none use physics as well as Crysis or Red Faction). DirectCompute, while based on CUDA, is not CUDA. So CUDA is not really useful for me at all. If GPU PhysX appears in Crysis 2 and really contributes to gameplay, then CUDA will have some use.

Can we go back to the 580 discussion? You keep ignoring that part of my posts. Also, you have been quite impolite with most of the other forum posters, and I think that if you argued a bit more politely, you would have a lot more success in having good debates.

For the third time, what improvements have been made to the 580?
 
Why, you might ask? GF100 is a whole lot of CUDA, beefy tessellators, and such. That made it a massive chip. Larger chips = higher production costs, lower yields, and more problems, and a 6-month delay really showed the problems. Why was GF104 so successful? Because it cut down on those things and was more of a gaming card. Much more. Nvidia took the best of the workstation world and the best of the gaming world and made, well, a hot mess. If you disagree, grab a seat, here's a cookie, but lose the green shades. I think it was pretty obvious GF100 and Fermi weren't successful by almost any standard. I indirectly blame CUDA, or Nvidia going for another big-die home run every time. It worked with G80, but it sure won't work every time.

As I say, that's too easy.
AMD's Radeon 2900 suffered from pretty much the exact same problems as GF100. Too big, too hot, not enough performance etc.
Thing is, the Radeon had nothing even remotely resembling Cuda. It was designed as a straight up graphics card.
It just wasn't a very successful design.
nVidia has made successful GPU designs with Cuda in the past, and I'm certain that they will be making successful GPU designs with Cuda in the future.

Also, GF104 didn't cut out all that much, it still supports a whole lot more GPGPU/Cuda and tessellation than any of AMD's offerings. So your example is a tad flawed there.
 
We don't know officially what Nvidia improved on the GTX 580; they are claiming more tessellation performance and the fastest DX11 GPU on the planet, and AMD is waving the white flag on tessellation, something about coding to a lower denominator, lol
 
We don't know officially what Nvidia improved on the GTX 580; they are claiming more tessellation performance and the fastest DX11 GPU on the planet, and AMD is waving the white flag on tessellation, something about coding to a lower denominator, lol

I suspect better tessellation is only due to higher clocks. In other words, I thought the GTX580 was doing better in Heaven than the GTX480 due to removing bottlenecks elsewhere (e.g., more TMUs), rather than revamping their Polymorph setup? To me that sounds more likely--that NV would spend its efforts on removing bottlenecks rather than improving its already sizeable tessellation lead.
 
So, I'm curious now. What's your AMD/ATi list look like?

Well, I think we can give them tessellation.
I think they were also the first with hierarchical z/stencil buffering.
But some things are a bit difficult to say...
While clearly ATi was first with SM2.0, and probably was a large factor in the development of this standard... is that really an innovation, or just building on the groundwork of programmable shaders that nVidia laid? Because if we count such technologies, then I think I can make nVidia's list a whole lot longer as well.
Same with 3Dc... aside from it having disappeared into obscurity, it was S3 that first introduced texture compression, and ATi's 3Dc is just a simple variation on the same theme.
I can think of other things, such as the first decent MSAA/AF implementation... geometry instancing... FP texture filtering etc.
But are they really innovations, or just improvements of concepts that already existed on other hardware, or already standardized before?

The things I listed for nVidia are strictly nVidia's own technology. And aside from PhysX, they were all merged into the DirectX standard at a later stage, not at the moment that nVidia first offered the technology in their own proprietary form.
 
I suspect better tessellation is only due to higher clocks. In other words, I thought the GTX580 was doing better in Heaven than the GTX480 due to removing bottlenecks elsewhere (e.g., more TMUs), rather than revamping their Polymorph setup? To me that sounds more likely--that NV would spend its efforts on removing bottlenecks rather than improving its already sizeable tessellation lead.

Well, another factor is that the PolyMorph units were designed with 16 triangle units, but only 15 have been enabled in GTX480.
So they could get a tessellation boost just by having a fully enabled 16-triangle PolyMorph setup.
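(Rough arithmetic, assuming everything else stays equal: going from 15 to 16 enabled units is at most a 16/15 ≈ 6.7% gain in peak triangle throughput at the same clock, so any larger tessellation jump would have to come from higher clocks or from relieving bottlenecks elsewhere.)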
 
Pretty sure 6870 is faster than 460 in Unigine and such. Though Fermi has a lot of clock scaling, so I am excited for the 7XXX versus Kepler or Maxwell (whichever one is first).

Can we go back to the main thread?
 
Well, I think we can give them tessellation.
I think they were also the first with hierarchical z/stencil buffering.
But some things are a bit difficult to say...
While clearly ATi was first with SM2.0, and probably was a large factor in the development of this standard... is that really an innovation, or just building on the groundwork of programmable shaders that nVidia laid? Because if we count such technologies, then I think I can make nVidia's list a whole lot longer as well.
Same with 3Dc... aside from it having disappeared into obscurity, it was S3 that first introduced texture compression, and ATi's 3Dc is just a simple variation on the same theme.
I can think of other things, such as the first decent MSAA/AF implementation... geometry instancing... FP texture filtering etc.
But are they really innovations, or just improvements of concepts that already existed on other hardware, or already standardized before?

The things I listed for nVidia are strictly nVidia's own technology. And aside from PhysX, they were all merged into the DirectX standard at a later stage, not at the moment that nVidia first offered the technology in their own proprietary form.

Thanks, I need to go eat now, but I just thought of something else to add to your list of five: first full implementation of 3D in games.
 
As I say, that's too easy.
AMD's Radeon 2900 suffered from pretty much the exact same problems as GF100. Too big, too hot, not enough performance etc.
Thing is, the Radeon had nothing even remotely resembling Cuda. It was designed as a straight up graphics card.
It just wasn't a very successful design.
nVidia has made successful GPU designs with Cuda in the past, and I'm certain that they will be making successful GPU designs with Cuda in the future.

Also, GF104 didn't cut out all that much, it still supports a whole lot more GPGPU/Cuda and tessellation than any of AMD's offerings. So your example is a tad flawed there.

I'm sorry for taking the easy route out of things? I don't understand what you mean when you say it's too easy. And you're right, Nvidia did keep a lot of CUDA parts in GF104... it's also the size of a 5870 :hmm: You honestly think CUDA adds nothing to size at all? You honestly think Nvidia makes big cards for kicks, giggles, and wasted space? And I'd say AMD doesn't do much APP in consumer cards, as they have workstation cards for that. Nvidia is going to ruin both their markets if they keep this up, IMO. Why buy a Quadro FX 1700 over a 480 if you want workstation stuff and the 480 has CUDA? I've seen those CS5 comparison charts with CUDA, fantastic stuff. Then on the flip side, if you don't want CUDA, why buy a 480 over a 5870? I personally just think they are trying to do too much in one card. AMD cards might be one-trick ponies, but I'd rather have a good one-trick pony that does what I want than a three-trick pony that does two things I don't want and the one thing worse. Just my thoughts.
 