Nvidia GPUs soon a fading memory?

Status
Not open for further replies.

taltamir

Lifer
Mar 21, 2004
13,576
6
76
I disagree that AMD is out of the race; my point was that they are behind, but not out of the race.
I disagree with those who say AMD's GPGPU effort is completely on par with the competition and not behind, and I disagree with those who say AMD's GPGPU effort is out of the race and the company is DOOMED (TM).
It is behind, but it can catch up if it puts its mind to it.

As for CUDA... I would say a large portion of Fermi sales are due to HPC, and they should be, as it is a card that sacrifices gaming performance (or in this case efficiency, and as a result noise) for better HPC performance. I don't know the exact figures, though, and don't claim to.

I don't see any science projects wanting anything to do with AMD GPUs as they are right now (if AMD improves them, it could get into that race)... Intel/AMD CPUs I can see being used for specific types of computation, but both are outclassed by nVidia GPUs for certain (many) types of computation.

EDIT: It dawned on me that I should clarify the definition of "out of the race"...
Is AMD "out of the race" in that it cannot catch up to nVidia? No, not at all.
Is AMD "out of the race" in that its current offerings are completely outclassed and nobody in their right mind would choose it for a new GPGPU project? Yes.

Right now there is absolutely no reason for anyone to use an AMD GPU for GPGPU. If you need GPGPU, you use nVidia (the exception being Folding@home, but that is a completely different animal).
AMD is choosing to sit on the bench for this one; it can still join in if it so chooses.
 
Last edited:

GaiaHunter

Diamond Member
Jul 13, 2008
3,697
397
126
I disagree that AMD is out of the race; my point was that they are behind, but not out of the race.
I disagree with those who say AMD's GPGPU effort is completely on par with the competition and not behind, and I disagree with those who say AMD's GPGPU effort is out of the race and the company is DOOMED (TM).
It is behind, but it can catch up if it puts its mind to it.

So 100% agreement.

The only thing I wish to add is that, IMO, the race has barely started.

EDIT:

Right now there is absolutely no reason for anyone to use an AMD GPU for GPGPU. If you need GPGPU, you use nVidia (the exception being Folding@home, but that is a completely different animal).

I assume you are speaking for us everyday consumers?

The question I have is:

"What applications should I be using for my everyday usage that uses CUDA?"
 
Last edited:

taltamir

Lifer
Mar 21, 2004
13,576
6
76
So 100% agreement.

The only thing I wish to add is that, IMO, the race has barely started.

Good point. nVidia is currently trying to build a market where none existed; they started running before the umpire even gave the signal to start. AMD is doing stretches on the sidelines and waiting for the crowd to be seated first (wow, am I stretching those analogies :p)
 

Scali

Banned
Dec 3, 2004
2,495
0
0
AMD is doomed because of a number of reasons:
1) Their original Stream/CTM failed, so AMD never got a foot in the door with any developers on the first try, unlike Cuda.
2) AMD started over, focusing on OpenCL and DirectCompute this time around... but OpenCL and DirectCompute are not the same as Cuda. They are more like programmable shaders in graphics, where C for Cuda is a very natural extension of regular applications, and now with C++, it is even more powerful.
OpenCL and DirectCompute don't support objects and various other features of C++ for Cuda, making the learning curve from CPU to GPU programming considerably steeper.
3) AMD doesn't offer OpenCL to end-users.
4) AMD doesn't have a strong team of software developers like nVidia and Intel.

Especially 4) is really going to hurt. In theory AMD could recover... but that requires they start investing LOTS in their software department. They need more developers, and more importantly: more qualified developers.
Intel and nVidia have been developing compilers and Visual Studio addons for years (the new Nexus GPGPU stuff for Fermi in Visual Studio is another great example), and employ a number of 'rock star' developers in the industry. They know how to get things done, and get them done well.

Intel and nVidia are leaders. nVidia has led the industry with a lot of OpenGL extensions, Cg/HLSL, and then with Cuda and PhysX. Intel doesn't even need explanation.
AMD is not a leader, AMD is a follower.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,697
397
126
AMD is doomed because of a number of reasons:
1) Their original Stream/CTM failed, so AMD never got a foot in the door with any developers on the first try, unlike Cuda.
2) AMD started over, focusing on OpenCL and DirectCompute this time around... but OpenCL and DirectCompute are not the same as Cuda. They are more like programmable shaders in graphics, where C for Cuda is a very natural extension of regular applications, and now with C++, it is even more powerful.
OpenCL and DirectCompute don't support objects and various other features of C++ for Cuda, making the learning curve from CPU to GPU programming considerably steeper.
3) AMD doesn't offer OpenCL to end-users.
4) AMD doesn't have a strong team of software developers like nVidia and Intel.

Especially 4) is really going to hurt. In theory AMD could recover... but that requires they start investing LOTS in their software department. They need more developers, and more importantly: more qualified developers.
Intel and nVidia have been developing compilers and Visual Studio addons for years (the new Nexus GPGPU stuff for Fermi in Visual Studio is another great example), and employ a number of 'rock star' developers in the industry. They know how to get things done, and get them done well.

Intel and nVidia are leaders. nVidia has led the industry with a lot of OpenGL extensions, Cg/HLSL, and then with Cuda and PhysX. Intel doesn't even need explanation.
AMD is not a leader, AMD is a follower.

There we go again. Just one example: x86-64?
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
No actually... I bought an Athlon XP slightly before, and skipped the Athlon64/Pentium4 era altogether.



No, they are NOT doing anything. How many applications can you run on an AMD GPU? 0 (let's not count the AMD-sponsored Folding@Home client as an 'application').
For nVidia there are various GPU-accelerated video encoders, there's various Adobe products, there's PhysX, and soon we will also have Cuda-accelerated virus scanners.
Actual applications that people can actually use, that will actually improve their productivity or experience.
AMD has none of those.

I don't know about the big talk, but CyberLink MediaShow Espresso works like a champ with my card; it uses Stream to accelerate video on the GPU, though it only works on the HD 4600 series or higher. The same goes for the Avivo converter, and nVidia doesn't offer an in-house video converter AFAIK. Then there are the other applications that GaiaHunter posted.

Intel and nVidia are leaders. nVidia has led the industry with a lot of OpenGL extensions, Cg/HLSL, and then with Cuda and PhysX. Intel doesn't even need explanation.
AMD is not a leader, AMD is a follower.

AMD invented the 64-bit extensions of the x86 architecture that Intel now uses. AMD shipped the first x86 processor with an integrated memory controller. AMD had the first DX9 card, the first DX10.1 card, and the first DX11 card, and incorporated lots of stuff into DX11, like Gather4, tessellation, and 3Dc (BC5) and BC3.

What features did nVidia incorporate into the DX API? nVidia has always been a leader in OpenGL, period, thanks to their robust professional workstation market. AMD may not be the best performer currently, but it is 66 times smaller than Intel and competing with them while also competing with nVidia. I think it is doing a great job, especially in the video card market, where it is more competitive than in the CPU market.

PhysX is a gimmick which nVidia condemned to death by locking it out of the 99% of the DX11 market which AMD owns. CUDA is a nice piece of software that nVidia engineered to compete against Intel, since they don't have an x86 CPU. Don't glorify Intel too much; they want nVidia, your favorite brand, dead, and that's why nVidia is shut out of the Intel chipset platform.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
I don't know about the big talk, but CyberLink MediaShow Espresso works like a champ with my card; it uses Stream to accelerate video on the GPU, though it only works on the HD 4600 series or higher. The same goes for the Avivo converter, and nVidia doesn't offer an in-house video converter AFAIK. Then there are the other applications that GaiaHunter posted.

The Avivo converter doesn't even work on my 5000-series card!!!
That's how good AMD's software support is.

Don't you get it? I'm talking about SOFTWARE.
AMD can make hardware, but they cannot push it because they are unable to deliver good tools for developers.

DX depends on Microsoft's SDK. But with OpenGL/OpenCL it's a different story. nVidia just has a lot more to offer for developers.

PhysX is a gimmick which nVidia condemned to death by locking it out of the 99% of the DX11 market which AMD owns. CUDA is a nice piece of software that nVidia engineered to compete against Intel, since they don't have an x86 CPU. Don't glorify Intel too much; they want nVidia, your favorite brand, dead, and that's why nVidia is shut out of the Intel chipset platform.

nVidia my favourite brand? Heh, right.
That's why I bought a Radeon?
I don't have favourite brands. I only look at products, not brands. If everyone did that, we wouldn't have endless forum discussions with fanboys who can't get to grips with reality.
Yea, I have a Radeon, and yea, in some ways it REALLY sucks. That's reality.
But I got the Radeon because it sucked less for me at the time than getting an nVidia DX10 55 nm part. That's how it goes.
 
Last edited:

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
The Avivo converter doesn't even work on my 5000-series card!!!
That's how good AMD's software support is.

Don't you get it? I'm talking about SOFTWARE.
AMD can make hardware, but they cannot push it because they are unable to deliver good tools for developers.

DX depends on Microsoft's SDK. But with OpenGL/OpenCL it's a different story. nVidia just has a lot more to offer for developers.

In that case, then you should retract and state that AMD is a follower in terms of software. :hmm:
 

Scali

Banned
Dec 3, 2004
2,495
0
0
In that case, then you should retract and state that AMD is a follower in terms of software. :hmm:

I think the context of my post was clear enough. I specifically pointed out their weakness in the software department (points 1 through 4).
Aside from that, I consider AMD mostly a follower in the hardware department as well.
You guys can name the exceptions, but that doesn't stop them from being exceptions to the rule. But that's beside the point here.
ATi especially has been plagued by an inability to deliver the drivers.
I had a Radeon 8500, which was a great piece of hardware at the time, but it took years until the drivers were mature enough to actually get close to GeForce4 performance.
The Radeon 9700, as well, was a nice card, but Doom 3 quickly exposed its weak spot: the OpenGL drivers were absolutely horrible.
ATi 'solved' it by optimizing the drivers for the Doom 3 engine, but other OpenGL software still often ran into bugs or performance issues.
And that's just Windows... things get even more bleak if you start looking at OS X or Linux, for example.
 
Last edited:

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
The Radeon 9700, as well, was a nice card, but Doom 3 quickly exposed its weak spot: the OpenGL drivers were absolutely horrible.
ATi 'solved' it by optimizing the drivers for the Doom 3 engine, but other OpenGL software still often ran into bugs or performance issues.
And that's just Windows... things get even more bleak if you start looking at OS X or Linux, for example.

OpenGL performance had little to do with the Doom 3 issue. The main reason Doom 3 was slower was that it used a lookup table to calculate LOD. That optimization was great for nVidia hardware but not for ATi hardware; Doom 3 was developed on nVidia hardware. On ATi hardware it was faster to do the computation in shaders. The same optimization works with any Doom 3 engine (id Tech 4) based game. I don't see nVidia smoking ATi in OpenGL benchmarks; if it does, please enlighten us. On Linux, at least, I had some minor issues with an old X800XT locking up the screen, but in the end the card had bad VRAM. I moved to an HD 2600 PRO AGP, and Kubuntu 10.04 is working like a champ with no issues. I do condemn the lack of customization in the Linux CCC.

http://www.anandtech.com/show/1488/6
 
Last edited:

Scali

Banned
Dec 3, 2004
2,495
0
0
OpenGL performance had little to do with the Doom 3 issue. The main reason Doom 3 was slower was that it used a lookup table to calculate LOD. That optimization was great for nVidia hardware but not for ATi hardware; Doom 3 was developed on nVidia hardware. On ATi hardware it was faster to do the computation in shaders. The same optimization works with any Doom 3 engine (id Tech 4) based game. I don't see nVidia smoking ATi in OpenGL benchmarks; if it does, please enlighten us. On Linux, at least, I had some minor issues with an old X800XT locking up the screen, but in the end the card had bad VRAM. I moved to an HD 2600 PRO AGP, and Kubuntu 10.04 is working like a champ with no issues. I do condemn the lack of customization in the Linux CCC.

http://www.anandtech.com/show/1488/6

That's just ONE of the optimizations they've done (and it's not LOD, they do pow(16) with a LUT).
That's not what I'm talking about. If you look at the driver updates, you see quite significant improvements in Doom3, the introduction of shader replacements being but one of them.

On my Radeon 5770, my OpenGL BHM sample application runs at 1000 fps in Ubuntu 10.04, and at about 7500 fps in Windows XP. Not an issue?
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
There we go again. Just one example: x86-64?


That was an easy job, just like when Intel went from x86-16 to x86-32 (286 -> 386) and left us stuck with x86 for years to come, which IMHO isn't a big thing or a good thing; there's too much "baggage" in x86 today.

I would rather have seen, e.g., IA-64 replace x86-32.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
That was an easy job, just like when Intel went from x86-16 to x86-32 (286 -> 386) and left us stuck with x86 for years to come, which IMHO isn't a big thing or a good thing; there's too much "baggage" in x86 today.

I would rather have seen, e.g., IA-64 replace x86-32.

Or ARM, or RISC stuff like PowerPC, SPARC, etc.

Just saw this piece of news:
http://www.brightsideofnews.com/news/2010/5/18/nvidia-bags-ibm-to-ship-tesla-for-datacenters.aspx
IBM building datacenter servers with nVidia Tesla cards.
Probably not entirely a coincidence that this is again nVidia's Cuda technology being adopted on a large scale, and not AMD.

Funnily enough, most supercomputers have AMD Opterons, not Teslas. The server market belongs to AMD and Intel. Tesla has a future, but it also has a lot of limitations, like a lack of flexibility that makes it unsuitable for general-purpose code; I don't think it can run an operating system, can it? But it's a step in the right direction.
 
Last edited:

Scali

Banned
Dec 3, 2004
2,495
0
0
Funnily enough, most supercomputers have AMD Opterons, not Teslas.

No shit, with Tesla only existing for about 3 years. CPUs are the traditional parts in super computers. Doesn't make other types of processors any less valid (how about the clusters people built from Mac G4s or PS3s for example?).
People are just starting to discover the possibilities of GPGPU. Point I'm making is that it's Tesla and Cuda technology that drives GPGPU, not AMD/Stream.
AMD is on its way out of the HPC and server market... with companies like Oracle/Sun announcing that they are dropping AMD processors.
It's Intel with Nehalem on one side, and nVidia with Tesla on the other.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
No shit, with Tesla only existing for about 3 years. CPUs are the traditional parts in super computers. Doesn't make other types of processors any less valid (how about the clusters people built from Mac G4s or PS3s for example?).
People are just starting to discover the possibilities of GPGPU. Point I'm making is that it's Tesla and Cuda technology that drives GPGPU, not AMD/Stream.
AMD is on its way out of the HPC and server market... with companies like Oracle/Sun announcing that they are dropping AMD processors.
It's Intel with Nehalem on one side, and nVidia with Tesla on the other.

Doubtful. AMD is far more competitive in the server market than it is in the workstation market. Their multi-CPU platform works like a champ and costs much less than comparable Nehalem setups, plus their low-power line is a bonus. Oracle bought Sun, and Oracle owns the SPARC architecture, so it doesn't make sense to have two different architectures for the same market, especially when the x86 server market is so crowded. Close but no cigar...

PS3 clusters were only good for certain parallel tasks that don't require branchy code or thread awareness/cache coherency. Mac G4? LOL, it's been a long time since I've seen those, plus I don't think IBM is pushing PowerPC as far as they used to.
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
What about China's cluster of 4870 X2s?

Isn't it harder to use PS3s in a cluster now that you can't run Linux on them?
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Doubtful. AMD is far more competitive in the server market than it is in the workstation market. Their multi-CPU platform works like a champ and costs much less than comparable Nehalem setups, plus their low-power line is a bonus.

Not at all...
A big strength of Nehalem is that a single CPU delivers a lot of performance.
The result is that a 2P Nehalem system can rival the performance of a 4P Opteron system.
And 4P systems are far more expensive.
Intel also comes out on top with power consumption this way.

Oracle bought Sun, and Oracle owns the SPARC architecture, so it doesn't make sense to have two different architectures for the same market, especially when the x86 server market is so crowded. Close but no cigar...

Except the SPARC architecture is in a higher price segment than x86 servers. Oracle continues with x86 servers, but only based on Intel CPUs:
http://ideasint.blogs.com/ideasinsi...ntentions-for-the-sun-hardware-portfolio.html
"The company will bring to market new Sun x86 servers using the Intel processor architecture, and has no plans to develop any new servers with AMD processors, including Magny Cours. In fact Mr Sigler said that the company is in the process of EOL’ing the current family of AMD x86 servers."
Close but no cigar indeed.
 
Last edited:

Martimus

Diamond Member
Apr 24, 2007
4,490
157
106
Why don't you just put Scali on your ignore list and be done with it? Then maybe this thread can finally die and be put out of its misery.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
AMD is doomed because of a number of reasons:
1) Their original Stream/CTM failed, so AMD never got a foot in the door with any developers on the first try, unlike Cuda.
2) AMD started over, focusing on OpenCL and DirectCompute this time around... but OpenCL and DirectCompute are not the same as Cuda. They are more like programmable shaders in graphics, where C for Cuda is a very natural extension of regular applications, and now with C++, it is even more powerful.
OpenCL and DirectCompute don't support objects and various other features of C++ for Cuda, making the learning curve from CPU to GPU programming considerably steeper.
3) AMD doesn't offer OpenCL to end-users.
4) AMD doesn't have a strong team of software developers like nVidia and Intel.

Especially 4) is really going to hurt. In theory AMD could recover... but that requires they start investing LOTS in their software department. They need more developers, and more importantly: more qualified developers.
Intel and nVidia have been developing compilers and Visual Studio addons for years (the new Nexus GPGPU stuff for Fermi in Visual Studio is another great example), and employ a number of 'rock star' developers in the industry. They know how to get things done, and get them done well.

Intel and nVidia are leaders. nVidia has led the industry with a lot of OpenGL extensions, Cg/HLSL, and then with Cuda and PhysX. Intel doesn't even need explanation.
AMD is not a leader, AMD is a follower.

You make some very legitimate points. I think you are right. The chances of AMD pulling out of this in the long term are low, at least as far as CPUs and GPGPU go... they could continue to lead in gaming, but how much room is left for improvement? Gaming is already incredibly lifelike with existing tech; they need gimmicks like 3D and multiple monitors to keep pushing hardware.
They are far behind Intel on CPUs and behind nVidia on GPGPU... and they only seem to fall further behind as time goes by, not catch up.
 
Last edited:

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
You make some very legitimate points. I think you are right. The chances of AMD pulling out of this in the long term are low, at least as far as CPUs and GPGPU go... they could continue to lead in gaming, but how much room is left for improvement? Gaming is already incredibly lifelike with existing tech; they need gimmicks like 3D and multiple monitors to keep pushing hardware.
They are far behind Intel on CPUs and behind nVidia on GPGPU... and they only seem to fall further behind as time goes by, not catch up.

I doubt Intel would like to see AMD go away in the x86 market. A competitor 66 times smaller yet quite competitive is good for Intel; I don't think Intel wants to be split in two by the government over antitrust. GPGPU might come to an end at AMD if they keep the same architecture, but since it will be changed, we shall see how Northern Islands fares. AMD isn't putting a lot of emphasis on GPGPU alone because, unlike nVidia, AMD has a CPU which can be used in conjunction with their GPU for much better flexibility and scalability: the CPU handling general-purpose code with GPU assistance, and the GPU crunching massively parallel code.

In the server market, AMD is very competitive with Intel. Magny-Cours can keep up with Nehalem-EX, especially in heavy multithreading; in other scenarios it isn't far behind, but it is cheaper. Who's going to buy a massive multi-core server to run software that is single-threaded, or not heavily threaded?

The HD 5870's GPGPU performance is more than twice that of the HD 4870, which means they are improving, but I think the architecture won't last long unless it is reworked. At least it is almost as competitive as the GTX 285 in GPGPU performance, which isn't much slower than the GTX 480 except in some scenarios where the GTX 480 shines incredibly. The reverse could be said of the ATi architecture, which outperforms nVidia cards at breaking encryption or running MilkyWay@Home; those are rare instances, but they show some potential.

No way. I wanna see how long these guys can keep it up.

:awe:
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
There are three fields:
1. GPGPU: AMD is way behind [nVidia]; it can catch up, but its prospects don't look good.
2. GPU: AMD is actually ahead, but the market has reached a dead end with very little room for growth right now. They have to resort to gimmicks such as multiple monitors or 3D to justify their higher-end solutions. We could end up with a case similar to what happened with DDR2, where the market got saturated with "good enough" hardware and nobody was buying new.
3. CPU: AMD is way behind [Intel]; the only reason they are still in the game is that Intel is trying not to overwhelm them, out of fear of the government. Intel keeps widening the gap while giving consumers less (more overclocking headroom left per chip, much lower multipliers, higher prices, etc.). This is great for Intel, terrible for people wanting new, better tech, and a dangerous place for AMD.
Heavy investment on AMD's part would only force Intel to release new SKUs that aren't as handicapped, or to slash prices a bit, with AMD still making the same tiny profits.
The cost to improve each generation of CPUs only continues to increase.
And finally, Intel also has to compete with its own last generation... at some point Intel reaches saturation and must significantly increase the quality of its offerings (i.e., more CPU for less money) to keep selling, at which point it might accidentally squash AMD.

They are competitive in the server market, as you said; heck, they are also competitive in the home market. But they are only competitive because Intel is letting them maintain their position as a "budget" company... The margins are small, and the long-term sustainability of such a position is limited.
 
Last edited: