Cuda's future


Schmide

Diamond Member
Mar 7, 2002
Now can you give me a source showing that Linux is not dying?

From the same article?

Robert Strohmeyer said:
Linux has clearly asserted itself as a major platform that's here to stay. And of course, passionate open-source proponents will rightly stand by their favorite desktop distributions despite the challenges ahead.
 

Seero

Golden Member
Nov 4, 2009
From the same article?

Why didn't you quote the whole thing?
End of the Road?

It has been a long trek since Linus Torvalds wrote the first Linux kernel as a college project in 1992, and the landscape has shifted considerably along the way. Despite grim prospects on the desktop, Linux has clearly asserted itself as a major platform that's here to stay. And of course, passionate open-source proponents will rightly stand by their favorite desktop distributions despite the challenges ahead.

But at this point in history, it's hard to deny the evidence: With stagnant market growth and inadequate content options compounded by industry inertia, Linux basically has no chance to rival Mac OS X, much less Windows.
 

Schmide

Diamond Member
Mar 7, 2002
Why didn't you quote the whole thing?

Does that equal dying? Even with the extended quote?

Reductio Ad Absurdum?

Linux never ever got more than a few percent of the home market since its inception, but that doesn't equal dying.
 

Seero

Golden Member
Nov 4, 2009
Does that equal dying? Even with the extended quote?

Reductio Ad Absurdum?

Linux never ever got more than a few percent of the home market since its inception, but that doesn't equal dying.
Okay then, so dying is different from barely alive. At least we can settle on the low representation of an OS that has been open source all these years. I am not trying to trash Linux; I am trying to say these things have been around for a long time, but not many are using them.

The good thing about proprietary software is liability. Should DirectX fail, Microsoft is to blame. Should CUDA fail, Nvidia is to blame. If they suck, their creator gets the blame, and people simply stop using them.

Now, as with all proprietary things, there is an obvious downside, and DirectX is no exception. In fact, it is one of the worst offenders. Not only does it not support those who aren't Microsoft's customers, it also drops customers who don't buy their new products. Here is a little history of DirectX. I am not the author; I just found it on Google.
History of DirectX By David Rosen
 

Schmide

Diamond Member
Mar 7, 2002
Okay then, so dying is different from barely alive. At least we can settle on the low representation of an OS that has been open source all these years. I am not trying to trash Linux; I am trying to say these things have been around for a long time, but not many are using them.

Dude, it's all freaking relative to the segment you're comparing it to. For the web/cloud it's very competitive, and if you look at HPC, Linux dominates with nearly 90% of the top servers. I don't think you're going to find DirectCompute making inroads into that market any time soon, and it's not exactly promising to see CUDA as the platform of choice either.

It's kind of like saying NFL football is dying/barely alive because 90% of the world sees footballs as round.
 

Cogman

Lifer
Sep 19, 2000
You forgot to mention Mac CPUs. I still remember the days when people used floppy disks, and a floppy containing data from an Apple machine was not readable by a Microsoft one, and vice versa.

Many games run on PC, but not Mac. Where have you been all these years?
That is operating system incompatibility, not hardware incompatibility. There is a big difference that I can't believe you don't really get.

BTW, I said OpenCL, not OpenGL, there is a BIG difference.

Let me put it this way. I have a program written in C++ for Windows, and I want to port it over to a Mac. Would the process be expensive? Maybe; it depends on how well the underlying windowing system was written and how many native Win32 functions are being called. Would it be impossible? No, not at all.

Now let's say I have software written in CUDA that I want to run on a PC with an ATI card. What would that involve? A complete program rewrite. Not a simple task, and this is for the SAME platform (a common expectation, you know, that software written for Windows be able to run on Windows...). That translates into tons of code rewriting and duplication if you want it to be installed on a machine with a non-nVidia card.

This wouldn't be such a problem if CUDA cards had the majority market share but, guess what, they don't. In fact, they are a niche market. That means for a software developer to use CUDA they are eliminating a huge portion of the market from their customer base.
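
To make the rewrite cost concrete, here is a minimal sketch of the kind of code at issue (a hypothetical vector add, not taken from any real application). Every marked construct is C for CUDA: no other vendor's compiler or driver understands any of it.

Code:
// Hypothetical vector add in C for CUDA -- a sketch, not production code.
#include <cuda_runtime.h>

// __global__ is a CUDA keyword; blockIdx/blockDim/threadIdx are CUDA built-ins.
__global__ void vecAdd(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1024;
    float *a, *b, *c;
    cudaMalloc((void**)&a, n * sizeof(float));    // CUDA runtime API
    cudaMalloc((void**)&b, n * sizeof(float));
    cudaMalloc((void**)&c, n * sizeof(float));
    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n); // CUDA-only launch syntax
    cudaDeviceSynchronize();
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

Porting that to an ATI card means replacing the kernel syntax, the launch syntax, and the runtime API all at once.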
 

Arkadrel

Diamond Member
Oct 19, 2010
That is operating system incompatibility, not hardware incompatibility. There is a big difference that I can't believe you don't really get.

BTW, I said OpenCL, not OpenGL, there is a BIG difference.

Let me put it this way. I have a program written in C++ for Windows, and I want to port it over to a Mac. Would the process be expensive? Maybe; it depends on how well the underlying windowing system was written and how many native Win32 functions are being called. Would it be impossible? No, not at all.

Now let's say I have software written in CUDA that I want to run on a PC with an ATI card. What would that involve? A complete program rewrite. Not a simple task, and this is for the SAME platform (a common expectation, you know, that software written for Windows be able to run on Windows...). That translates into tons of code rewriting and duplication if you want it to be installed on a machine with a non-nVidia card.

This wouldn't be such a problem if CUDA cards had the majority market share but, guess what, they don't. In fact, they are a niche market. That means for a software developer to use CUDA they are eliminating a huge portion of the market from their customer base.


^ This is my point. I could see developers moving more and more away from CUDA simply because it'll be easier to do DirectCompute, where everyone can use it (even nVidia guys).

If you're a developer: 1x work versus 2x work for no reason? Which do you do? So, in short, CUDA might be seen less and less in the future.
 

Seero

Golden Member
Nov 4, 2009
That is operating system incompatibility, not hardware incompatibility. There is a big difference that I can't believe you don't really get.
An OS is software, just like CUDA and DirectX.
BTW, I said OpenCL, not OpenGL, there is a BIG difference.
Both are open standards.


Let me put it this way. I have a program written in C++ for Windows, and I want to port it over to a Mac. Would the process be expensive? Maybe; it depends on how well the underlying windowing system was written and how many native Win32 functions are being called. Would it be impossible? No, not at all.
C++ is cross-platform, but many packages are platform-specific, which may or may not be for a good reason. Depending on what you are porting, it may be easier to start from scratch.

Now let's say I have software written in CUDA that I want to run on a PC with an ATI card. What would that involve? A complete program rewrite.
That is not true. Think of CUDA as a library. You can spend your time replacing this library with OpenCL libraries. This is no harder than porting something from PC to Mac.

The CUDA library doesn't work on ATI cards simply because ATI doesn't use CUDA cores, so the code doesn't just magically work. They could write a driver that supports CUDA, but they didn't, for obvious reasons. If you are upset about it, use OpenCL. The choice is yours. Keep in mind that OpenCL libraries are mostly low-level libraries, with which you will need to build everything from the ground up.

Not a simple task, and this is for the SAME platform
Not true again. The OS may be the same, the CPU may be the same, but the platform is not the same. Nvidia and ATI have different architectures. They both support OpenCL, but definitely not with the same backend code.

(a common expectation, you know, that software written for Windows be able to run on Windows...)
An ATI driver doesn't work for an Nvidia video card. Explain why.

That translates into tons of code rewriting and duplication if you want it to be installed on a machine with a non-nVidia card.
Yeah. AMD and Nvidia have different sets of drivers which more or less do the same thing. You have a problem with that?

This wouldn't be such a problem if CUDA cards had the majority market share but, guess what, they don't. In fact, they are a niche market. That means for a software developer to use CUDA they are eliminating a huge portion of the market from their customer base.
In case you didn't know: instead of supporting CUDA, AMD chose to create their own version of the APIs, which they called "Stream".

Read this if you think it is Nvidia's fault that AMD doesn't support CUDA.
Why Won't ATI Support CUDA and PhysX?
 

Scali

Banned
Dec 3, 2004
Both are open source software.

Incorrect. They are open standards.
Most implementations are closed-source (made by the IHVs, as part of their drivers).
Ironically enough, the open source implementation used by open source OSes such as Linux and the BSDs is called MesaGL. They cannot actually call it OpenGL, because the name is trademarked. They need a license in order to call it OpenGL, which costs money.
Pretty much the same story as with Unix: you can only call your OS Unix if it is certified, which costs money. Ironically the most popular Unix-like OSes are free open source OSes which cannot be called Unix.
 

Seero

Golden Member
Nov 4, 2009
Incorrect. They are open standards.
Most implementations are closed-source (made by the IHVs, as part of their drivers).
Ironically enough, the open source implementation used by open source OSes such as Linux and the BSDs is called MesaGL. They cannot actually call it OpenGL, because the name is trademarked. They need a license in order to call it OpenGL, which costs money.
Pretty much the same story as with Unix: you can only call your OS Unix if it is certified, which costs money. Ironically the most popular Unix-like OSes are free open source OSes which cannot be called Unix.
You are correct; they are standards, not software.
 

Seero

Golden Member
Nov 4, 2009
In case you didn't know ATI/AMD had Stream out before CUDA was around.
Really? If so, then ATI/AMD was the first of the two to start this proprietary stuff. Unfortunately for them, CUDA is the one that took off.
From pcper

One way or the other, ATI/AMD had its shot at this and missed. CUDA stayed, and now suddenly it is bad because Stream isn't as widely used as CUDA? And DirectX is far better?
 

Cogman

Lifer
Sep 19, 2000
An OS is software, just like CUDA and DirectX.

Both are open standards.
Umm... No. CUDA is NOT a standard, it is a language. DirectX is not like CUDA; it is an API, not a language.

C++ is cross-platform, but many packages are platform-specific, which may or may not be for a good reason. Depending on what you are porting, it may be easier to start from scratch.
C++ is not cross-platform. It is a language. Languages aren't inherently cross-platform or not; they are languages. If someone implements the standards of that language for a different platform, then they facilitate the ability to use that language on a different platform; they don't make it "cross-platform".

That is not true. Think of CUDA as a library. You can spend your time replacing this library with OpenCL libraries. This is no harder than porting something from PC to Mac.
False. CUDA is NOT a library, it IS a language (so are OpenCL and DirectCompute). You can't just say "Oh, plug in an OpenCL library instead and things will work!" because that isn't at all what is going on. The video card driver is actually receiving OpenCL/DirectCompute/CUDA code and compiling it in some fashion (well, CUDA could be precompiled, as it is monolithic in nature).

If you have an application developed in CUDA and you want to use it on an ATI card, tough cookies: you must rewrite the entire CUDA application in a language that the ATI card can handle. For some small acceleration problems, this isn't too big of a deal. But when you start doing things like making a video encoder, the expenses of using CUDA become astronomical if you want the program to be able to run on different video cards. In which case, how could you argue against using a language like DirectCompute or OpenCL instead? By using either exclusively, you would increase your platform compatibility tenfold.
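
To illustrate what "the driver receives the code and compiles it" looks like in practice, here is a minimal sketch against the standard OpenCL 1.x C API (the kernel string and names are hypothetical). The device code ships as plain source text, and whichever vendor's driver is installed builds it for its own hardware at run time:

Code:
// Minimal OpenCL host sketch: the installed driver compiles the kernel
// source at run time, whether it is an nVidia, ATI, or other device.
#include <CL/cl.h>
#include <stdio.h>

static const char* kSrc =
    "__kernel void vecAdd(__global const float* a, __global const float* b,"
    "                     __global float* c) {"
    "    int i = get_global_id(0);"
    "    c[i] = a[i] + b[i];"
    "}";

int main()
{
    cl_platform_id plat; cl_device_id dev; cl_int err;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);

    // The run-time compile step: the vendor's driver builds kSrc for
    // whatever hardware is actually in the machine.
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, NULL, &err);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vecAdd", &err);
    printf("kernel build: %s\n", err == CL_SUCCESS ? "ok" : "failed");

    if (k) clReleaseKernel(k);
    clReleaseProgram(prog); clReleaseContext(ctx);
    return 0;
}

The same source runs unchanged on an nVidia or an ATI machine; only the driver underneath differs.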

The CUDA library doesn't work on ATI cards simply because ATI doesn't use CUDA cores, so the code doesn't just magically work. They could write a driver that supports CUDA, but they didn't, for obvious reasons. If you are upset about it, use OpenCL. The choice is yours. Keep in mind that OpenCL libraries are mostly low-level libraries, with which you will need to build everything from the ground up.
Again, OpenCL (Open Computing Language) is not a library, it is a language. ATI really can't just write a driver that supports CUDA; that would violate several of nVidia's patents. CUDA is not an open language. DirectCompute and OpenCL are.

I think I was pretty clear when I said that CUDA could go DIAF... I don't like it and will never use it. But I don't totally blame developers that do use it; they have a different target audience than I do (mainly, very homogeneous platforms). I rarely write code to be cross-operating-system compatible. I don't really care if my code can run on Linux (though most does); what I do care about is that it is capable of being run on all Windows platforms, and not just 10%.

Not true again. The OS may be the same, the CPU may be the same, but the platform is not the same. Nvidia and ATI have different architectures. They both support OpenCL, but definitely not with the same backend code.
I really don't know how you associated this sentence with what I was saying. I was never trying to say that the OpenCL implementation for an nVidia card was the same as an ATI card...

An ATI driver doesn't work for an Nvidia video card. Explain why.
This is different from a driver aspect. My printer driver doesn't make my nVidia card run. While a driver is software, it is low-level software and certainly not the general case of software that is developed. Take a web browser, for example. If I wanted to do some fancy JavaScript math that utilized the GPU, somehow, and I wrote that portion in CUDA, all of a sudden I've excluded this feature from everyone on the Windows platform who doesn't have an nVidia CUDA card (90%). For something like a browser, this is a bad thing. Yet if I use a language like OpenCL, I guarantee that 90% of the architectures out there can use the GPU acceleration without a hitch.

This is the point I was trying to make. Very few developers for consumer applications have any concern about the underlying architecture they are targeting. Using CUDA makes you have that concern.

Yeah. AMD and Nvidia have different sets of drivers which more or less do the same thing. You have a problem with that?
yes, I do.

Think of it this way: what if nVidia made their own DirectX-like API only for nVidia cards? Would I advocate that any developer use that API? Hell no. It would completely limit their applications to only being able to run on nVidia cards.


In case you didn't know: instead of supporting CUDA, AMD chose to create their own version of the APIs, which they called "Stream".

Read this if you think it is Nvidia's fault that AMD doesn't support CUDA.
Why Won't ATI Support CUDA and PhysX?

From your own article
ATI would also be required to license PhysX in order to hardware accelerate it, of course, but Nvidia maintains that the licensing terms are extremely reasonable
Umm, yeah. All tech companies think their licensing terms are extremely reasonable... That doesn't mean they are affordable. The rest, "pennies per GPU shipped", is hogwash without nVidia actually releasing their licensing terms to the general public, something they AREN'T ever going to do.

The article writer shows his bias and stupidity. Licensing can be VERY costly; just because a marketing spokesman from a competing company says it is affordable doesn't make it so. As for the "It's free to download" crap, what happens when ATI does implement it? They are completely at the mercy of nVidia for publication of new CUDA standards (and the accurate representation of those standards). I don't know if you were born yesterday, but investing money in something that is directly controlled by your direct competitor is a BAD thing in a capitalist society.

Why didn't nVidia support the Stream standard? The same arguments made in that article could have been made against nVidia for not using or licensing AMD technology/standards.
 

Scali

Banned
Dec 3, 2004
Actually, Cuda is a *framework*. What nVidia calls Cuda is a combination of nVidia's GPGPU architecture and the software framework to drive it.
Indeed you CAN plug DirectCompute and OpenCL into Cuda, because that's what they DO.

The 'default' programming language for Cuda is called C for Cuda.

Think of Cuda as nVidia's .NET equivalent for GPGPU. .NET is also a framework upon which many languages can be implemented. You just need to write a compiler that is compatible with the .NET framework. The same goes for Cuda.
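
As a rough sketch of that framework view (assuming the CUDA driver API's module loader; the PTX string here is an empty placeholder), any compiler that emits PTX, whatever its source language, can hand its output to the same runtime:

Code:
// Sketch: the Cuda framework loads PTX no matter which front-end
// produced it. The placeholder PTX below is empty, so the load will
// report an error here; real compiler output would load fine.
#include <cuda.h>
#include <stdio.h>

static const char* somePtx = "";  // placeholder for any compiler's PTX output

int main()
{
    CUdevice dev; CUcontext ctx; CUmodule mod;
    cuInit(0);
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);
    CUresult r = cuModuleLoadData(&mod, somePtx);  // accepts PTX from any source
    printf("module load: %s\n", r == CUDA_SUCCESS ? "ok" : "failed (placeholder)");
    if (r == CUDA_SUCCESS) cuModuleUnload(mod);
    cuCtxDestroy(ctx);
    return 0;
}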
 

Seero

Golden Member
Nov 4, 2009
Umm... No. CUDA is NOT a standard, it is a language. DirectX is not like CUDA; it is an API, not a language.

Again, OpenCL (Open Computing Language) is not a library, it is a language
Sorry, but none of the above are languages. They are sets of extensions and/or libraries, depending on the language you use them with. New OpenGL extensions are added when new technology arrives (like tessellation, for example) so people can retrofit them into their existing programs if needed. The problem with DirectX is that Dx10 and later extensions are not backward compatible with Dx9, meaning that a program written in C with Dx10 extensions won't work on Dx9 hardware/platforms. CUDA is just another extension, just like OpenGL and DirectX.

So it looks like this:
Nvidia: OpenGL -> assembly code -> hardware
AMD: OpenGL -> assembly code -> hardware

Since the hardware architectures are different, the assembly code from the two vendors is different, but it will do the same thing in the end. This is why people like to compare performance across vendors, as there are differences in hardware architecture, and vendors can sometimes optimize performance by tuning that assembly code.

C++ is not cross-platform. It is a language.
Think of a language as simply a specific text format for a text file. It does nothing by itself. A compiler is what is important. The compiler turns whatever is inside the text file into assembly code (puts it in the language that the hardware understands). Since Mac and PC are different platforms (they don't speak the same language), different compilers are needed. There are C++ compilers for PC, and there are C++ compilers for Mac. Both compilers use the same set of source code but will create different executables, and the resulting executables will only run on the specified platform.
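
A small sketch of that idea (a hypothetical example): one and the same source file, fed to a Windows compiler and to a Mac compiler, produces two platform-specific executables, and only the platform-specific corners need guards:

Code:
// One C++ source file; each platform's compiler turns it into that
// platform's own machine code and executable format.
#include <cstdio>

const char* platformName()
{
#if defined(_WIN32)
    return "Windows";   // seen only by a Windows compiler
#elif defined(__APPLE__)
    return "Mac OS";    // seen only by a Mac compiler
#else
    return "other";
#endif
}

int main()
{
    std::printf("Hello from %s\n", platformName());
    return 0;
}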

Languages aren't inherently cross-platform or not; they are languages. If someone implements the standards of that language for a different platform, then they facilitate the ability to use that language on a different platform; they don't make it "cross-platform".
Maybe you don't know what cross-platform means.



From your own article
Umm, yeah. All tech companies think their licensing terms are extremely reasonable... That doesn't mean they are affordable. The rest, "pennies per GPU shipped", is hogwash without nVidia actually releasing their licensing terms to the general public, something they AREN'T ever going to do.

The article writer shows his bias and stupidity. Licensing can be VERY costly; just because a marketing spokesman from a competing company says it is affordable doesn't make it so. As for the "It's free to download" crap, what happens when ATI does implement it? They are completely at the mercy of nVidia for publication of new CUDA standards (and the accurate representation of those standards). I don't know if you were born yesterday, but investing money in something that is directly controlled by your direct competitor is a BAD thing in a capitalist society.

Why didn't nVidia support the Stream standard? The same arguments made in that article could have been made against nVidia for not using or licensing AMD technology/standards.
Whether the writer is biased or not is subjective. The fact is, Nvidia didn't say "You can't use our CUDA extension"; they said "You can use our CUDA extension with a license fee." You can say Nvidia priced the license sky high, but you really can't prove it. I can say AMD refused to support it, which I can prove easily.

Yes, Nvidia could acquire a license from AMD for Stream and use it too, and they likewise decided not to, instead pouring tons of money into CUDA programming. Again, these are business decisions. If you want to say CUDA is Nvidia's proprietary crap, then Stream is AMD's proprietary crap that is way behind Nvidia's proprietary crap. However you like to say it, nothing is free.

Other than OpenGL, which all video card vendors support, there is also OpenCL, which all CPU/GPU vendors support. However, performance and complexity are the keys. Programmers know OpenGL+OpenCL runs on all platforms, but they only offer low-level APIs, meaning that I will need to build everything from scratch. CUDA and DirectX have much better APIs. They are fast and reliable, but platform specific (don't ask me why I exclude Stream from the list).
 

Arkadrel

Diamond Member
Oct 19, 2010
Yes, Nvidia could acquire a license from AMD for Stream and use it too, and they likewise decided not to, instead pouring tons of money into CUDA programming. Again, these are business decisions. If you want to say CUDA is Nvidia's proprietary crap, then Stream is AMD's proprietary crap that is way behind Nvidia's proprietary crap. However you like to say it, nothing is free.

DirectCompute is free though, right? I mean, it doesn't cost either AMD or Nvidia anything... when it's Microsoft doing the investing in the software, it's just part of DirectX and comes on any Windows PC.
 

Seero

Golden Member
Nov 4, 2009
DirectCompute is free though, right? I mean, it doesn't cost either AMD or Nvidia anything... when it's Microsoft doing the investing in the software, it's just part of DirectX and comes on any Windows PC.
Both vendors support DirectCompute, so there is no issue there. The issue is that DirectCompute is a subset of DirectX 11, which will only run on Windows 7 and Vista; that means people who use XP, Android, iOS, Linux, Ubuntu, or Mac can't benefit from it. I bet not many companies have their servers running under Vista or Windows 7. That means DirectCompute won't work for them. CUDA, on the other hand, will, because all they need to do is plug hardware into the server (though that may not be an option either).

For a home user, which is easier to do: change the OS or change the video card?

As you can see, there are more problems than solutions. I honestly don't see DirectCompute as a better solution compared to CUDA. I am fine with comparing different solutions, but starting this off as "CUDA is dying" makes me laugh.
 

Cogman

Lifer
Sep 19, 2000
Sorry, but none of the above are languages. They are sets of extensions and/or libraries, depending on the language you use them with. New OpenGL extensions are added when new technology arrives (like tessellation, for example) so people can retrofit them into their existing programs if needed. The problem with DirectX is that Dx10 and later extensions are not backward compatible with Dx9, meaning that a program written in C with Dx10 extensions won't work on Dx9 hardware/platforms. CUDA is just another extension, just like OpenGL and DirectX.

OpenCL is a language. C for CUDA is a language (I was regrettably mixing up terms). DirectCompute is a language. OpenGL and DirectX themselves are not programming languages, though they do have aspects that are programming languages (GLSL, for example). There is a big difference between what OpenCL is and what OpenGL is. OpenGL is, for the most part, just an API. OpenCL is in fact its own language. You can't compile OpenCL code with C++.
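
To make the distinction concrete, here is a hypothetical side-by-side of the same trivial kernel. The first is C for CUDA, which nvcc compiles directly; the second is OpenCL C, which a C++ compiler only ever sees as an opaque string handed to the OpenCL runtime:

Code:
// C for CUDA: compiled by nvcc. __global__ and threadIdx are language
// extensions that no plain C++ compiler accepts.
__global__ void scale(float* data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

// OpenCL C: its own language, with its own qualifiers (__kernel, __global)
// and built-ins (get_global_id). To a C++ compiler this is just a string;
// only clBuildProgram ever compiles it.
static const char* openclScale =
    "__kernel void scale(__global float* data, float factor, int n) {"
    "    int i = get_global_id(0);"
    "    if (i < n) data[i] *= factor;"
    "}";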

So it looks like this:
Nvidia: OpenGL -> assembly code -> hardware
AMD: OpenGL -> assembly code -> hardware
... Not quite; OpenGL is not a programming language. The code paths for code with OpenGL functions in it and for OpenCL code are fairly different as well.

Since the hardware architectures are different, the assembly code from the two vendors is different, but it will do the same thing in the end. This is why people like to compare performance across vendors, as there are differences in hardware architecture, and vendors can sometimes optimize performance by tuning that assembly code.
ehhh... Not really. The architectures are fundamentally different. It isn't just a "They just tweak how the assembly code is written" sort of thing. The language that AMD and Nvidia GPUs speak is fundamentally different.

Think of a language as simply a specific text format for a text file. It does nothing by itself. A compiler is what is important. The compiler turns whatever is inside the text file into assembly code (puts it in the language that the hardware understands). Since Mac and PC are different platforms (they don't speak the same language), different compilers are needed. There are C++ compilers for PC, and there are C++ compilers for Mac. Both compilers use the same set of source code but will create different executables, and the resulting executables will only run on the specified platform.
I know exactly what a programming language is and how to use it. Go visit the programming forums here if you don't believe me. And you are wrong. Macs and PCs use the EXACT same assembly code but have different operating systems. (That wasn't always true, but it is now; Macs use the x86 architecture provided by Intel.) In other words, they DO speak the same language. The difference is in the way the operating system is built. You could possibly link code created for a Windows machine to code created for a Mac machine. The problem comes in the difference between PE executables and whatever it is Macs use.

But that is beside the point. My point was, and still is: you can't take code that was written in C for CUDA and run it on an AMD video card. You can't take code that was compiled for CUDA and run it on an AMD video card. You can take code that was written in OpenCL and run it on both an AMD video card and an nVidia card. That was my point.

Maybe you don't know what cross-platform means.
I know the meaning quite well.

Whether the writer is biased or not is subjective. The fact is, Nvidia didn't say "You can't use our CUDA extension"; they said "You can use our CUDA extension with a license fee." You can say Nvidia priced the license sky high, but you really can't prove it. I can say AMD refused to support it, which I can prove easily.
Can you prove that nVidia wouldn't sabotage AMD if they had accepted their license agreement? AMD refusing to support CUDA was a smart thing to do.

Yes, Nvidia could acquire a license from AMD for Stream and use it too, and they likewise decided not to, instead pouring tons of money into CUDA programming. Again, these are business decisions. If you want to say CUDA is Nvidia's proprietary crap, then Stream is AMD's proprietary crap that is way behind Nvidia's proprietary crap. However you like to say it, nothing is free.
So, you are saying that AMD not using CUDA was evil, but nVidia not using Stream was good? Please elaborate on how nVidia's decision was wise while AMD's decision, which is exactly the same, was unwise.

Other than OpenGL, which all video card vendors support, there is also OpenCL, which all CPU/GPU vendors support. However, performance and complexity are the keys. Programmers know OpenGL+OpenCL runs on all platforms, but they only offer low-level APIs, meaning that I will need to build everything from scratch.
Actually, OpenGL offers very complex capabilities and features that rival those of DirectX. I can't speak specifically to how OpenCL compares to C for CUDA, but from what I've seen of C for CUDA, I'm going to have to say that it is going to be quite similar. CUDA doesn't really offer a whole lot of advanced features; it is really pretty basic in the access that it gives to the GPU hardware.

CUDA and DirectX have much better APIs. They are fast and reliable, but platform specific (don't ask me why I exclude Stream from the list).
This is entirely debatable. Many would argue that the DirectX API is prevalent not because it is better than OpenGL, but because Microsoft offered a lot of breaks and discounts to companies that used DirectX for their applications.

From what I've seen of DirectX and OpenGL, they are very comparable in the features they offer (OpenGL, through extensions, usually beats DirectX to the punch when it comes to new features). GLSL and HLSL (of which DirectCompute is really just a branch) are another story.
 

Seero

Golden Member
Nov 4, 2009
OpenCL is a language.
Please read the following link from AMD
Introductory Tutorial to OpenCL
C for CUDA is a language (I was regrettably mixing up terms).
http://en.wikipedia.org/wiki/CUDA
'C for CUDA' (C with NVIDIA extensions and certain restrictions)
DirectCompute is a language.
Direct Compute Example Code Listing

... You can't compile OpenCL code with C++.
Look at the 3 examples above.

... I know the meaning quite well.
okay...


Can you prove that nVidia wouldn't sabotage AMD if they had accepted their license agreement? AMD refusing to support CUDA was a smart thing to do.
I can't prove that. I said AMD refused based on business decisions. You said it was due to a high license fee.


So, you are saying that AMD not using CUDA was evil, but nVidia not using Stream was good? Please elaborate on how nVidia's decision was wise while AMD's decision, which is exactly the same, was unwise.
I didn't say that. I said AMD had Stream and Nvidia had CUDA. I said just because Stream failed doesn't mean CUDA is bad for being proprietary. A fair competition, won by technical knockout.

Next round, CUDA vs DirectCompute vs OpenCL.


Actually, OpenGL offers very complex capabilities and features that rival those of DirectX. I can't speak specifically to how OpenCL compares to C for CUDA, but from what I've seen of C for CUDA, I'm going to have to say that it is going to be quite similar. CUDA doesn't really offer a whole lot of advanced features; it is really pretty basic in the access that it gives to the GPU hardware.
CUDA offers more compared to OpenCL simply because it is engineered around Nvidia's GPUs, so it does have an advantage in terms of utilization. It is, however, very difficult to use. There is a CUDA debugger which helps developers. This debugger alone trumps OpenCL, DirectCompute, and Stream by miles IMO.

Now, it should be clear that AMD's hardware probably won't play at its best with CUDA should they choose to implement it, but that is very different from saying Nvidia is going to sabotage AMD. It should be clear that CUDA+Nvidia hardware beats Stream+AMD, but that doesn't mean using a common extension will be the best solution to the problem. For example:
CUDA+NVidia: 100% efficiency
Stream+AMD : 100% efficiency
DirectCompute: 50% efficiency on both vendors, 70% on Intel.
OpenCL: 45% efficiency on all vendors.

How people came to the conclusion that CUDA is not open and is bad is beyond me.
This is entirely debatable. Many would argue that the DirectX API is prevalent not because it is better than OpenGL, but because Microsoft offered a lot of breaks and discounts to companies that used DirectX for their applications.

From what I've seen of DirectX and OpenGL, they are very comparable in the features they offer (OpenGL, through extensions, usually beats DirectX to the punch when it comes to new features). GLSL and HLSL (of which DirectCompute is really just a branch) are another story.
I think you should not mix things up. CUDA is really for GPGPU computing, not for generating graphics. OpenGL and the rest of DirectX 11 have nothing to do with CUDA.
 

Cogman

Lifer
Sep 19, 2000
Please read the following link from AMD
Introductory Tutorial to OpenCL
From the link
OpenCL™ defines a C-like language for programming compute device programs. These programs are passed to the OpenCL™ runtime via API calls expecting values of type char *. Often, it is convenient to keep these programs in separate source files. For this and subsequent tutorials, I assume the device programs are stored in files with names of the form name_kernels.cl, where name varies, depending on the context, but the suffix _kernels.cl does not. The corresponding device programs are loaded at runtime and passed to the OpenCL™ API. There are many alternative approaches to this; this one is chosen for readability.


Again from the link..

CUDA (with compute capability 1.x) uses a recursion-free, function-pointer-free subset of the C language, plus some simple extensions. However, a single process must run spread across multiple disjoint memory spaces, unlike other C language runtime environments. Fermi GPUs now have (nearly) full support of C++.

This is an example which, sadly, doesn't show the actual DirectCompute code. It shows how to load the HLSL.

Look at the 3 examples above.
Read your own links.


I can't prove that. I said AMD refused based on business decisions. You said it was due to a high license fee.
I said the article writer was biased. You used that link in some strange way to try to make it look like AMD was stupid for not implementing CUDA.

I didn't say that. I said AMD had Stream and Nvidia had CUDA. I said just because Stream failed doesn't mean CUDA is bad for being proprietary. A fair competition, won by technical knockout.
Sorry, I was reading too much into your post.


CUDA offers more compared to OpenCL simply because it is engineered around Nvidia's GPUs, so it does have an advantage in terms of utilization. It is, however, very difficult to use. There is a CUDA debugger which helps developers. This debugger alone trumps OpenCL, DirectCompute, and Stream by miles IMO.

Now, it should be clear that AMD's hardware probably won't play at its best with CUDA should they choose to implement it, but that is very different from saying Nvidia is going to sabotage AMD. It should be clear that CUDA+Nvidia hardware beats Stream+AMD, but that doesn't mean using a common extension will be the best solution to the problem. For example:
CUDA+NVidia: 100% efficiency
Stream+AMD : 100% efficiency
DirectCompute: 50% efficiency on both vendors, 70% on Intel.
OpenCL: 45% efficiency on all vendors.
I don't know how you think you are deriving these numbers. As for the sabotage, while it is possible that nVidia wouldn't do it, it isn't guaranteed. AMD would be foolish to invest large amounts of money into a direct competitor's technology.

How people came to the conclusion that CUDA is not open and is bad is beyond me.
I never said anything about it being open. I said it was limited to nVidia-only platforms. How is that beyond you? It is a true statement.

I think you should not mix things up. CUDA is really for GPGPU computing, not for generating graphics. OpenGL and the rest of DirectX 11 have nothing to do with CUDA.
I never said CUDA was for generating graphics; you were the one that kept trying to pull in DirectX and OpenGL. I don't know if you somehow have short-term memory loss, but it was you that kept trying to compare OpenCL, CUDA, and OpenGL.
 

Arkadrel

Diamond Member
Oct 19, 2010
1)CUDA+NVidia: 100% efficiency
2)Stream+AMD : 100% efficiency
3)DirectCompute: 50% efficiency on both vendors, 70% on Intel.
4)OpenCL: 45% efficiency on all vendors.

I like option 3 best ^-^ and think the goal should be to make it as effective on both vendors' hardware as possible. Even if it's still beaten by CUDA/Stream in efficiency, I think developers will go for it simply because of how many users use Windows and want a solution that'll work regardless of whose card you're using.

I'm probably biased because I use an ATI card, and I'm not sure what Stream can offer me as a casual user. I see CUDA does have a few things for the casual user, and I would love to see developers go with Microsoft's DirectCompute and make that work so ATI users could have that too.

I'm rooting for Microsoft to set the standard and win that market in the software GPGPU department. Since it's a long-term investment... from Microsoft's side... it seems likely that they'll gain more ground over time (there are a lot of Windows users).
 

SickBeast

Lifer
Jul 21, 2000
Is Intel's new Sandy Bridge decoder really fast enough to threaten CUDA or OpenCL?

EDIT: Nvm, I found this--->http://www.techradar.com/news/compu...ntel-chips-encode-hd-video-in-seconds--716246 (Apparently specific modifications to Windows 7 and third party software need to be made to gain the speed-up benefits)
If that article is accurate (which I suspect it is), then CUDA is in serious trouble. SB is going to wind up in a large majority of computers sold. Whatever standard it's using to encode video will wind up becoming the GPGPU standard IMO.

If SB can indeed encode video at such a blazing speed, that is a killer feature, to the point that I would probably purchase SB over Fusion, regardless of whether or not Fusion has better gaming performance.
 

Arkadrel

Diamond Member
Oct 19, 2010
@Sickbeast,

Fusion is only for the low end in the near future.
But yeah, Intel makes great CPUs... with hardware encoding now that's faster than CUDA + a graphics card. I'm not sure how much of a threat it is, but it'll definitely steal a bit of CUDA's thunder if your primary use of CUDA is to help encode movies etc.