In fact, the open source idea has been around for years (Unix or Linux), and it is dying. Point?
What what what what? Source? Linux dying? Do you work for SCO?
In fact, the open source idea has been around for years (Unix or Linux), and it is dying. Point?
What what what what? Source? Linux dying? Do you work for SCO?
Desktop Linux: The Dream Is Dead, from PCWorld.
Now can you give me a source showing that Linux is not dying?
Robert Strohmeyer said: Linux has clearly asserted itself as a major platform that's here to stay. And of course, passionate open-source proponents will rightly stand by their favorite desktop distributions despite the challenges ahead.
From the same article?
End of the Road?
It has been a long trek since Linus Torvalds wrote the first Linux kernel as a college project in 1992, and the landscape has shifted considerably along the way. Despite grim prospects on the desktop, Linux has clearly asserted itself as a major platform that's here to stay. And of course, passionate open-source proponents will rightly stand by their favorite desktop distributions despite the challenges ahead.
But at this point in history, it's hard to deny the evidence: With stagnant market growth and inadequate content options compounded by industry inertia, Linux basically has no chance to rival Mac OS X, much less Windows.
Why didn't you quote the whole thing?
Does that equal dying? Even with the extended quote?
Okay then, so dying is different from barely alive. At least we can settle on the low adoption of an OS that has been open source all these years. I am not trying to trash Linux; I am trying to say these things have been around for a long time, but not many people are using them.
Reductio Ad Absurdum?
Linux never ever got more than a few percent of the home market since its inception, but that doesn't equal dying.
Okay then, so dying is different from barely alive. At least we can settle on the low adoption of an OS that has been open source all these years. I am not trying to trash Linux; I am trying to say these things have been around for a long time, but not many people are using them.
You forgot to mention the Mac CPU. I still remember the days when people used floppy disks, and a floppy containing data from an Apple machine was not readable by a Microsoft machine, and vice versa.
That is operating system incompatibility, not hardware incompatibility. There is a big difference that I can't believe you don't really get.
Many games run on PC, but not Mac. Where have you been all these years?
That is operating system incompatibility, not hardware incompatibility. There is a big difference that I can't believe you don't really get.
BTW, I said OpenCL, not OpenGL, there is a BIG difference.
Let me put it this way. I have a program written in C++ for Windows, and I want to port it over to a Mac. Would the process be expensive? Maybe; it depends on how well the underlying windowing system was written and how many native Win32 functions are being called. Would it be impossible? No, not at all.
Now let's say I have software written in CUDA that I want to run on a PC with an ATI card. What would that involve? Complete program rewriting. Not a simple task, and this is for the SAME platform (a common expectation, you know, that software written for Windows should be able to run on Windows...). That translates into tons of code rewriting and duplication if you want it to be installed on a machine with a non-nVidia card.
This wouldn't be such a problem if CUDA cards had the majority market share but, guess what, they don't. In fact, they are a niche market. That means for a software developer to use CUDA they are eliminating a huge portion of the market from their customer base.
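To make the scale of that rewrite concrete, here is a rough sketch (a made-up vecAdd kernel, not code from any real product) of the same operation in both languages. The device code itself maps almost line for line; it is the host-side plumbing and the whole build model around it that have to change:
Code:
// CUDA version: compiled ahead of time by nvcc, runs only on nVidia GPUs.
__global__ void vecAdd(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

// OpenCL version: shipped as plain source text and compiled at run time
// by whichever vendor's driver is installed (nVidia, ATI, or a CPU).
const char* vecAddCL =
    "__kernel void vecAdd(__global const float* a,\n"
    "                     __global const float* b,\n"
    "                     __global float* c, int n)\n"
    "{\n"
    "    int i = get_global_id(0);\n"
    "    if (i < n) c[i] = a[i] + b[i];\n"
    "}\n";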
That is operating system incompatibility, not hardware incompatibility. There is a big difference that I can't believe you don't really get.
An OS is software, just like CUDA and DirectX.
BTW, I said OpenCL, not OpenGL, there is a BIG difference.
Both are open standards.
Let me put it this way. I have a program written in C++ for Windows, and I want to port it over to a Mac. Would the process be expensive? Maybe; it depends on how well the underlying windowing system was written and how many native Win32 functions are being called. Would it be impossible? No, not at all.
C++ is cross-platform, but many packages are platform specific, which may or may not have a reason. Depending on what you are porting, it may be easier to start from scratch.
Now let's say I have software written in CUDA that I want to run on a PC with an ATI card. What would that involve? Complete program rewriting.
That is not true. Think of CUDA as a library. You can spend your time replacing this library with OpenCL libraries. This is no harder than porting something from PC to Mac.
Not a simple task, and this is for the SAME platform
Not true again. The OS may be the same, and the CPU may be the same, but the platform is not the same. Nvidia and ATI have different architectures. They both support OpenCL, but definitely not with the same backend code.
(a common expectation, you know, that software written for Windows should be able to run on Windows...)
An ATI driver doesn't work for an Nvidia video card. Explain why.
That translates into tons of code rewriting and duplication if you want it to be installed on a machine with a non-nVidia card.
Yeah. AMD and Nvidia have different sets of drivers which more or less do the same thing. You have a problem with that?
This wouldn't be such a problem if CUDA cards had the majority market share but, guess what, they don't. In fact, they are a niche market. That means for a software developer to use CUDA they are eliminating a huge portion of the market from their customer base.
In case you didn't know, instead of supporting CUDA, AMD chose to create their own version of the APIs; they called it "Stream".
Both are open source software.
Incorrect. They are open standards.
You are correct; they are standards, not software.
Most implementations are closed-source (made by the IHVs, as part of their drivers).
Ironically enough, the open source implementation used by open source OSes such as Linux and the BSDs is called MesaGL. They cannot actually call it OpenGL because the name is trademarked; they would need a license in order to call it OpenGL, which costs money.
Pretty much the same story as with Unix: you can only call your OS Unix if it is certified, which costs money. Ironically the most popular Unix-like OSes are free open source OSes which cannot be called Unix.
In case you didn't know, instead of supporting CUDA, AMD chose to create their own version of the APIs; they called it "Stream".
In case you didn't know, ATI/AMD had Stream out before CUDA was around.
Really? If so, then ATI/AMD was the first of the two to start this proprietary stuff. Unfortunately, CUDA is the one that rolled out first.
An OS is software, just like CUDA and DirectX.
Umm... no. CUDA is NOT a standard; it is a language. DirectX is not like CUDA; it is an API, not a language.
Both are open standards.
C++ is cross-platform, but many packages are platform specific, which may or may not have a reason. Depending on what you are porting, it may be easier to start from scratch.
C++ is not cross-platform. It is a language. Languages aren't inherently cross-platform or not; they are languages. If someone implements the standards of that language for a different platform, then they facilitate the ability to use that language on a different platform; they don't make it "cross-platform".
That is not true. Think of CUDA as a library. You can spend your time replacing this library with OpenCL libraries. This is no harder than porting something from PC to Mac.
False. CUDA is NOT a library; it IS a language (so are OpenCL and DirectCompute). You can't just say "Oh, plug in an OpenCL library instead and things will work!" because that isn't at all what is going on. The video card driver is actually receiving OpenCL/DirectCompute/CUDA code and compiling it in some fashion (well, CUDA could be precompiled, as it is monolithic in nature).
A CUDA library doesn't work on ATI cards simply because ATI doesn't use CUDA cores, so the code doesn't just magically work. They could write a driver that supports CUDA, but they didn't, for obvious reasons. If you are upset about it, use OpenCL. The choice is yours. Keep in mind that OpenCL libraries are mostly low-level libraries, so you will need to build everything from the ground up.
Again, OpenCL (Open Computing Language) is not a library; it is a language. ATI really can't just write a driver that supports CUDA; that would violate several of nVidia's patents. CUDA is not an open language. DirectCompute and OpenCL are.
Not true again. The OS may be the same, and the CPU may be the same, but the platform is not the same. Nvidia and ATI have different architectures. They both support OpenCL, but definitely not with the same backend code.
I really don't know how you associated this sentence with what I was saying. I was never trying to say that the OpenCL implementation for an nVidia card was the same as for an ATI card...
An ATI driver doesn't work for an Nvidia video card. Explain why.
This is different from a driver aspect. My printer driver doesn't make my nVidia card run. While a driver is software, it is low-level software and certainly not the general case of software that is developed. Take a web browser, for example. If I wanted to do some fancy JavaScript math that utilized the GPU, somehow, and I wrote that portion in CUDA, all of a sudden I've excluded this feature from everyone on the Windows platform that doesn't have an nVidia CUDA card (90%). For something like a browser, this is a bad thing. Yet if I use a language like OpenCL, I guarantee that 90% of the architectures out there can use the GPU acceleration without a hitch.
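As a minimal sketch of that vendor-neutral approach (assuming the OpenCL headers and an installed driver; the program itself is made up), an application can probe for any OpenCL-capable GPU at run time, whatever the vendor, and fall back to a plain CPU path if none is found:
Code:
#include <CL/cl.h>
#include <stdio.h>

// Returns 1 if any OpenCL-capable GPU is present, regardless of vendor.
static int opencl_gpu_present(void)
{
    cl_platform_id platform;
    cl_device_id device;
    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS)
        return 0;  // no OpenCL driver installed at all
    return clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1,
                          &device, NULL) == CL_SUCCESS;
}

int main(void)
{
    if (opencl_gpu_present())
        printf("GPU path: works on nVidia, ATI, or anything with a driver\n");
    else
        printf("CPU fallback path\n");
    return 0;
}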
Yeah. AMD and Nvidia have different sets of drivers which more or less do the same thing. You have a problem with that?
Yes, I do.
In case you didn't know, instead of supporting CUDA, AMD chose to create their own version of the APIs; they called it "Stream".
Read this if you think it is Nvidia's fault that AMD doesn't support CUDA.
Why Won't ATI Support CUDA and PhysX?
ATI would also be required to license PhysX in order to hardware accelerate it, of course, but Nvidia maintains that the licensing terms are extremely reasonable.
Umm, yeah. All tech companies think their licensing terms are extremely reasonable... That doesn't mean they are affordable. The rest, "pennies per GPU shipped", is hogwash without nVidia actually releasing their terms of licensing to the general public, something they AREN'T ever going to do.
Umm... no. CUDA is NOT a standard; it is a language. DirectX is not like CUDA; it is an API, not a language.
Again, OpenCL (Open Computing Language) is not a library; it is a language.
Sorry, but none of the above are languages. They are a set of extensions and/or libraries, depending on the language you use them with. New OpenGL extensions will be added when new technology arrives (tessellation, for example) so people can retrofit it into their existing programs if they need to. The problem with DirectX is that Dx10 and later extensions are not backward compatible with Dx9, meaning that your program written in C with Dx10 extensions won't work on Dx9 hardware/platforms. CUDA is just another extension, just like OpenGL and DirectX.
C++ is not cross-platform. It is a language.
Think of a language as simply a specific text format for a text file. It does nothing by itself. A compiler is what is important. The compiler turns whatever is inside the text file into assembly code (it puts it in the language the hardware understands). Since Mac and PC are different platforms (they don't speak the same language), different compilers are needed. There are C++ compilers for PC, and there are C++ compilers for Mac. Both compilers use the same set of source code but will create different executables, and the resulting executables will only run on the specified platform.
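A minimal sketch of that point (the file name and messages are made up): one identical source file, built once per platform, yields two different native executables:
Code:
// portable.cpp -- the exact same source text on both machines.
#include <cstdio>

int main()
{
#ifdef _WIN32
    std::printf("built by a Windows compiler -> PE (.exe) binary\n");
#elif defined(__APPLE__)
    std::printf("built by a Mac compiler -> Mach-O binary\n");
#else
    std::printf("built by some other platform's compiler\n");
#endif
    return 0;
}
// cl portable.cpp       (Windows) -> portable.exe, runs only on Windows
// clang++ portable.cpp  (Mac)     -> a.out, runs only on Mac OS X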
Languages aren't inherently cross-platform or not; they are languages. If someone implements the standards of that language for a different platform, then they facilitate the ability to use that language on a different platform; they don't make it "cross-platform".
Maybe you don't know the meaning of cross-platform.
From your own article:
Whether the writer is biased or not is subjective. The fact is, Nvidia didn't say "You can't use our CUDA extension"; they said "You can use our CUDA extension with a license fee." You can say Nvidia priced the license sky-high, but you really can't prove it. I can say AMD refused to support it, which I can prove easily.
Umm, yeah. All tech companies think their licensing terms are extremely reasonable... That doesn't mean they are affordable. The rest, "pennies per GPU shipped", is hogwash without nVidia actually releasing their terms of licensing to the general public, something they AREN'T ever going to do.
The article writer shows his bias and stupidity. Licensing can be VERY costly; just because a marketing spokesman from a competing company says it is affordable doesn't make it so. As for the "it's free to download" crap, what happens when ATI does implement it? They are completely at the mercy of nVidia for publications of new CUDA standards (and the accurate representation of those standards). I don't know if you were born yesterday, but investing money in something that is directly controlled by your direct competitor is a BAD thing in a capitalist society.
Why didn't nVidia support the Stream standard? The same arguments made in that article could have been made against nVidia for not using or licensing AMD technology/standards.
Yes, Nvidia could acquire a license from AMD for Stream and use it too, and they also decided not to, instead unloading tons of money on CUDA programming. Again, these are business decisions. If you want to say CUDA is Nvidia's proprietary crap, then Stream is AMD's proprietary crap that is way behind Nvidia's proprietary crap. However you like to say it, nothing is free.
DirectCompute is right? I mean, it doesn't cost either AMD or Nvidia anything... when it's Microsoft doing the investing into the software, it's just part of DirectX and comes on any Windows PC.
Both vendors support DirectCompute, so there is no issue here. The issue is that DirectCompute is a subset of DirectX 11, which will only run on Windows 7 and Vista; that means people who use XP, Android, iOS, Linux, Ubuntu, and Mac can't benefit from it. I bet not many companies have their servers running under Vista or Windows 7. That means DirectCompute isn't going to work for them. CUDA, on the other hand, will, because all they need to do is plug hardware into the server (that, however, may not be an option either).
Sorry, but none of the above are languages. They are a set of extensions and/or libraries, depending on the language you use them with. New OpenGL extensions will be added when new technology arrives (tessellation, for example) so people can retrofit it into their existing programs if they need to. The problem with DirectX is that Dx10 and later extensions are not backward compatible with Dx9, meaning that your program written in C with Dx10 extensions won't work on Dx9 hardware/platforms. CUDA is just another extension, just like OpenGL and DirectX.
So it looks like this:
Nvidia: OpenGL -> assembly code -> hardware
AMD: OpenGL -> assembly code -> hardware
... Not quite. OpenGL is not a programming language, and the code path for code with OpenGL functions in it versus OpenCL code is fairly different as well.
Since the hardware architecture is different, the assembly code from the two vendors is different but does the same thing in the end. This is why people like to compare performance across vendors, as there are differences in hardware architecture and vendors can sometimes optimize performance by tuning that assembly code.
Ehhh... not really. The architectures are fundamentally different. It isn't just a "they just tweak how the assembly code is written" sort of thing. The language that AMD and Nvidia GPUs speak is fundamentally different.
Think of a language as simply a specific text format for a text file. It does nothing by itself. A compiler is what is important. The compiler turns whatever is inside the text file into assembly code (it puts it in the language the hardware understands). Since Mac and PC are different platforms (they don't speak the same language), different compilers are needed. There are C++ compilers for PC, and there are C++ compilers for Mac. Both compilers use the same set of source code but will create different executables, and the resulting executables will only run on the specified platform.
I know exactly what a programming language is and how to use it. Go visit the programming forums here if you don't believe me. And you are wrong: Macs and PCs use the EXACT same assembly code but have different operating systems (that wasn't always true, but it is now; Macs use an x86 architecture provided by Intel). In other words, they DO speak the same language. The difference is in the way the operating system is built. You could possibly link code created for a Windows machine with code created for a Mac machine. The problem comes in the difference between PE executables and whatever it is Macs use.
Maybe you don't know the meaning of cross-platform.
I know the meaning quite well.
Whether the writer is biased or not is subjective. The fact is, Nvidia didn't say "You can't use our CUDA extension"; they said "You can use our CUDA extension with a license fee." You can say Nvidia priced the license sky-high, but you really can't prove it. I can say AMD refused to support it, which I can prove easily.
Can you prove that nVidia wouldn't sabotage AMD if they had accepted the license agreement? AMD refusing to support CUDA was a smart thing to do.
Yes, Nvidia could acquire a license from AMD for Stream and use it too, and they also decided not to, instead unloading tons of money on CUDA programming. Again, these are business decisions. If you want to say CUDA is Nvidia's proprietary crap, then Stream is AMD's proprietary crap that is way behind Nvidia's proprietary crap. However you like to say it, nothing is free.
So, you are saying that AMD not using CUDA was evil, but nVidia not using Stream was good? Please elaborate on how nVidia's decision was wise while AMD's decision, which is exactly the same, was unwise.
Other than OpenGL, which all video card vendors support, there is also OpenCL, which all CPU/GPU vendors support. However, performance and complexity are the keys. Programmers know OpenGL+OpenCL runs on all platforms, but they only offer low-level APIs, meaning that I will need to build everything from scratch.
Actually, OpenGL offers very complex capabilities and features that rival those of DirectX. I can't speak specifically to how OpenCL compares to C for CUDA, but from what I've seen of C for CUDA, I'm going to have to say that it is going to be quite similar. CUDA doesn't really offer a whole lot of advanced features; it is really pretty basic in the access that it gives to the GPU hardware.
CUDA and DirectX have much better APIs. They are fast and reliable, but platform specific (don't ask me why I exclude Stream from the list).
This is entirely debatable. Many would argue that the DirectX API is prevalent not because it is better than OpenGL, but because Microsoft offered a lot of breaks and discounts to companies that used DirectX for their applications.
OpenCL is a language.
Please read the following link from AMD.
C for CUDA is a language (I was regrettably mixing up terms there).
http://en.wikipedia.org/wiki/CUDA
'C for CUDA' (C with NVIDIA extensions and certain restrictions)
DirectCompute is a language.
Direct Compute Example Code Listing
You can't compile OpenCL code with C++.
Look at the 3 examples above....
I know the meaning quite well.
Okay......
Can you prove that nVidia wouldn't sabotage AMD if they had accepted the license agreement? AMD refusing to support CUDA was a smart thing to do.
I can't prove that. I said AMD refused based on business decisions. You said it was due to a high license fee.
So, you are saying that AMD not using CUDA was evil, but nVidia not using Stream was good? Please elaborate on how nVidia's decision was wise while AMD's decision, which is exactly the same, was unwise.
I didn't say that. I said AMD had Stream and Nvidia had CUDA. I said just because Stream failed doesn't mean CUDA is bad for being proprietary. A fair competition; a technical knockout.
Actually, OpenGL offers very complex capabilities and features that rival those of DirectX. I can't speak specifically to how OpenCL compares to C for CUDA, but from what I've seen of C for CUDA, I'm going to have to say that it is going to be quite similar. CUDA doesn't really offer a whole lot of advanced features; it is really pretty basic in the access that it gives to the GPU hardware.
CUDA offers more compared to OpenCL simply because it is engineered around Nvidia's GPUs, so it does have an advantage in terms of utilization. It is, however, very difficult to use. There is a CUDA debugger which helps developers. This debugger alone trumps OpenCL, DirectCompute, and Stream by miles, IMO.
This is entirely debatable. Many would argue that the DirectX API is prevalent not because it is better than OpenGL, but because Microsoft offered a lot of breaks and discounts to companies that used DirectX for their applications.
I think you should not mix things up. CUDA is really for GPGPU computing, not for generating graphics. OpenGL and the rest of DirectX 11 have nothing to do with CUDA.
From what I've seen of DirectX and OpenGL, they are very comparable in the features they offer (OpenGL, through extensions, usually beats DirectX to the punch when it comes to new features offered). GLSL and HLSL (of which DirectCompute is really just a branch) are another story.
Please read the following link from AMD.
From the link:
Introductory Tutorial to OpenCL
OpenCL defines a C-like language for programming compute device programs. These programs are passed to the OpenCL runtime via API calls expecting values of type char *. Often, it is convenient to keep these programs in separate source files. For this and subsequent tutorials, I assume the device programs are stored in files with names of the form name_kernels.cl, where name varies, depending on the context, but the suffix _kernels.cl does not. The corresponding device programs are loaded at runtime and passed to the OpenCL API. There are many alternative approaches to this; this one is chosen for readability.
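A minimal sketch of the mechanism that tutorial describes (error checking omitted, and the no-op kernel is a made-up placeholder): the application hands raw kernel source text to the OpenCL runtime, and the installed vendor's driver compiles it for whatever hardware is present:
Code:
#include <CL/cl.h>
#include <stdio.h>

int main(void)
{
    // The "device program" really is just a C string handed over at run time.
    const char* src = "__kernel void noop(void) { }";

    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);

    // This is the step where the vendor's driver, not the application,
    // compiles the kernel source for the installed hardware.
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    printf("kernel source compiled by the driver at run time\n");

    clReleaseProgram(prog);
    clReleaseContext(ctx);
    return 0;
}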
CUDA (with compute capability 1.x) uses a recursion-free, function-pointer-free subset of the C language, plus some simple extensions. However, a single process must run spread across multiple disjoint memory spaces, unlike other C language runtime environments. Fermi GPUs now have (nearly) full support of C++.
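For comparison, here is a minimal, made-up CUDA program showing the flavor that description refers to: C plus a few extensions (__global__ and the <<<...>>> launch syntax), compiled ahead of time by nvcc rather than handed to the display driver as source:
Code:
#include <stdio.h>
#include <cuda_runtime.h>

// __global__ marks a function that runs on the GPU.
__global__ void square(float* x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] = x[i] * x[i];
}

int main(void)
{
    const int n = 8;
    float host[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float* dev;

    cudaMalloc((void**)&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    square<<<1, n>>>(dev, n);   // CUDA-only launch syntax

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    for (int i = 0; i < n; ++i)
        printf("%g ", host[i]);
    printf("\n");
    return 0;
}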
This is an example which, sadly, doesn't show the actual DirectCompute code. It shows how to load in the HLSL.
Look at the 3 examples above....
Read your own links.
I can't prove that. I said AMD refused based on business decisions. You said it was due to a high license fee.
I said the article writer was biased. You used that link in some strange way to try to make it look like AMD was stupid for not implementing CUDA.
I didn't say that. I said AMD had Stream and Nvidia had CUDA. I said just because Stream failed doesn't mean CUDA is bad for being proprietary. A fair competition; a technical knockout.
Sorry, I was reading too much into your post.
CUDA offers more compared to OpenCL simply because it is engineered around Nvidia's GPUs, so it does have an advantage in terms of utilization. It is, however, very difficult to use. There is a CUDA debugger which helps developers. This debugger alone trumps OpenCL, DirectCompute, and Stream by miles, IMO.
Now, it should be clear that AMD's hardware probably won't perform at its best with CUDA should they choose to implement it, but that is very different from saying Nvidia is going to sabotage AMD. It should be clear that CUDA+Nvidia hardware beats Stream+AMD, but that doesn't mean using a common extension will be the best answer to the problem. For example:
CUDA+Nvidia: 100% efficiency
Stream+AMD: 100% efficiency
DirectCompute: 50% efficiency on both vendors, 70% on Intel.
OpenCL: 45% efficiency on all vendors.
I don't know how you think you are deriving these numbers. As for the sabotage, while it is possible that nVidia wouldn't do it, it isn't guaranteed. AMD would be foolish to invest large amounts of money into a direct competitor's technology.
How people came to the conclusion that CUDA is not open and that it is bad is beyond me.
I never said anything about it being open. I said it was limited to nVidia-only platforms. How is that beyond you? It is a true statement.
I think you should not mix things up. CUDA is really for GPGPU computing, not for generating graphics. OpenGL and the rest of DirectX 11 have nothing to do with CUDA.
I never said CUDA was for generating graphics; you were the one that kept trying to pull in DirectX and OpenGL. I don't know if you somehow have short-term memory loss, but it was you that kept trying to compare OpenCL, CUDA, and OpenGL.
Is Intel's new Sandy Bridge decoder really fast enough to threaten CUDA or OpenCL?
If that article is accurate (which I suspect it is), then CUDA is in serious trouble. SB is going to wind up in a large majority of computers sold. Whatever standard it's using to encode video will wind up becoming the GPGPU standard, IMO.
EDIT: Nvm, I found this--->http://www.techradar.com/news/compu...ntel-chips-encode-hd-video-in-seconds--716246 (Apparently specific modifications to Windows 7 and third party software need to be made to gain the speed-up benefits)