Is ATI clueless about GPU computing?


Creig

Diamond Member
Oct 9, 1999
Both sides have their issues, but in my experience the ATi driver is far, far behind Nvidia's.
Well, you are one of the exceptions, then, because almost everybody else will disagree and say that ATi and Nvidia are on roughly equal footing, driver-wise.

I've actually downgraded from an HD 4890 to an 8800GT and haven't looked back.
That's ironic, because I replaced an 8800GT with a 4890 and haven't looked back, either. In fact, my overall experience with the 4890 has been better than the 8800GT because I no longer have to put up with "Nvlddmkm has stopped responding" errors.

Due to the driver, I actually display more FPS from the older, less powerful 8800GT than the 4890. I will actually repeat that since I think it's kinda funny and might make someone mad, even though it's 100% true and I have the benchmarks to back it up.

Under my settings, an 8800GT 512 displays more FPS than ANY ATi card.
Ben90 said:
The maximum framerate an ATi card will allow on my monitor is 120Hz. By changing things like the front porch, I am able to "overclock" my monitor up to 140Hz using Nvidia drivers, which is not possible with ATi.

While the 4890 undoubtedly renders at a higher framerate, the 8800GT is ultimately displaying more frames per second.
And this is what you base "I actually display more FPS with my 8800GT" on? Can you visually tell the difference between 120Hz and 140Hz?

I notice you didn't mention the fact that when you dial the graphics settings up a few notches, the 4890 will display more FPS than the 8800GT. Or do you prefer to play games at low enough settings to get 140+ FPS all the time?

I think most people would call that a "non-issue".
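
For anyone wondering what the front-porch trick actually does: the refresh rate is just the pixel clock divided by the total horizontal and vertical timings (active plus blanking), so trimming the porches raises the refresh rate without needing a higher pixel clock. A rough sketch of that arithmetic, with made-up timing numbers rather than Ben90's actual modeline:

Code:
// Rough sketch of "overclocking" a monitor by shrinking the blanking intervals.
// refresh rate = pixel clock / (total horizontal pixels * total vertical lines)
// All timing numbers below are illustrative, not taken from any real modeline.
#include <cstdio>

int main()
{
    const double pixelClockHz = 290e6;  // assume the link's pixel clock is the fixed limit

    // 1680x1050 with standard blanking vs. trimmed (reduced) blanking.
    const double hTotalStd = 2240, vTotalStd = 1089;
    const double hTotalRb  = 1840, vTotalRb  = 1080;

    std::printf("standard blanking: %.1f Hz\n", pixelClockHz / (hTotalStd * vTotalStd));  // ~118.9 Hz
    std::printf("reduced blanking:  %.1f Hz\n", pixelClockHz / (hTotalRb  * vTotalRb));   // ~145.9 Hz
    return 0;
}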
 

PingviN

Golden Member
Nov 3, 2009
This part I would agree with... but the thing is that AMD itself might not be totally convinced of that.
AMD marketing has been boasting GPGPU features of their GPUs for quite a while now, and with their upcoming Fusion products, it would appear that they again want to market the GPGPU-side of things.

Of course they will market it. If they've got any GPGPU functionality, they will market it. That's how it works.
 

Scali

Banned
Dec 3, 2004
Of course they will market it. If they've got any GPGPU functionality, they will market it. That's how it works.

Problem is that they don't.
They have an OpenCL driver which is only included with the SDK, not available to regular end-users.
And the drivers offered to end-users are not tested with the SDK, as mentioned in the release notes.

Aside from that, they also promoted Havok and Bullet with OpenCL, yet both are vapourware.

I think at the very least, if you market OpenCL capabilities, you:
1) Supply an OpenCL runtime to end-users
2) Include testing of this OpenCL runtime in your driver validation procedure.
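
Point 1 is trivially testable, too. The sketch below (assuming the stock Khronos CL headers and an ICD loader are available, i.e. exactly the runtime that, per the release notes, isn't shipped outside the SDK) is roughly the first thing any OpenCL-accelerated application has to do before the advertised GPGPU features can run at all:

Code:
// Minimal sketch: the first check any OpenCL-accelerated app has to pass.
// Assumes the Khronos CL headers and an ICD loader (OpenCL.lib/OpenCL.dll) at build time;
// at run time it only succeeds if a vendor OpenCL runtime is actually installed.
#include <CL/cl.h>
#include <cstdio>

int main()
{
    cl_uint numPlatforms = 0;
    if (clGetPlatformIDs(0, NULL, &numPlatforms) != CL_SUCCESS || numPlatforms == 0) {
        std::printf("No OpenCL runtime found - advertised GPGPU features cannot run here.\n");
        return 1;
    }

    cl_platform_id platform;
    char name[256] = {0};
    clGetPlatformIDs(1, &platform, NULL);
    clGetPlatformInfo(platform, CL_PLATFORM_NAME, sizeof(name), name, NULL);
    std::printf("Found OpenCL platform: %s\n", name);
    return 0;
}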

In my country, they could probably be sued for false advertising over this.
In my country, AMD was also sued because they advertised that their Athlon64 processors would prevent viruses. A rather 'creative' interpretation of their No-Execute bit. Needless to say, AMD lost, and the advertisement had to be pulled from the airwaves.
 

PingviN

Golden Member
Nov 3, 2009
Problem is that they don't.
They have an OpenCL driver which is only included with the SDK, not available to regular end-users.
And the drivers offered to end-users are not tested with the SDK, as mentioned in the release notes.

Aside from that, they also promoted Havok and Bullet with OpenCL, yet both are vapourware.

I think at the very least, if you market OpenCL capabilities, you:
1) Supply an OpenCL runtime to end-users
2) Include testing of this OpenCL runtime in your driver validation procedure.

In my country, they could probably be sued for false advertising over this.
In my country, AMD was also sued because they advertised that their Athlon64 processors would prevent viruses. A rather 'creative' interpretation of their No-Execute bit. Needless to say, AMD lost, and the advertisement had to be pulled from the airwaves.

So go sue them. I don't work for AMD; I don't really care. But what you think they should have in order to market it is not what they need to have in order to market it.
 

Hard Ball

Senior member
Jul 3, 2005
This part I would agree with... but the thing is that AMD itself might not be totally convinced of that.
AMD marketing has been boasting GPGPU features of their GPUs for quite a while now, and with their upcoming Fusion products, it would appear that they again want to market the GPGPU-side of things.

I don't really see them catching up anytime soon. In the areas that I deal with on a regular basis, AI and microarchitecture, NV's GPGPU is a well-known and well-implemented solution. There is a popular course here on GPGPU that specifically targets the CUDA programming environment:
http://courses.ece.illinois.edu/ece498/al/

Meanwhile, outside of those who directly study GPGPU and parallel architecture, almost no one has really heard of ATI Stream at all, let alone used it.

In order to make their efforts viable, they need to sink large sums into collaborative projects and to provide extensive engineering resources when necessary. But they have done little to none of that, at least as far as I can see. I'm only familiar with the academic side of things, but the funding from the two companies must differ by an order of magnitude or more, with no sign of that changing any time soon.
 

Dark Shroud

Golden Member
Mar 26, 2010
Just to throw this out there, the next version of Adobe is supposed to be switching from Cuda to OpenCL.

Also, from my limited knowledge, the new "Intel HD" video core doesn't support OpenCL but does support DirectCompute, which would make sense with Microsoft leaning towards DirectCompute as well.

So AMD/ATI taking a moment to see where Microsoft & Intel are moving makes perfect sense as they work on Southern Islands. Meanwhile, Nvidia has to deal with all the issues with Fermi.
 

Scali

Banned
Dec 3, 2004
Just to throw this out there, the next version of Adobe is supposed to be switching from Cuda to OpenCL.

Which still wouldn't help AMD, since they don't support OpenCL yet.
Besides, I wonder if this really is a switch, or if they're going to ADD OpenCL, but have Cuda as preferred API for nVidia.
I don't really see why they should chuck the Cuda code they already have. Especially when it's probably going to run faster on nVidia hardware than OpenCL anyway. That'd just be a waste of the resources they have invested in it.

Also, from my limited knowledge, the new "Intel HD" video core doesn't support OpenCL but does support DirectCompute, which would make sense with Microsoft leaning towards DirectCompute as well.

As far as I know, Intel supports neither at this point. Intel has always voiced its support for OpenCL, though.
If their hardware does DirectCompute, then it will be capable of OpenCL as well, as the two are very close together in terms of features. OpenCL should just be a driver update away.
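
For what it's worth, "does DirectCompute" is easy to pin down on the Windows side. A minimal sketch, assuming a box with the D3D11 runtime installed: feature level 11_0 guarantees cs_5_0, and on DX10-class hardware the optional-feature query below tells you whether the driver exposes cs_4_x compute shaders:

Code:
// Minimal sketch: does this GPU/driver expose DirectCompute (compute shaders)?
// Feature level 11_0 implies cs_5_0; on 10.x-class hardware the driver may still
// expose cs_4_x, which the optional-feature query below reports.
#include <d3d11.h>
#include <cstdio>
#pragma comment(lib, "d3d11.lib")  // MSVC: link the D3D11 import library

int main()
{
    ID3D11Device* device = NULL;
    D3D_FEATURE_LEVEL level;
    if (FAILED(D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0,
                                 NULL, 0, D3D11_SDK_VERSION,
                                 &device, &level, NULL))) {
        std::printf("No D3D11 device available.\n");
        return 1;
    }

    if (level >= D3D_FEATURE_LEVEL_11_0) {
        std::printf("cs_5_0 supported (DX11-class hardware).\n");
    } else {
        D3D11_FEATURE_DATA_D3D10_X_HARDWARE_OPTIONS opts = {};
        device->CheckFeatureSupport(D3D11_FEATURE_D3D10_X_HARDWARE_OPTIONS,
                                    &opts, sizeof(opts));
        std::printf("cs_4_x compute shaders: %s\n",
                    opts.ComputeShaders_Plus_RawAndStructuredBuffers_Via_Shader_4_x ? "yes" : "no");
    }

    device->Release();
    return 0;
}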

So AMD/ATI taking a moment to see where Microsoft & Intel are moving makes perfect sense as they work on Southern Islands. Meanwhile, Nvidia has to deal with all the issues with Fermi.

Doesn't make sense to me, not when the competition is running away with their own GPGPU solution and has the support of large and influential companies such as Adobe.
 

lopri

Elite Member
Jul 27, 2002
Well, this thread has already turned into the usual 'fest, despite Modelworks' sincere concerns about the current state of AMD's GPGPU support. I suspect most of us (including myself) do not have his/her expertise in the field (e.g. the links he provided) to tell exactly what's what.

That said, it doesn't surprise me one bit that NV is leading in GPGPU land. With PC gaming declining and CPU-GPU hybrid parts from AMD/Intel imminent, it is NV that needs to tackle a new market for its continued growth. And while NV's efforts do benefit those who use CUDA, no one (Intel, AMD, HP, IBM, Apple, etc.) wants to give NV a leg up on themselves for an unknown future. Whether NV will succeed eventually, I don't know. But for now, the answer is simple for consumers: if your work and apps can benefit from CUDA, vote with your wallet.

It is their corporate culture. You see it with AMD, period. Compiler support for AMD CPUs? They just sit back and hope the DOJ will force Intel to optimize its own compiler for AMD's chips.

ATI isn't the outlier; it's the whole damn company that has always had this weird "build it and they will come" 'tude towards compiler/software/developer support (and marketing, for that matter).

I appreciate that you're being quite candid here instead of bringing up some shareholder theories, despite being off-topic. Intel's actions and inactions = legitimate and smart choices based on fab economics and shareholder rationales. AMD's actions and inactions = corporate "culture" (for the record, I don't know what exactly that "culture" might be - I always thought corporations act on behalf of shareholders, i.e. to maximize profits).

Just to throw this out there, the next version of Adobe is supposed to be switching from Cuda to OpenCL.
Isn't Apple supposed to be a player in this field as well? I've recently started my way to the dark side (that is, Apple) thanks to some Apple stuff I've gotten for free, so I did kill some time @Apple.com, and noticed that Apple dropped NV from the Mac Pro as well as the iMac. As a matter of fact, it's dropping NV from its mobile products, too. From what I can see, only C2D-based systems are equipped with NV graphics, and those are being replaced by i3/i5/i7-based systems one by one. By the fall refresh, NV might not exist in Apple's entire product line-up. (Edit: I was wrong - Apple indeed ships the MacBook Pro with NV graphics, and those are Nehalem-based ones.)

There. I had to do some off-topic balancing. ^_^

On topic: I think the issues the OP raises are real and legitimate. But I honestly think not many people here (again, including myself) understand the pain s/he experiences, thus most of us can only repeat the generalized AMD v. NV arguments (and even some AMD v. Intel). Hopefully s/he finds a more meaningful way to influence AMD on its lack of GPGPU support.
 

Scali

Banned
Dec 3, 2004
I appreciate that you're being quite candid here instead of bringing up some shareholder theories, despite being off-topic. Intel's actions and inactions = legitimate and smart choices based on fab economics and shareholder rationales. AMD's actions and inactions = corporate "culture" (for the record, I don't know what exactly that "culture" might be - I always thought corporations act on behalf of shareholders, i.e. to maximize profits).

Intel's side is corporate culture just as much as AMD's side.
It's a result of how a company is structured, and what strategies they choose to follow based on this structure.
AMD's software department simply isn't as large as that of Intel or nVidia.
That is a choice that AMD once made, and one that they now have to work with, which results in different choices than Intel/nVidia make in similar situations (but still aimed at maximizing profit, based on their capabilities).

What we are looking for here is a change in culture: AMD expanding their software department, and more actively supporting their own CPUs and GPUs.
 

Modelworks

Lifer
Feb 22, 2007
What we are looking for here is a change in culture: AMD expanding their software department, and more actively supporting their own CPUs and GPUs.

This is what I do not get. Here you have a company worth billions, yet they cannot hire 10 guys and dedicate them to developer support for their cards?

While many people do not buy their cards for GPU computing, you have to wonder how much value the press has when you see articles that say xxx app has CUDA support but ATI is nowhere to be found. The more products a company's hardware is promoted for, the more consumers want to purchase that hardware.

I also have to ask ATI what they intend to do here, because Autodesk has, over the past couple of years, begun selling Mac versions of their software. Software that could use CUDA, but Apple has said they are going to go with ATI. Maybe ATI is going to support the Mac in this area but not the PC.
 

Scali

Banned
Dec 3, 2004
This is what I do not get. Here you have a company worth billions, yet they cannot hire 10 guys and dedicate them to developer support for their cards?

Well, perhaps they did hire new guys, but they are facing the mythical man-month problem...
That's the problem with culture: you can't just change it overnight.
nVidia obviously didn't start their software department with Cuda... from day one, nVidia has always valued software/driver support greatly. They have always had the best OpenGL support of all consumer cards, they have developed the most OpenGL extensions, and they have put out lots of presentations, samples, SDKs, etc. nVidia worked closely with Microsoft to develop HLSL for DirectX, and made their own Cg language based on that, which was compatible with both D3D and OpenGL (long before OpenGL standardized GLSL).
They have a lot of experience in developing languages, compilers etc (and the management that goes with such projects).

It will probably take ATi years to change this culture into something more similar to Intel's and nVidia's.
 

Martimus

Diamond Member
Apr 24, 2007
This is what I do not get. Here you have a company worth billions, yet they cannot hire 10 guys and dedicate them to developer support for their cards?

While many people do not buy their cards for GPU computing, you have to wonder how much value the press has when you see articles that say xxx app has CUDA support but ATI is nowhere to be found. The more products a company's hardware is promoted for, the more consumers want to purchase that hardware.

I also have to ask ATI what they intend to do here, because Autodesk has, over the past couple of years, begun selling Mac versions of their software. Software that could use CUDA, but Apple has said they are going to go with ATI. Maybe ATI is going to support the Mac in this area but not the PC.

I agree with you that this is a strange way of doing things. Hopefully the hiring of the nVidia VP to head up customer software support is a sign of AMD/ATI trying to change their culture in this area.

I can say that I have hope, after seeing a company that I work with go through a similar change, where I can actually see tangible differences, even though they had a long-standing culture of doing things a different way. Sure, they aren't particularly great at the new method, and plenty of people go back to the way they are used to when things get tough, but I can really see a slow change in the right direction for them.
 

akugami

Diamond Member
Feb 14, 2005
Both sides have their issues, but in my experience the ATi driver is far, far behind Nvidia's.

nVidia has had just as many issues as ATI over roughly the last 5 years. ATI's drivers are not far, far behind nVidia's.

I've actually downgraded from an HD 4890 to an 8800GT and haven't looked back. Due to the driver, I actually display more FPS from the older, less powerful 8800GT than the 4890. I will actually repeat that since I think it's kinda funny and might make someone mad, even though it's 100% true and I have the benchmarks to back it up.

Under my settings, an 8800GT 512 displays more FPS than ANY ATi card.

Now you've made me very curious. What are your hardware specs? Specifically, what monitor are you using? Your other post mentions that the 4890 was only displaying 120Hz while the 8800 was able to hit 140Hz. I'm asking because the overwhelming majority of consumer LCDs can't do 120Hz, much less 140Hz.

Many LCDs can do 75Hz in an overdriven state, but they're not officially specced for it. I believe it wasn't until nVidia's push for stereoscopic 3D gaming that there was any real push by the LCD manufacturers to get 120Hz monitors out. But they're still in the minority.

More likely than not, the game you are playing is bottlenecked by the CPU, and any modern mid-range or higher GPU can easily handle it.

You're also the first person I've seen who downgraded to an 8800 because you're saying it was outperforming the 4890. I actually upgraded from an 8800GTS to a 4870 and found the performance boost quite satisfactory.

Sorry, but a GRAPHICS card company that has problems with a 7-year-old standard (HDMI, specifically the EDID portion) SUCKS in my book.
Nothing inflammatory about that.

We'll just ignore some of the issues nVidia has had with their cards over the years, such as notoriously bad display drivers when Vista first came out. The state of nVidia's Vista drivers was actually worse than ATI's, and I was using an 8800 at the time. We'll also ignore the features nVidia promised on their cards that were never delivered (granted, ATI is no better in this regard). We'll also ignore the video-card-killing drivers. ATI sucks. No question about it.
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
This is what I do not get. Here you have a company worth billions yet they can not hire 10 guys and dedicate them to developer support of their cards ?
I think you're heading towards the right conclusion, but not necessarily the right train of thought.

It seems like AMD was pushing GPGPU much harder around the Radeon 5000 series launch and then suddenly got very quiet. Certainly they could be putting more development into the software side, but after the 5000 series launch, why would they? TSMC struggled to get chips out for months and AMD's primary competition (NVIDIA) was months late. As far as I know, AMD is still selling Cypress chips as fast as they can make them.

If you want a serious GPGPU market, that would mean allocating chips from Radeon cards to FireStream cards - FireStream cards are more profitable, but how much money would you have to sink into the software side to match this serious effort? NVIDIA believes the GPGPU market will be huge, but even they will admit that right now it's tiny. AMD did finally launch FireStream cards, but that was very recently: late June, to be precise (and on a quick search I can't find any).

My point being that my impression is that AMD has deprioritized GPGPU for the moment. It's definitely lazy, and in the long run it may prove to be a bad idea, but short-term the company doesn't have anything to gain when they're already selling chips as fast as they can make them. Thus my impression is that the company is effectively deciding to sit out the GPGPU market for a year (or more) and wait to see how things go. Certainly it's still small enough that there's still time to jump in later.

So to answer the original question: no, they're not clueless. They're just uninterested in the GPGPU market at the moment. They apparently have other things they'd rather be doing.

Isn't Apple supposed to be a player in this field as well? I've recently started my way to the dark side (that is, Apple) thanks to some Apple stuff I've gotten for free, so I did kill some time @Apple.com, and noticed that Apple dropped NV from the Mac Pro as well as the iMac. As a matter of fact, it's dropping NV from its mobile products, too. From what I can see, only C2D-based systems are equipped with NV graphics, and those are being replaced by i3/i5/i7-based systems one by one. By the fall refresh, NV might not exist in Apple's entire product line-up. (Edit: I was wrong - Apple indeed ships the MacBook Pro with NV graphics, and those are Nehalem-based ones.)
A small note from someone who has watched Apple for many, many years: don't pay attention to their hardware choices, watch what they're doing on the software side. Apple strives to be "nimble" - their computer plans never rely on a single company. They've switched back and forth on GPUs so many times in the last 10 years that I've lost count. They'll use whoever can supply the best product in the volume they need, and things like OpenCL are their insurance in order to be able to stay nimble.
 

akugami

Diamond Member
Feb 14, 2005
What features would those be?

The 6800 series promised PureVideo support for features such as WMV acceleration, which was not, and AFAIK still is not, delivered on early 6800 series cards. Never mind that the drivers to enable PureVideo were 8 months late.
 

Scali

Banned
Dec 3, 2004
My point being that my impression is that AMD has deprioritized GPGPU for the moment. It's definitely lazy, and in the long run it may prove to be a bad idea, but short-term the company doesn't have anything to gain when they're already selling chips as fast as they can make them. Thus my impression is that the company is effectively deciding to sit out the GPGPU market for a year (or more) and wait to see how things go. Certainly it's still small enough that there's still time to jump in later.

I think that's being unfair to their customers.
They lured their customers into buying Radeons, claiming that they'd support OpenCL and come up with physics acceleration in games and whatnot, and there is absolutely NO effort from AMD to deliver this to the people who already trusted them with their money.

If I had known beforehand, I would never have bought my Radeon 5770. I was developing with Cuda and OpenCL when my 8800GTS320 died. I figured I'd get a Radeon, and concentrate only on OpenCL, and get DX11/DirectCompute to boot... but it was all a big deception. Had I known beforehand, I would have just bought another DX10-class GeForce, with proper OpenCL.
Obviously I ordered the GTX460 as quickly as I could... away with that crappy Radeon 5770.
 

evolucion8

Platinum Member
Jun 17, 2005
nVidia worked closely with Microsoft to develop HLSL for DirectX, and made their own Cg language based on that, which was compatible with both D3D and OpenGL (long before OpenGL standardized GLSL).
They have a lot of experience in developing languages, compilers etc (and the management that goes with such projects).

You seem to forget that the Cg language was created to optimize their hardware's performance. The Cg language was introduced in 2003 and it was the only way to make the GeForce FX run decently: lowering precision, using short shaders, reordering the math in their shaders, etc.

DirectX HLSL is a cooperative work from nVidia, Microsoft and ATi, not nVidia alone. I always doubted their compiler performance (GeForce FX, anyone?); they might be good for GPGPU performance, but I doubt that they will go totally superscalar in a new architecture because of that. (GF104 is a hybrid.)

What features would those be?

Their missing MUL in the 8800/9800 series, their questionable VC-1 hardware acceleration on the 8800/9800/GTS250 (Which was none), and only half-implemented acceleration on the GT200 series, while AMD had enjoyed full acceleration for more than 3 years. Their commitment to adopt the latest standards with decent performance (Their DX10.1 adoption is far from spectacular).

AMD is no saint either; I waited too long for GPU-accelerated AVIVO and am still waiting for the physics that were promised and tested way back in the X1K era.
 

Scali

Banned
Dec 3, 2004
You seem to forget that the Cg language was created to optimize their hardware's performance. The Cg language was introduced in 2003 and it was the only way to make the GeForce FX run decently: lowering precision, using short shaders, reordering the math in their shaders, etc.

Nonsense. You can use half precision in regular HLSL as well. And you can use full precision on Cg.

DirectX HLSL is a cooperative work from nVidia, Microsoft and ATi, not nVidia alone.

Microsoft and nVidia, as I said. Not ATi.
http://en.wikipedia.org/wiki/Cg_(programming_language)
Cg or C for Graphics is a high-level shading language developed by Nvidia in close collaboration with Microsoft[1][2] for programming vertex and pixel shaders. It is very similar to Microsoft's HLSL.
http://en.wikipedia.org/wiki/HLSL
The High Level Shader Language or High Level Shading Language (HLSL) is a proprietary shading language developed by Microsoft for use with the Microsoft Direct3D API. It is analogous to the GLSL shading language used with the OpenGL standard. It is the same as the NVIDIA Cg shading language, as it was developed alongside it.[1]

ATi just sat on the sideline as usual, letting Microsoft and nVidia do the hard work.

I always doubted their compiler performance (GeForce FX, anyone?); they might be good for GPGPU performance, but I doubt that they will go totally superscalar in a new architecture because of that. (GF104 is a hybrid.)

Their missing MUL in the 8800/9800 series

Missing MUL?
What?

their questionable VC-1 hardware acceleration on the 8800/9800/GTS250 (Which was none)

Did they advertise it as a feature? Nope.
Besides, it was only G80, not G92.

Their commitment to adopt the latest standards with decent performance (Their DX10.1 adoption is far from spectacular).

Did they ever advertise DX10.1 support? Nope, on the contrary.

We're talking about things they ADVERTISED, but did not deliver. Not things they didn't deliver, but never claimed to support anyway. Try to keep up, AMD fan.
 

evolucion8

Platinum Member
Jun 17, 2005
Nonsense. You can use half precision in regular HLSL as well. And you can use full precision on Cg.

No AMD hardware allows half precision in regular HLSL, strike 1

ATi just sat on the sideline as usual, letting Microsoft and nVidia do the hard work.

Another troll post with no evidence; that's just your point of view. Thanks to AMD's involvement in DX11, tessellation, BC4, BC6 and Gather4 are now a reality. What did nVidia implement in DX11? Strike 2.

Missing MUL?
What?

Just because you don't know squat about it doesn't mean you can dismiss it.

http://www.beyond3d.com/content/reviews/1/11

"NVIDIA's documentation for G80 states that each SP is able to dual-issue a scalar MADD and MUL instruction per cycle, and retire the results from each once per cycle, for the completing instruction coming out of the end. The thing is, we couldn't find the MUL, and we know another Belgian graphics analyst that's having the same problem. No matter the dependant instruction window in the shader, the peak -- and publically quoted by NVIDIA at Editor's Day -- MUL issue rate never appears during general shading.

We can push almost every other instruction through the hardware at close to peak rates, with minor bubbles or inefficiencies here and there, but dual issuing that MUL is proving difficult. It turns out that the MUL isn't part of the SP ALU, rather it's serial to the interpolator/SF hardware and comes after it when executing, leaving it (currently) for attribute interpolation and perspective correction. Since RCP was a free op on G7x, you got 1/w for nothing on that architecture for helping setup texture coordinates while shading. It's not free any more, so it's calculated at the beginning of every shader on G80 and stored in a register instead. It's not outwith the bounds of possibility that the MUL will be exposed for general shading cases in future driver revisions, depending on instruction mix and particular shaders."

Strike 3 and you're out!!
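
Just to put a number on it (assuming the commonly quoted 8800 GTX figures: 128 scalar SPs at a 1.35 GHz shader clock, a MADD counted as 2 FLOPs and the co-issued MUL as 1 more), that "missing" MUL is the gap between the quoted peak and what general shading can actually reach:

Code:
// Back-of-the-envelope: why the missing MUL matters for G80's quoted peak.
// Figures assume an 8800 GTX: 128 scalar SPs at a 1.35 GHz shader clock.
#include <cstdio>

int main()
{
    const double sps = 128.0, shaderClockGHz = 1.35;
    const double peakWithMul  = sps * shaderClockGHz * 3.0;  // MADD (2 FLOPs) + MUL (1 FLOP)
    const double peakMaddOnly = sps * shaderClockGHz * 2.0;  // MADD only

    std::printf("quoted peak (MADD+MUL): %.1f GFLOPS\n", peakWithMul);   // 518.4
    std::printf("MADD-only peak:         %.1f GFLOPS\n", peakMaddOnly);  // 345.6
    return 0;
}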

Try to keep up, AMD fan.

Yeah, nScali, I will keep up.
 

Scali

Banned
Dec 3, 2004
No AMD hardware allows half precision in regular HLSL, strike 1

So? HLSL has the 'half' keyword: http://msdn.microsoft.com/en-us/library/bb509646(VS.85).aspx
Just because AMD doesn't honour that keyword doesn't mean that HLSL doesn't support it.
FAIL.

Another troll post with no evidence; that's just your point of view. Thanks to AMD's involvement in DX11, tessellation, BC4, BC6 and Gather4 are now a reality. What did nVidia implement in DX11? Strike 2.

What does DX11 have to do with anything? HLSL was developed for DX8. ATi wasn't involved there.
FAIL.


Just because you don't know squat about it doesn't mean you can dismiss it.

Lol yea, Beyond3D... ATi shill site #1 on the web. I'm not even going to waste time debunking their lies.
You fail.
 

evolucion8

Platinum Member
Jun 17, 2005
So? HLSL has the 'half' keyword.
Just because AMD doesn't honour that keyword doesn't mean that HLSL doesn't support it.
FAIL.

They prefer to have full precision all the time for the sake of image quality.

What does DX11 have to do with anything? HLSL was developed for DX8. ATi wasn't involved there.
FAIL.

LOL, what part of DX8.1 Shader Model 1.4 can't you understand? It was made specifically for ATi hardware; no nVidia hardware supported it, and it improved performance greatly, so your accusation of AMD's lack of involvement in the DX8 spec is a blatant lie.

Lol yea, Beyond3D... ATi shill site #1 on the web. I'm not even going to waste time debunking their lies.
You fail.

Well, if it's lies, try to prove me wrong. Since you have no proof, then, sir, you are a troll.
 

Scali

Banned
Dec 3, 2004
They prefer to have full precision all the time for the sake of image quality.

So does all GeForce hardware, except for the GeForce FX series. But that was not the point.
You said:
You seem to forget that the Cg language was created to optimize their hardware's performance. The Cg language was introduced in 2003 and it was the only way to make the GeForce FX run decently: lowering precision, using short shaders, reordering the math in their shaders, etc.

To which I responded with:
Nonsense. You can use half precision in regular HLSL as well. And you can use full precision on Cg.

The 'half' keyword in HLSL was used to optimize for GeForce FX, by lowering precision. There is no need to use Cg for that. Aside from that, Cg was not developed for GeForce FX, but for GeForce 3, years before. I win, you lose.

LOL, what part of DX8.1 Shader Model 1.4 can't you understand? It was made specifically for ATi hardware; no nVidia hardware supported it, and it improved performance greatly, so your accusation of AMD's lack of involvement in the DX8 spec is a blatant lie.

HLSL already existed by that time; it was developed for DX8, not DX8.1.
HLSL was a cooperation between Microsoft and nVidia, period. I win, you lose.

Well, if it's lies, try to prove me wrong. Since you have no proof, then, sir, you are a troll.

Sounds like you're the one doing the trolling here. You've posted nothing but nonsense regarding HLSL and Cg.
 