Originally posted by: Qbah
Again, the difference being that a 7-series could run the same thing as an X19xx series, more or less at the same speed. Both camps ran DX9 code fine, because DX9 was and still is an industry standard.
Originally posted by: evolucion8
At the beginning the GeForce 7 series were even slightly faster overall, but when developers pushed the DX9 envelope with lots of shaders and graphical effects, the GeForce 7 series performed much slower than the X19x0 series because of their inefficient and slow SM3.0 branching performance, and the lack of FSAA when HDR was used.
Probably correct here. The X1900 series were excellent GPUs.
Originally posted by: Scali
I think people are way overreacting to this lock-in thing. We were 'locked in' to nVidia for a LONG time with DX10 as well, because ATi not only had a huge delay in introducing their first DX10 cards (the 2900 series), they were also such poor performers that they weren't really an option for any informed buyer.
Originally posted by: evolucion8
They weren't really poor performers; they just weren't fast enough to outperform the high-end 8800GTX. The 2900 was able to keep up with the GTS 640 in most scenarios as long as you didn't turn anti-aliasing on. Pretty much like the HD 3870 is currently doing, offering great performance at high quality in most games at midrange resolutions.
Nah, they were a huge let-down, the 2900 especially. The 3xxx series offered only a small bit of relief in power consumption (which was off the charts in the 2xxx), heat, and price, but offered no relief in AA performance. ATI had been sucking wind since the X1900 series. They made an incredible comeback with the 4xxx series. Kudos to ATI for that.
Originally posted by: Keysplayr
"I can also see reasons why nVidia's architecture would run better, as OpenCL closely matches Cuda's design, and Cuda's design is based around the nVidia architecture. ATi has a completely different architecture, and has had to add local memory to the 4000-series just to get the featureset right for OpenCL. I doubt that their 'afterthought' design is anywhere near as efficient as nVidia's is."
Originally posted by: evolucion8
That local memory design is also available in the GTX series. I don't think you have enough expertise, or worked at ATi, to know whether it was an afterthought; that's just your own opinion, not an absolute truth.
No, it was a knee-jerk afterthought. Dude, anyone could see this. ATI knew they had a GPGPU, but in no way did they even approach the level of thought and R&D that Nvidia had put in. ATI was only interested in making a gaming GPU with the side benefit of being able to run a few apps as a GPGPU to make the villagers happy. Nvidia took that to the extreme and went full bore with their GPGPU architecture. It's painfully apparent that this is true. You don't need to be a rocket scientist to observe and deduce what has happened over the last two years. And that local memory you speak of has been present in Nvidia GPUs since G80, right from the start. The 2xxx and 3xxx did not have it. Afterthought.
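To make the "local memory" point concrete: in CUDA it's the on-chip __shared__ memory that G80 exposed from day one, and OpenCL's __local qualifier maps to the same hardware. Here's a rough CUDA sketch (my own illustration, not anyone's production code) of the kind of block-level reduction that lives or dies by that memory:

#include <cstdio>

// Each block sums 256 floats entirely in on-chip shared memory
// ("local memory" in OpenCL terms). Without that memory, every
// intermediate step would have to round-trip through off-chip DRAM.
__global__ void blockSum(const float* in, float* out)
{
    __shared__ float buf[256];                    // on-chip, shared per block
    unsigned int tid = threadIdx.x;
    buf[tid] = in[blockIdx.x * blockDim.x + tid];
    __syncthreads();

    // Tree reduction: all the intermediate traffic stays on-chip
    for (unsigned int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s)
            buf[tid] += buf[tid + s];
        __syncthreads();
    }

    if (tid == 0)
        out[blockIdx.x] = buf[0];                 // one DRAM write per block
}

int main()
{
    const int blocks = 64, threads = 256;
    float *in, *out;
    cudaMallocManaged(&in, blocks * threads * sizeof(float));
    cudaMallocManaged(&out, blocks * sizeof(float));
    for (int i = 0; i < blocks * threads; ++i) in[i] = 1.0f;

    blockSum<<<blocks, threads>>>(in, out);
    cudaDeviceSynchronize();
    printf("block 0 sum = %.1f\n", out[0]);       // expect 256.0

    cudaFree(in);
    cudaFree(out);
    return 0;
}

Hardware that can run that inner loop without ever leaving the chip is exactly what separates a GPGPU design planned from the start from one that bolted the feature on later.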
Originally posted by: Genx87
Once Intel gets Larrabee up and running, they will do the same on the GPU front.
Exactly. It's just going to take a while longer. A reprieve for ATI.
Originally posted by: evolucion8
For antitrust reasons, AMD can't go away even if Intel wishes it, but I find it very doubtful that a fully programmable pipeline can outperform the much faster, heavily parallel fixed-function hardware like that found in nVidia's or ATi's parts.
Agreed, but we'll have to wait and see. After all, they are Intel. Never write them off.
Originally posted by: Scali
Yea, nVidia probably knew that too, being as inviting as they were. As if they were trying to lure ATi into the trap.
You can see it now with Folding@Home. ATi has been at it for years... nVidia recently made their first Cuda-based client for Folding@Home, and it completely blows ATi away. It will be interesting to see what happens when the first OpenCL software emerges.
And how ironic it would be if Havok would run better on nVidia GPUs than on ATi's.
It all comes down to who has the better architecture suited for this purpose. As it stands, and as demonstrated over and over again with various applications, benchmarks, and demos, Nvidia has the better GPGPU architecture. I don't think this can be argued legitimately.
Originally posted by: evolucion8
Folding@Home runs slower because it isn't using the local data share in the HD 4x00 architecture, but once that's implemented, it should run as fast or even faster. Look, for example, at MilkyWay@Home:
http://www.brightsideofnews.co...s-in-milkywayhome.aspx It runs much faster than its CPU counterpart.
Quoted from evolucion8: "I don't think you have enough expertise, or worked at ATi, to know whether it was an afterthought; that's just your own opinion, not an absolute truth."
Does this mean you have the expertise, or worked at ATI, to know that F@H would run as fast or faster on ATI hardware if the local data share in the HD 4x00 were used? Well, start writing that code, because nobody else seems to want to. I'll believe it when I see it.
MW@H
We've been through this before. Of course MilkyWay@Home would run faster on an ATI GPU than a CPU. I don't even know why you're including this in our conversation here.
As we have stated before, there isn't an Nvidia CUDA client for MW@H, so there's no direct GPGPU-to-GPGPU comparison. By your logic, an 8600GT could probably wipe the floor with that CPU in your MW@H benchmark link, and perhaps even take on the HD 4x00.
Originally posted by: evolucion8
Guys, farewell. I will be on annual military training for 15 days, so I won't be able to post from today until the end of May. Have a good time and remember, competition is always good!!
Good Luck, be safe. See you in 16.