ATi's stream processors are set up in clusters of five, which means that each cluster, if it's properly fed, can process up to five instructions per clock in the best case. In the worst case it will process only one instruction per cluster per clock. But worst-case scenarios are rare because of the kind of work that is done at the rendering level.
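To make that concrete, here's a toy model (my own sketch, not real hardware behavior): each clock, a 5-wide cluster issues one bundle of however many independent instructions the compiler managed to pack together, from five in the best case down to one for a fully serial dependency chain.

```python
# Toy model of a 5-wide VLIW cluster (illustration only, not real
# hardware): each clock the cluster issues one bundle of up to five
# independent instructions packed by the compiler.

def throughput(bundles):
    """bundles: independent ops packed per clock (1..5 each).
    Returns average instructions executed per clock."""
    return sum(bundles) / len(bundles)

well_packed = [5, 4, 5, 3, 5]    # hypothetical, mostly full bundles
serial      = [1, 1, 1, 1, 1]    # fully dependent chain: one op/clock

print(throughput(well_packed))   # 4.4 instructions per clock
print(throughput(serial))        # 1.0 instructions per clock
```

Shader code tends to sit closer to the first case, which is why the worst case rarely shows up in games.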
Compare that with the HD 3870 (setting aside anti-aliasing performance): it has 320 stream processors on paper, but in reality they are 64 superscalar processors. In the worst case it would run at half the speed of the 8800 GTX, and it was never that slow; in the best case it would outperform it, which happened once in a blue moon. In practice the HD 3870 behaved like a 96-stream-processor design, since it was close to the 9600 GSO in terms of performance. The same can be said of the HD 4850: its 160 superscalar processors aren't much faster than the 128 stream processors of the GTS 250.
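The 96-shader figure implies an average issue rate, which a quick back-of-envelope check makes explicit (very rough: it ignores the shader-clock differences that favor the G80/G92 parts):

```python
# Back-of-envelope check of the "runs like 96 scalar shaders" claim.
# Ignores shader clock differences between the architectures, so
# treat these numbers as rough illustration only.

alus = 320                      # HD 3870 stream processors on paper
width = 5                       # processors per superscalar cluster
clusters = alus // width        # independent clusters

effective_scalar = 96           # inferred from ~9600 GSO-level performance
avg_issue = effective_scalar / clusters

print(clusters)                 # 64
print(avg_issue)                # 1.5 of 5 slots filled on average
```

In other words, the inferred average is about 1.5 of the 5 slots per cluster doing useful work, which is consistent with the "worst case is rare, best case is rarer" picture above.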
Considering that the HD 4870 had optimization tricks that weren't present on the HD 3870, and that those are now absent from the HD 5870, performance gains at the driver level will only come from maximizing execution-engine utilization, not from the kind of code-efficiency optimizations that were still possible with the HD 4x00 series. So I doubt a miracle driver will boost the HD 5x00 series enough to outperform Fermi, if Fermi turns out to be faster.
I agree with you 100 percent, evolucion. I wasn't claiming that ATi can somehow catch up if Fermi is indeed faster, but the excuses will be made as a marketing move. And while those tricks may be gone now, efficiency is still a problem. Isn't the goal of hardware design to allow better abstraction? I think it's quite sad that ATi has to optimize so much more frequently. So what if nVidia's cards win on 80 percent of "useless" games; all I'm saying is that people should open their eyes and realize that the card isn't as general-purpose.
I'm not using "general purpose" to defend stuff like Folding@home or any junk like that, but the future of parallelization favors architectures like nVidia's. You can send jobs to any of the shaders, since they're all equally capable, which allows a lot more work to get done. Imagine the overhead you would incur by continually having your superscalar shader delegate instructions to yet another set of worker shaders. It's a bit messy.
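The "any job to any shader" property is what makes scheduling simple. A toy greedy dispatcher (my sketch, nothing like actual driver code) shows it: with identical units there is no grouping or delegation step, you just hand each pending job to whichever unit frees up first.

```python
# Toy dispatch model (illustration only, not how a real GPU driver
# works): independent jobs scheduled greedily onto identical units.

import heapq

def finish_time(jobs, n_units):
    """jobs: list of per-job cycle costs. Greedily dispatches each job
    to the earliest-free of n identical units; returns the cycle at
    which the last job completes."""
    free_at = [0] * n_units          # min-heap of unit free times
    heapq.heapify(free_at)
    for cost in jobs:
        t = heapq.heappop(free_at)   # earliest-free unit takes the job
        heapq.heappush(free_at, t + cost)
    return max(free_at)

# Ten mixed-size jobs over four uniform units:
print(finish_time([3, 1, 4, 1, 5, 2, 2, 3, 1, 2], 4))   # 7 clocks
```

With clustered units you'd first have to group compatible work per cluster before anything runs, and badly grouped work wastes slots; that's the extra delegation overhead I mean.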
People can buy ATi cards all they want, and I hope they do, for the future of competition. What I want people to start demanding from ATi is an architecture that challenges the industry and encourages adoption. ATi's setup isn't doing that. nVidia's cards are getting adopted like crazy in our department, because for our very large input data sets we finally have something that can help out. Even IF nVidia loses the benchmark war, there is no doubt a clear difference in direction between the two companies: one wants to sell video cards and optimizes for one area, and the other has the primary goal of selling video cards AND ushering in parallelization.
The choice is up to you, people. I know that the price is a big thing, and we will have to see what will become of the GTX 480.
I remember that at one time, ~10 years ago, there existed a giant; I think its name was Intel. This giant was lazy, forgot its duty to the industry to push the boundaries even in the face of bleak competition, fell asleep, and got kicked in the nuts by a Rumpelstiltskin-esque midget named AMD.
What happened then? Well after a few years of brooding it looked like the Monster woke up, and that giant has yet to relinquish its duties.
We need manufacturers to push themselves, and I would say nVidia hasn't worried about ATi's offering in quite a while. It seems like their goals are very similar, but nVidia wants the crown to push itself, and the industry. I think it's a very noble thing to do.
Every series of video card they've released since the 6xxx has proposed a new way to do things. Sure the shaders are relatively the same, but that doesn't matter. It's the vision.
I hope one day ATi has this vision and markets that product in all areas of Computing Science, just as nVidia has. I know I will be happy to see that day. Right now I see one company pushing itself and getting flamed for being late with what could be a revolutionary video card.
*shrug*