Actually, I would rather AMD NOT follow Nvidia's example and try to turn a video card into something it's not. The fanboys can spew architecture BS all they want, but in the end you have an NV GPU that's a lot hotter, bigger, more expensive, and half a year late, with only a small performance advantage to show for it. As a consumer, that's exactly the opposite of what I'm looking for in a video card.
Yeah, that's easy to say now.
But allow me to point out that the GeForce 8800 was the exact same story.
That's where Cuda started: an original GeForce 8800 can run DirectCompute, OpenCL and PhysX on top of standard D3D and OpenGL workloads.
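To give a concrete idea of what that meant, here's a minimal Cuda sketch (a hypothetical vector-add example I'm writing purely for illustration, not code from any real application) of the kind of GPGPU kernel that G80-class hardware like the original 8800 could already run:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Element-wise vector addition: one GPU thread per output element.
__global__ void vecAdd(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)          // guard against threads past the end of the array
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    // Host-side input/output buffers.
    float* hA = (float*)malloc(bytes);
    float* hB = (float*)malloc(bytes);
    float* hC = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { hA[i] = (float)i; hB[i] = 2.0f * i; }

    // Device-side buffers and host-to-device copies.
    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    // Launch with 256 threads per block, enough blocks to cover n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(dA, dB, dC, n);
    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);

    printf("c[100] = %f\n", hC[100]); // expect 300.0 (100 + 200)

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB); free(hC);
    return 0;
}
```

The same programming model scales from a G80-era 8800 all the way up to Fermi, which is a big part of why so much GPGPU software standardized on Cuda before OpenCL and DirectCompute even existed.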
Compare that to the Radeon HD 2900... it was half a year late, expensive, hot, and slow, and it did NOT offer any of those extras.
The 3000 series wasn't capable of OpenCL/DirectCompute either.
The 4000 series can do OpenCL/DirectCompute, but its compute performance is quite poor compared to its nVidia counterparts.
Only the 5000 series is really a viable option... but by then pretty much all the GPGPU software had already been written for Cuda.
So it's not like nVidia's strategy is a recipe for disaster. On the contrary: their previous attempt was a resounding success, not only in graphics, but also in laying the groundwork for today's GPGPU frameworks and applications.
It took AMD 3 generations to catch up.
So, Fermi might not be as successful as the 8800 series was, but nVidia is quickly turning things around with the GF104.
I certainly hope they do NOT change their strategy. It's great that at least one company is still out there pushing boundaries and coming up with new ways of doing things (Cuda on Fermi is now WAY ahead of anything OpenCL or DirectCompute can offer, and is only going to become more of a threat to x86).