Stuff like Dell will help a little bit.
But the only people who have the ability to put pressure on ATI and Nvidia are going to be users.
One thing to keep in mind is that the 'users' ATI and Nvidia care most about are Unix graphical workstations. Linux effectively replaced SGI a while ago and is currently the most popular OS for high-end video and 3D graphics work. I am talking about Hollywood studios and scientific workstations. These are people that spend a LOT of money on hardware.
If it wasn't for this market then there would be no drivers coming out of Nvidia or ATI, proprietary or not. Consumer cards are just an afterthought, with Nvidia being a bit better because I think they have a more standardized hardware interface for producing 'unified drivers'.
In the future I see things changing. The 'hardware acceleration' provided by video hardware... well, isn't. Notice the biggest changes to come to graphics are things like programmable shader languages. You program in a special language, compile it with a GLSL compiler, and then run it on the GPU.
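To make that concrete, here is a rough sketch of what 'compile it and run it on the GPU' looks like from the application side. It's plain C against the standard OpenGL 2.0 shader API, the fragment shader is a made-up trivial one, and it assumes a GL context is already current with the entry points resolved (e.g. through GLEW):

#include <stdio.h>
#include <GL/glew.h>   /* assumes a GL 2.0 context is already current */

/* A trivial, made-up fragment shader: paint every pixel red. */
static const char *frag_src =
    "void main(void) {\n"
    "    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);\n"
    "}\n";

GLuint build_shader(void)
{
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    GLint ok;

    /* Hand the GLSL source text to the driver... */
    glShaderSource(shader, 1, &frag_src, NULL);

    /* ...which compiles it down to the GPU's native instructions. */
    glCompileShader(shader);

    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[512];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        fprintf(stderr, "shader compile failed: %s\n", log);
    }
    return shader;
}

The driver's GLSL compiler turns that string into whatever the card's native instruction set happens to be; attach it to a program object with glAttachShader/glLinkProgram and the GPU runs it for every fragment.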
Same thing with accelerated 3D graphics. You're not dealing with 'OpenGL expressed in hardware' or anything like that. The OpenGL stacks are now just very optimized software, compiled to run on both the GPU and the CPU, with the GPU geared towards specific types of workloads and the CPU geared towards generic ones.
There is a Qt hacker, Zack Rusin, who has a little side project where he is working on taking LLVM (a compiler/VM infrastructure suite) and Mesa and making it so that you can compile software to run directly on the GPU. You could do shading languages in Python, Ruby, C or C++ if you want. Apple already uses LLVM in the OS X OpenGL stack to some extent, but only for the software path, not the hardware-accelerated one.
The way things are going, the GPU is going to be used for more and more generic stuff. It's going to be another CPU core you can take advantage of to run your software. Call it CPU-GPU integration, GPGPU, Fusion, or whatever.
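If you want to see what 'the GPU as just another core' looks like in practice, here is a rough sketch using OpenCL as a stand-in (the kernel, array size, and scale factor are all made up for illustration, and it assumes an OpenCL runtime with a GPU device is installed). Work that would normally run on the CPU gets compiled at run time and handed to the GPU:

#include <stdio.h>
#include <CL/cl.h>

#define N 1024

/* Made-up kernel: scale every element of an array in place on the GPU. */
static const char *src =
    "__kernel void scale(__global float *v, const float k) {\n"
    "    size_t i = get_global_id(0);\n"
    "    v[i] *= k;\n"
    "}\n";

int main(void)
{
    float data[N];
    size_t i, global = N;
    float factor = 2.0f;
    cl_int err;

    for (i = 0; i < N; i++)
        data[i] = (float)i;

    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

    /* Compile the kernel source for the GPU at run time. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", &err);

    /* Copy the array over, run the kernel across it, copy it back. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, &err);
    clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(k, 1, sizeof(float), &factor);
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    printf("data[10] = %f (expect 20.0)\n", data[10]);

    clReleaseMemObject(buf);
    clReleaseKernel(k);
    clReleaseProgram(prog);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    return 0;
}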
In a couple years you're going to see 16-way and 32-way processors. Right now Intel has shown 80-core processors for demoing/testing purposes. And CPU manufacturers are always working on CPU designs two generations out.
Having 2-way CPUs is very good for the desktop. Having 4-way would be nice also. But I figure once you get up to 16-way or 32-way CPUs there is virtually no benefit for desktop performance over 4-way CPUs.
So in order to get customers wanting to buy newer processors, you're probably going to see different types of CPU cores that are tuned for specific workloads. The most obvious optimized core would be very GPU-like, for processing graphics and media encoding/decoding, which are currently the most CPU-intensive things people do.
Hence you have Larrabee.
http://arstechnica.com/news.ar...ans-with-larrabee.html
It's Intel's stab at the discrete video card market. Their attempt to go head to head with ATI/AMD and Nvidia.
Their stuff is supposed to be very 'x86-like', where you're effectively going to have 16-way processors specifically geared for graphical workloads in those video cards.
So this _could_ mean that Linux users working on an Intel-based workstation with Intel graphics would have a substantial performance advantage over running ATI or Nvidia graphics. Not so much for pure 3D gaming performance, but for doing all sorts of stuff like media encoding/decoding, rendering, and such. You'd compile software optimized to run on BOTH the CPU and the GPU. Aside from the normal threading issues that come with multiple-core processors, the compiler and the software should be able to use either the GPUs or the CPUs based on whichever can accelerate the software the best, all of it automatically and without a huge amount of effort on the part of the programmer.
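Here is a tiny sketch of that idea, again with OpenCL as a purely illustrative stand-in: the runtime exposes the CPU and the GPU as peer compute devices, so the software (or a compiler/runtime on its behalf) can hand a given piece of work to whichever one suits it best:

#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id plat;
    cl_device_id devs[8];
    cl_uint ndev, i;

    clGetPlatformIDs(1, &plat, NULL);

    /* Ask for every compute device the runtime knows about, CPU and GPU alike. */
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_ALL, 8, devs, &ndev);

    for (i = 0; i < ndev; i++) {
        char name[256];
        cl_device_type type;

        clGetDeviceInfo(devs[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        clGetDeviceInfo(devs[i], CL_DEVICE_TYPE, sizeof(type), &type, NULL);

        /* A dispatch decision would go here: GPU-like cores for data-parallel
           work, generic cores for everything else. */
        printf("device %u: %s (%s)\n", i, name,
               (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "CPU");
    }
    return 0;
}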
This is very similar to IBM's Cell, which Linux is currently the only OS able to support properly. It could be that Intel wants to support Linux well because when these new architectures start coming out, Linux will be the only system able to support them effectively at first, to the fullest advantage. Similarly, it took Microsoft years to produce a workstation OS that ran well on AMD64, while Linux supported that CPU from day one.
If all of my speculation is true then being open has very significant advantages over being closed.
Could you imagine having a processor where the maker absolutely refuses to tell you how to program for it? That is the current situation with Nvidia and ATI...