I'm no engineer, so I have no real idea just how difficult this process may be. As many have suggested, it seems unlikely that the transition will be instantaneous. Engineering revolutions happen; they just don't become profitable and make it to market overnight. I can't say how much preparation any of the companies involved have done in this area, so I can't guess how soon we'll see such solutions.
If pressed for a guess, however, I'd say it will be a long time before we see the technology in the mainstream market. I can easily envision very low-end integrated solutions for applications that require little memory bandwidth (designed for the corporate and mobile, non-gaming markets). I can also envision very expensive solutions, with motherboards that have integrated video memory and a socket for the GPU. We're already buying $250 motherboards and $600 GPUs; for some folks, the added cost of a really complex motherboard with integrated memory isn't going to be an issue.
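For a rough sense of why shared system memory is fine at the low end but not for gaming, here's a quick back-of-envelope sketch. The specific parts and clock figures are illustrative assumptions on my part, nothing official:

```python
# Peak memory bandwidth = effective transfer rate x bus width.
# All parts and figures below are illustrative assumptions.

def peak_bandwidth_gb_s(effective_mt_s: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s from transfer rate (MT/s) and bus width (bits)."""
    return effective_mt_s * 1e6 * (bus_width_bits / 8) / 1e9

# High-end discrete card (8800 GTX-class): 384-bit bus, GDDR3 at 1800 MT/s.
gpu = peak_bandwidth_gb_s(1800, 384)     # ~86.4 GB/s

# Dual-channel DDR2-800 system memory: 128-bit combined bus at 800 MT/s.
system = peak_bandwidth_gb_s(800, 128)   # ~12.8 GB/s

print(f"Dedicated GPU memory: {gpu:.1f} GB/s")
print(f"Shared system memory: {system:.1f} GB/s")
print(f"Gap: roughly {gpu / system:.1f}x")
```

A gap of roughly 7x is why an integrated part sharing system memory can serve office and mobile workloads but not high-end gaming, and why any high-end socketed GPU would need its own memory on the motherboard.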
The real problem I can see at the high end is the added cost of switching between CPU vendors that a GPU/CPU socket combination directly on the motherboard would impose. Short term, I think this is probably cost-prohibitive for many, and it isn't ideal for consumers to be locked into a certain GPU/CPU combination. Long term, with the switch to multi-core, we'll probably enter an entirely new market of integrated CPU/GPU chips: when new chips come out, we'll be reading both CPU and GPU benchmarks for the same chip. I think that's pretty far in the future, though, at least five years out, but I'm no engineer.
In the meantime, it's hard to imagine many scenarios on the high end that make good sense. Not only would a GPU socket lock the customer into both a particular GPU vendor and a particular CPU vendor, but committing to a particular socket interface imposes added design limitations. Both nVidia and ATI (AMD) would have to change how they do their GPU development quite drastically, it seems to me. I think they'll both do it at some point, but I'm not sure how soon it will actually happen.
To sum up: to me, low-end solutions seem likely in the near term, with a slight additional possibility of some very high-end options at some point.
On some level, just off the cuff, I wish Intel would adopt HyperTransport and the co-processor architecture AMD is coming up with. The reason I say this is that, long term, I don't want to see nVidia pushed out, or the possibility of GPU-only vendors pushed out. I'd like the industry to keep the option of a drop-in, third-party co-processor. The best thing for the consumer would be for both major CPU vendors to adopt the same standard for co-processors, just as we have used the AGP and PCIe interfaces on the motherboard. Again, as I'm not an engineer, I don't even remotely know if this is possible. I certainly doubt it is likely.
Once the memory situation is resolved, however, I don't want to see the industry limited to just what Intel and AMD can offer in the way of chips that perform both CPU and GPU functions. I suppose that isn't much different from the situation we had for several years, with nVidia and ATI as the only real GPU options and Intel and AMD as the only real CPU options.
I'm a fan of open industry standards. The great thing about AGP or PCIe, or any standardized interface like IDE, SATA, PCI, etc., is that it provides a way for the consumer to access the best work, regardless of vendor. As we move to integrated GPU solutions, I want to see as much consumer flexibility preserved as possible.