i was thinking it wasn't very parallel since it had low performance and only 64 cores, but i don't know for sure, which is why i was asking. i was guessing the GK110's CUDA architecture is actually more parallel and has some useful features (particularly bandwidth-saving and efficiency techniques, like being able to read from the hard drive) vs. larrabee, although i'm sure the former isn't radically parallel either.
asking because everyone here knows i'm a fan of programmable color/depth (or programmable whatever), and that it could be as fast as anyone wanted it to be, or look as good as anyone wanted it to, rather than being a compromise that no one is as satisfied with as they would be otherwise (fixed-function hardware is only as scalable as its design allows). the only things i wouldn't discard immediately are the features in current display logic, and maybe something that lets multiple dies play together without the known issues (nvidia has hardware that reduces microstutter with SLI, for example). of course, software solutions don't have to replace specialized hardware, but we likely only have two discrete GPU makers because of IP, so there isn't much need for them to take any risks. one of them does try new things sometimes, but then charges above-market prices for them, keeps its drivers mostly closed source, and still isn't really happy with the money it makes anyway... this stuff could be built out of a few garages, and it would be a lot cheaper and better.
anyway, i remember how sufficient doing most things in software was back in the day (256-color VGA was hardware, but it mostly just scanned out the pixels and did a few other minor things; the sound hardware did a lot too, but systems back then didn't use multiple CPUs), so i don't see how adding another general-purpose die or two with an ISA optimized for graphics wouldn't be just as good, especially if there were no IP. i'm just not convinced that specialized hardware can be objectively measured as better than software... i don't even know that we'd have had 3d accelerators if intel had had competition, or if it had kept up FPU performance and instruction sets appropriate for each application, or simply offered chipsets with multiple general-purpose die sockets (slots at the time the first 3d accelerators were released, if i'm not wrong).