Windows 7 to support graphics processing by CPU

Comdrpopnfresh

Golden Member
Jul 25, 2006
1,202
2
81
http://www.theinquirer.net/gb/...ft-graphics-cpu-warped

Supposedly... a Penryn quad gets better frame rates in Crysis using WARP10 than an Intel IGP does. Though I didn't see a clock speed on the quad, nor a model of Intel IGP, in the Inq article. Plus that's far from playable... but still impressive. It seems to be some efficient code, in terms of getting performance from the CPU. I'd like to know what share of CPU resources and utilization is the price to be paid, though. Wonder how this'd run on a Nehalem or AMD chip, with faster access to memory than a Core 2...
 

palladium

Senior member
Dec 24, 2007
539
2
81
Originally posted by: Comdrpopnfresh
Wonder how this'd run on a Nehalem or AMD chip, with faster access to memory than a Core 2...

I'm sure any Nehalem system (or an AMD system, for that matter) would be equipped with a better GPU than an Intel IGP.

With Apple's OS X Snow Leopard purported to provide both GPU acceleration of normally CPU-bound tasks and an interface that properly utilises the GPU's capabilities, Microsoft's approach seems like a step backwards compared to the current trend in the industry.

+1
 

Denithor

Diamond Member
Apr 11, 2004
6,298
23
81
It must be nice to have powerful friends when you only have a piffling 80 per cent market share.

Ah, yes, back to the good ol' WIntel days...
 

Comdrpopnfresh

Golden Member
Jul 25, 2006
1,202
2
81
I wondered about running WARP10 on Nehalem or an AMD chip because the performance comparison was between an entry-level IGP and a Penryn quad-core CPU. I'd suspect that having the memory controller on-die would allow faster memory access for the graphics instructions, which would be important for textures and whatnot.

I don't see why this is being viewed as an either/or type of development. Granted, I did not read the original document, but why not put some CPU horsepower towards graphics? Everyone complains about how multi-threaded games aren't gushing out from developers and how all those cores are just sitting there underutilized. Why not give a bit of CPU time towards boosting graphics further (something like the toy sketch below)? Using the CPU to lend a helping hand to the GPU could be even more useful if future chips are CPU-GPU dies, because the instructions, cache, and memory addresses are all housed on the same chip.
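Just to illustrate what I mean (my own toy sketch, nothing to do with WARP10's actual internals, which aren't public): a software renderer can carve the framebuffer into horizontal bands and hand one to each idle core:

// Toy sketch: spread per-pixel shading work across however many
// CPU cores are available. Illustrative only.
#include <cstdint>
#include <thread>
#include <vector>

const int WIDTH = 640, HEIGHT = 480;

// Shade one horizontal band of the framebuffer, rows [rowBegin, rowEnd).
void shadeRows(std::vector<uint32_t>& fb, int rowBegin, int rowEnd)
{
    for (int y = rowBegin; y < rowEnd; ++y)
        for (int x = 0; x < WIDTH; ++x)
            fb[y * WIDTH + x] = (x ^ y) | 0xFF000000; // placeholder "shader"
}

int main()
{
    std::vector<uint32_t> framebuffer(WIDTH * HEIGHT);

    // One band per hardware thread -- idle cores pick up rendering work.
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 4;
    int band = HEIGHT / static_cast<int>(cores);

    std::vector<std::thread> workers;
    for (unsigned i = 0; i < cores; ++i) {
        int begin = static_cast<int>(i) * band;
        int end   = (i + 1 == cores) ? HEIGHT : begin + band;
        workers.emplace_back(shadeRows, std::ref(framebuffer), begin, end);
    }
    for (auto& t : workers) t.join();
    return 0;
}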

How can the coming Apple OS use GPU processing for OS-managed CPU tasks without choosing between the competing formats, like CUDA, CTM, and that modified C language (OpenCL, I think)? I'm not very familiar with Apple/Macs, but isn't only a narrow selection of graphics cards compatible? If so, an OS developer with a small market share can write their code to maximize hardware utilization much more easily than an OS which is responsible for broad compatibility.

You can't be vamp or goth- you gotta choose one!
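Then again, if that modified C is OpenCL, the whole pitch (as I understand it) is that you don't choose a vendor: the same kernel source gets compiled at runtime for whatever device the platform reports, GPU or CPU. Rough sketch, not Apple's actual code, with error handling trimmed:

// The same kernel source runs on NVIDIA, AMD/ATI, or the CPU --
// the runtime compiles it for whatever device is present.
#include <CL/cl.h>   // <OpenCL/opencl.h> on the Mac
#include <cstdio>

const char* src =
    "__kernel void scale(__global float* data, float k) {\n"
    "    data[get_global_id(0)] *= k;\n"
    "}\n";

int main()
{
    cl_platform_id platform; cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    // CL_DEVICE_TYPE_DEFAULT: take whatever is there, GPU or CPU.
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

    // Compile the vendor-neutral kernel source for this device.
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kern = clCreateKernel(prog, "scale", NULL);

    float data[4] = { 1, 2, 3, 4 };
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, NULL);
    float factor = 2.0f;
    clSetKernelArg(kern, 0, sizeof(buf), &buf);
    clSetKernelArg(kern, 1, sizeof(factor), &factor);

    size_t global = 4;
    clEnqueueNDRangeKernel(q, kern, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);
    printf("%g %g %g %g\n", data[0], data[1], data[2], data[3]);
    return 0;
}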

I completely disregarded the Apple-vs-Microsoft slant the Inq put on it; that's nearly expected of them. The documents they reported on gave data about a Microsoft development, so who cares how it stacks up against 'purported' Apple developments?
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,202
126
I remember the graphics core on the original version of Asheron's Call was purely software, and on a Pentium 100 it rivaled competing hardware accelerators of the time. Then again, so does UT using the software renderer on a PIII 1 GHz, compared to an S3 ViRGE.
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
Originally posted by: Comdrpopnfresh
http://www.theinquirer.net/gb/...ft-graphics-cpu-warped

Supposedly... a Penryn quad gets better frame rates in Crysis using WARP10 than an Intel IGP does. Though I didn't see a clock speed on the quad, nor a model of Intel IGP, in the Inq article. Plus that's far from playable... but still impressive. It seems to be some efficient code, in terms of getting performance from the CPU. I'd like to know what share of CPU resources and utilization is the price to be paid, though. Wonder how this'd run on a Nehalem or AMD chip, with faster access to memory than a Core 2...

The whole point of this is not to play games without a GPU, or to do on the CPU what a GPU normally would. The benefit is that game developers and video card driver programmers get a known-good reference to determine whether their code is rendering properly. Instead of building a game engine and having to test on multiple systems with different graphics hardware, they can simply fire it up in software only and verify that it renders correctly (see the sketch below).
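For what it's worth, opting into the software rasterizer looks to be a one-line change at device creation. A rough Direct3D 11 sketch (D3D11 is what ships with Windows 7; error handling trimmed, and the --warp flag is my own invention, not any MS tooling):

// Create the same device the game would use, but on the reference
// software rasterizer (WARP) when asked, so its output can be
// compared against the hardware driver's.
#include <d3d11.h>
#include <cstring>
#pragma comment(lib, "d3d11.lib")

HRESULT CreateRenderDevice(bool useWarp,
                           ID3D11Device** dev, ID3D11DeviceContext** ctx)
{
    D3D_DRIVER_TYPE type = useWarp ? D3D_DRIVER_TYPE_WARP      // CPU rasterizer
                                   : D3D_DRIVER_TYPE_HARDWARE; // GPU driver
    return D3D11CreateDevice(NULL, type, NULL, 0,
                             NULL, 0,            // default feature levels
                             D3D11_SDK_VERSION, dev, NULL, ctx);
}

int main(int argc, char** argv)
{
    bool warp = (argc > 1 && strcmp(argv[1], "--warp") == 0);
    ID3D11Device* dev = NULL;
    ID3D11DeviceContext* ctx = NULL;

    if (SUCCEEDED(CreateRenderDevice(warp, &dev, &ctx)))
    {
        // From here on, the API is identical either way: render the
        // scene, read the frame back, and diff it against the other
        // path to flush out engine or driver bugs.
        ctx->Release();
        dev->Release();
    }
    return 0;
}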