When Intel has 16 core Ivy Bridge...

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
...wouldn't it be wiser to replace the iGPU (apart from removing the TMDS) with TA and TF units plus level 2 cache, and just develop a renderer that uses the texture units for textures and the CPU cores for shaders and for emulating the backbuffer?

It could use, say, 4 cores for traditional CPU functions and the other 12 for blending, shading, and depth when running a game.

Wouldn't that be a good idea? I've heard that programmable backbuffers don't always perform worse than HW back buffers.

I think it would get them a lot farther than the iGPU.
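To make the backbuffer idea concrete, here's a rough sketch (all names made up, nothing Intel-specific): the render target is just an array in system memory, and a pool of worker processes shades blocks of scanlines.

```python
# Rough sketch of a "software backbuffer": the render target is just an array
# in system memory, and each worker shades one scanline. All names here
# (shade_row, WIDTH, HEIGHT) are made up for the example.
from multiprocessing import Pool

WIDTH, HEIGHT = 640, 480

def shade_row(y):
    """Toy 'pixel shader' for one scanline; returns packed 0xRRGGBB values."""
    row = []
    for x in range(WIDTH):
        r = (x * 255) // WIDTH      # stand-in for real shading math
        g = (y * 255) // HEIGHT
        b = 128
        row.append((r << 16) | (g << 8) | b)
    return row

if __name__ == "__main__":
    # 12 workers standing in for the 12 cores doing blending/shading/depth.
    with Pool(processes=12) as pool:
        backbuffer = pool.map(shade_row, range(HEIGHT))
    print(len(backbuffer), "scanlines shaded in software")
```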

Dman8777

Senior member
Mar 28, 2011
426
8
81
There's nothing like armchair-engineering in computer-geek forums when you're looking for a good laugh.

destrekor

Lifer
Nov 18, 2005
28,799
359
126
...wouldn't it be wiser to replace the iGPU (apart from removing the TMDS) with TA and TF units plus level 2 cache, and just develop a renderer that uses the texture units for textures and the CPU cores for shaders and for emulating the backbuffer?

It could use, say, 4 cores for traditional CPU functions and the other 12 for blending, shading, and depth when running a game.

Wouldn't that be a good idea? I've heard that programmable backbuffers don't always perform worse than HW back buffers.

I think it would get them a lot farther than the iGPU.

CPU cores, even multiple CPU cores, are not very friendly to massively parallel computing. GPUs use a lot of parallel paths to stream and render.

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
CPU cores, even multiple CPU cores, are not very friendly to massively parallel computing. GPUs use a lot of parallel paths to stream and render.

16 Ivy Bridge cores would probably outperform quite a few GPUs if devoted to rendering. With AVX they can do 8 simultaneous 32-bit ops per core, so you have the equivalent of "128 shader cores" if you count shader cores the way AMD does. Then you're running at roughly 3 GHz while AMD's GPUs run at roughly 500 MHz, so you have about 6x the clock speed. That would probably land a bit under a Radeon HD 5750, at least going by GFLOPs, and that's a roughly $100 card. Still faster than any Fusion GPU, if you dedicate the entirety of a 16-core Ivy Bridge to rendering.
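Back-of-envelope, with my assumptions spelled out (3 GHz, 8-wide FP32 AVX, and either 1 or 2 vector ops retired per core per cycle), the peak numbers land roughly where you'd expect:

```python
# Peak FP32 throughput estimate for a hypothetical 16-core Ivy Bridge, versus
# the HD 5750's quoted peak (720 SPs x 700 MHz x 2 ops = ~1008 GFLOPS).
cores = 16
clock_ghz = 3.0
avx_lanes = 8                        # 32-bit lanes per AVX operation

for vec_ops_per_cycle in (1, 2):     # 2 if an add and a multiply issue together
    gflops = cores * clock_ghz * avx_lanes * vec_ops_per_cycle
    print(f"{vec_ops_per_cycle} vector op(s)/cycle: ~{gflops:.0f} GFLOPS peak")

print("Radeon HD 5750: ~1008 GFLOPS peak")
```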


I'm not sure if it's still the case, but earlier Intel GPUs offloaded a large percentage of the rendering work to the CPU (all of the vertex shading). This was OK on a CPU with good floating point performance (i.e. a Pentium 4), but it was even more horrid on a CPU without good floating point performance (i.e. a Pentium M). Of course, even Atom has reasonably good floating point performance these days (really good, considering how poor its integer performance is; it's basically an inverted Pentium M).
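For a sense of what "the CPU doing all the vertex shading" means, the core of it is just a 4x4 matrix transform per vertex per frame, i.e. pure floating point work. Toy sketch only, nothing to do with Intel's actual driver:

```python
# Toy example of software vertex processing: one 4x4 matrix-vector transform
# per vertex per frame. Pure floating-point work, which is why it was
# tolerable on a Pentium 4 and painful on a Pentium M.
import numpy as np

num_vertices = 100_000
vertices = np.ones((num_vertices, 4), dtype=np.float32)    # x, y, z, w = 1
vertices[:, :3] = np.random.rand(num_vertices, 3)

mvp = np.eye(4, dtype=np.float32)                          # made-up MVP matrix
mvp[0, 3], mvp[1, 3], mvp[2, 3] = 1.0, 2.0, -5.0           # simple translation

transformed = vertices @ mvp.T                             # transform them all
print(transformed.shape)                                   # (100000, 4)
```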

BD231

Lifer
Feb 26, 2001
10,568
138
106
...wouldn't it be wiser to replace the iGPU (apart from removing the TMDS) with TA and TF units plus level 2 cache, and just develop a renderer that uses the texture units for textures and the CPU cores for shaders and for emulating the backbuffer?

It could use, say, 4 cores for traditional CPU functions and the other 12 for blending, shading, and depth when running a game.

Wouldn't that be a good idea? I've heard that programmable backbuffers don't always perform worse than HW back buffers.

I think it would get them a lot farther than the iGPU.

Task-limited cores end up going to waste in *WAY* too many real-world applications; Intel would never do that to themselves.

greenhawk

Platinum Member
Feb 23, 2011
2,007
1
71
The only 16-"core" Ivy Bridge will be the IB-E version, and that's only if you count Hyper-Threading for half of those cores.

As for the iGPU, the "E" versions, as seen with SB-E, don't have an iGPU at all.

As for adding in a renderer, that's unlikely for a few reasons. Dedicated, specialized hardware like that has no place in current hardware, and the iGPU in SB is poorly utilized even now, coming up on a year out of the gate (and that's ignoring how long software developers had before SB's release).

For "allocating" hardware cores to tasks, at a level needed to be useful, all OS's will need to be re-written to support odd cores (ie: asymetric cores). Not looking to happen in the next 3 years when, IIRC, Ivy Bridge will have been phased out.

I think it would get them a lot farther than the iGPU.

Given a market that wants an iGPU and one that wants a strong rendering CPU, I think Intel knows which is bigger and worth catering to. Besides, the MMX and SSE instructions were designed to address the CPU's lack of parallel processing in the first place. Having software developers code for a GPU gives far better returns (effort vs. cost/time).

BFG10K

Lifer
Aug 14, 2000
22,709
3,004
126
Integrated solutions will always be crippled by system RAM, no matter how good their core is.
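Rough numbers behind that, using nominal peaks and my own assumptions (dual-channel DDR3-1600 shared with the CPU, versus the 128-bit GDDR5 on a ~$100 card like the HD 5750):

```python
# Nominal peak bandwidth comparison behind the "crippled by system RAM" point.
ddr3_mt_per_s = 1600                  # dual-channel DDR3-1600
channels = 2
channel_bytes = 8                     # 64-bit channel
system_bw = ddr3_mt_per_s * channels * channel_bytes / 1000      # GB/s

gddr5_gt_per_s = 4.6                  # effective data rate on an HD 5750
bus_bytes = 16                        # 128-bit bus
card_bw = gddr5_gt_per_s * bus_bytes                             # GB/s

print(f"dual-channel DDR3-1600: ~{system_bw:.1f} GB/s, shared with the CPU")
print(f"HD 5750 GDDR5:          ~{card_bw:.1f} GB/s, dedicated")
```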