Converting a CPU to a GPU

Intelman07

Senior member
Jul 18, 2002
969
0
0
Would it be possible to convert a CPU chip into a GPU? Then you would have a core clock of 800MHz or more. I know there is more to it than just soldering, but would it be possible with the proper equipment and all?
 

jmitchell

Senior member
Oct 10, 2001
212
0
0
<sarcasm>would it be possible to convert a blender into a helicopter with a rocket launcher?</sarcasm>

I think the thought is good: why not harness the power of an already-designed, very high-speed chip and use it solely for graphics processing? However, the functions required for graphics processing could only be implemented (mostly) in software, which could never be as efficient as a hardware implementation, even on a 2.5GHz P4. The end result would not be worth all of the time and money spent overcoming the technical hurdles, and to get maximum performance you would ultimately have to redesign the chip, making it a real GPU anyway... so, no.
 

sash1

Diamond Member
Jul 20, 2001
8,896
1
0
If it were that simple, it would have already been done. A GPU and a CPU are very different from each other. Turning a CPU's processing power into a GPU would take a lot of money, and a lot more besides.

~Aunix
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
CPUs have faster clocks... but consider the fact that modern GPUs do operations on multiple pixels at the same time. Current general-purpose CPUs aren't as fast at those tasks. You could probably build a 4-5GHz x86 chip right now, but because of the sacrifices you'd have to make (very little done per clock, etc.) it would probably perform pretty poorly.
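
To make that concrete, here's a minimal C++ sketch (hypothetical code, not from any real renderer) of per-pixel work on a CPU. Each trip through the loop handles one pixel; a GPU with, say, 8 pixel pipelines does the equivalent of 8 of these iterations per clock.

#include <cstdint>
#include <cstddef>

// Approximate 50% alpha blend: halve each 8-bit channel of dst and src
// (masking off bits shifted in from neighboring channels), then add.
// The CPU retires only one pixel per trip through the loop.
void blend50_scalar(uint32_t* dst, const uint32_t* src, size_t n) {
    for (size_t i = 0; i < n; ++i) {
        dst[i] = ((dst[i] >> 1) & 0x7F7F7F7F) + ((src[i] >> 1) & 0x7F7F7F7F);
    }
}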
 

AdvancedRobotics

Senior member
Jul 30, 2002
324
0
0
One very different aspect between a CPU and a GPU (that surprisingly has not been mentioned) is that the CPU is a general-purpose component, whereas your graphics processor is dedicated hardware. Notice that if you try to run games on CPU power alone, they look awful. Reason: your processor wasn't built to do those complex calculations at high speed the way your GPU is. The architecture of the two components differs greatly. Calculating vertices, lighting, filtering, shading, etc. is very stressful for a CPU, and image quality in software rendering is basically disregarded, whereas your GPU can do these calculations much faster.
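
For a sense of what "calculating lighting" means, here's an illustrative C++ snippet (assumed names, a textbook diffuse term, not any particular engine's code). A software renderer repeats this, and much more, for every vertex or pixel, every frame, on the CPU.

#include <algorithm>

struct Vec3 { float x, y, z; };

// One diffuse (Lambertian) lighting term: N dot L, clamped at zero so
// surfaces facing away from the light contribute nothing.
float diffuse(const Vec3& n, const Vec3& l) {
    float d = n.x * l.x + n.y * l.y + n.z * l.z;
    return std::max(0.0f, d);
}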
 

draggoon01

Senior member
May 9, 2001
858
0
0
Why doesn't Intel try implementing some GPU features into their CPUs, so that the money that goes into graphics/gaming upgrades goes to them instead of to the video card makers?
 

TerryMathews

Lifer
Oct 9, 1999
11,464
2
0
Because:
A) Intel's graphics chip designs are usually pretty bad. AFAIK, their current integrated graphics solution is based on the old i740, which was not top-of-the-line even when it was released back in the heyday of the Voodoo2.

B) Considering Intel's past achievements at CPU design, <sarcasm> a combination GPU/P4 would prolly have a die the size of an Eggo waffle and run at 4.6GHz, yet still be slower than a 1.4GHz P3.</sarcasm>
 

Shalmanese

Platinum Member
Sep 29, 2000
2,157
0
0
A 1GHz Athlon lands somewhere between a Voodoo1 and a Voodoo2 in Quake1/2 performance.

Extrapolating, a current top-of-the-line 2.53GHz P4 would be somewhere around a TNT2-M64.
 

aswedc

Diamond Member
Oct 25, 2000
3,543
0
76
Originally posted by: TerryMathews
Because:
A) Intel's graphics chip designs are usually pretty bad. AFAIK, their current integrated graphics solution is based on the old i740, which was not top-of-the-line even when it was released back in the heyday of the Voodoo2.

B) Considering Intel's past achievements at CPU design, <sarcasm> a combination GPU/P4 would prolly have a die the size of an Eggo waffle and run at 4.6GHz, yet still be slower than a 1.4GHz P3.</sarcasm>

Nope, you're thinking of the 810/815 series; the 845G for the P4 is new and is about the same speed as a GeForce2 MX. Intel is going in the direction of integrating a graphics core very closely with other core system components; see their recent licensing of PowerVR's embedded MBX.
 

zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81
It could be done if designers wanted to do so, but there would be absolutely no point. The whole reason GPUs were made in the first place is that CPUs weren't good enough. A GPU has the mathematical formulae and algorithmic stages of rendering hardwired into it. A CPU, on the other hand, has general-purpose binary logic, arithmetic, and control flow hardwired into it.
 
KnightBreed

Jun 18, 2000
11,191
765
126
Originally posted by: Asuka
Image quality in software rendering is basically disregarded, whereas your GPU can do these calculations much faster.
This comment strikes me as odd. Why would software rendering disregard image quality? The CGI you see in movies and TV shows is done in software on server farms of hundreds or thousands of CPUs in parallel. You couldn't get that level of realism and quality on a GPU, even with the R300 and NV30 - you could get close, but not equal.

A CPU can do everything a GPU can, albeit slowly, but modern GPUs can't do everything a CPU can.
 

Ben50

Senior member
Apr 29, 2001
421
0
0
The main difference between a CPU and a GPU is that the GPU has many more parallel execution units. The same instructions are performed over and over again on each pixel, so it is much more efficient to do many pixels at once at a lower overall clock speed. Regular program code is not nearly as uniform, so a CPU is designed to get through each instruction more quickly, because it can't execute a large number of them at once.
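
The closest thing a CPU has to this is its own small SIMD unit. A hedged C++ sketch with SSE intrinsics (illustrative only): one instruction here operates on four pixels' worth of floats at once, and a GPU takes the same principle much further, with many such units running in parallel at a lower clock.

#include <xmmintrin.h>  // SSE intrinsics

// Brighten four float pixel values with a single add instruction.
void brighten4(float* pixels, const float* amount) {
    __m128 p = _mm_loadu_ps(pixels);         // load 4 floats
    __m128 a = _mm_loadu_ps(amount);         // load 4 brightness deltas
    _mm_storeu_ps(pixels, _mm_add_ps(p, a)); // 4 additions at once
}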
 

bizmark

Banned
Feb 4, 2002
2,311
0
0
Originally posted by: KnightBreed
Originally posted by: Asuka
Image quality in software rendering is basically disregarded, whereas your GPU can do these calculations much faster.
This comment strikes me as odd. Why would software rendering disregard image quality? The CGI you see in movies and TV shows is done in software on server farms of hundreds or thousands of CPUs in parallel. You couldn't get that level of realism and quality on a GPU, even with the R300 and NV30 - you could get close, but not equal.

A CPU can do everything a GPU can, albeit slowly, but modern GPUs can't do everything a CPU can.

Yes, but here we're talking about real-time rendering for gaming and such. Give a CPU as much time as it needs to render a 3D scene, and it'll be as complex and as beautiful as you can imagine. But make a CPU, even a fast one, render 3D in real-time, and it'll look like sh|t because it can only get so much done per unit time.
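
A quick back-of-the-envelope program makes the "only so much per unit time" point (assumed numbers, not a benchmark):

#include <cstdio>

int main() {
    const double clock_hz = 2.53e9;                      // top-end P4
    const double pixels_per_sec = 640.0 * 480.0 * 60.0;  // 640x480 at 60fps
    std::printf("%.0f cycles per pixel\n", clock_hz / pixels_per_sec);
    // ~137 cycles per pixel -- for transform, lighting, texturing, the lot.
    return 0;
}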
 

draggoon01

Senior member
May 9, 2001
858
0
0
Originally posted by: zephyrprime
It could be done if designers wanted to do so, but there would be absolutely no point. The whole reason GPUs were made in the first place is that CPUs weren't good enough. A GPU has the mathematical formulae and algorithmic stages of rendering hardwired into it. A CPU, on the other hand, has general-purpose binary logic, arithmetic, and control flow hardwired into it.

But wouldn't Intel have reason to, since people would spend money on CPUs to upgrade gaming performance instead of on video cards? I mean, 3D seems like one of the biggest reasons for people to upgrade; Microsoft is even making their UI 3D (imagine the masses upgrading for a better-looking OS, just like many gamers do; MSFT gets a few quick upgrade cycles that way). Also, if things were done in software, doesn't that provide the best compatibility as well as the easiest scaling across CPUs/systems of different speeds?

After reading this article, it just seemed like something Intel would want to do: link
 

Ben50

Senior member
Apr 29, 2001
421
0
0
Intel makes chipsets with integrated graphics, and they command a large share of that market. They could make a standalone GPU if they wanted to, but so far they haven't. I'm guessing they don't want to compete in the very tough graphics market. Remember that Nvidia, ATi, and others are extremely competitive, so it would be a tough and expensive proposition for Intel to come in and take over.
 
KnightBreed

Jun 18, 2000
11,191
765
126
Originally posted by: bizmark
Originally posted by: KnightBreed
Originally posted by: Asuka
Image quality in software rendering is basically disregarded, whereas your GPU can do these calculations much faster.
This comment strikes me as odd. Why would software rendering disregard image quality? The CGI you see in movies and TV shows is done in software on server farms of hundreds or thousands of CPUs in parallel. You couldn't get that level of realism and quality on a GPU, even with the R300 and NV30 - you could get close, but not equal.

A CPU can do everything a GPU can, albeit slowly, but modern GPUs can't do everything a CPU can.

Yes, but here we're talking about real-time rendering for gaming and such. Give a CPU as much time as it needs to render a 3D scene, and it'll be as complex and as beautiful as you can imagine. But make a CPU, even a fast one, render 3D in real-time, and it'll look like sh|t because it can only get so much done per unit time.
Well, duh. :) My point was, Asuka's comment was a bit misleading in that it implied that software rendering had inherently flawed image quality compared to "hardware rendering."
 

TerryMathews

Lifer
Oct 9, 1999
11,464
2
0
Originally posted by: KnightBreed
Well, duh. :) My point was, Asuka's comment was a bit misleading in that it implied that software rendering had inherently flawed image quality compared to "hardware rendering."

Misleading: yes. Software rendering in real time has inherently flawed image quality compared to hardware rendering in real time, because no CPU can compete with the massive parallelization of modern GPUs. It would be like a 2.53GHz P4 going up against a 256-node Beowulf cluster of 1GHz P3s on something like RC5DES. The P4 is fast, but in this instance the P3s are much, much faster.

Of course, my analogy doesn't completely work, because the prices are nowhere near equivalent the way they are with CPU vs. GPU. Ho hum.
 

IcemanJer

Diamond Member
Mar 9, 2001
4,307
0
0
Isn't it also true that GPUs tend to do mostly floating-point calculations, versus CPUs, which do relatively more integer work?
 

kmmatney

Diamond Member
Jun 19, 2000
4,363
1
81
I thought the whole point of all the SSE instructions was to speed up 3D calculations. Certainly there are plenty of floating-point SSE2 instructions. Even with SSE2, though, you still only see maybe a doubling of speed. A CPU would have to be much faster than that to compete with a current GPU.
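
That "maybe a doubling" squares with a comparison like the sketch below (illustrative C++, hypothetical function names): the SSE version does four multiply-adds per instruction, so the theoretical win is 4x, but memory bandwidth and loop overhead usually eat much of it.

#include <xmmintrin.h>
#include <cstddef>

// y[i] += a * x[i], one float at a time.
void saxpy_scalar(float* y, const float* x, float a, size_t n) {
    for (size_t i = 0; i < n; ++i)
        y[i] += a * x[i];
}

// Same operation, four floats per iteration with SSE.
void saxpy_sse(float* y, const float* x, float a, size_t n) {
    __m128 va = _mm_set1_ps(a);                  // broadcast a into 4 lanes
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 vx = _mm_loadu_ps(x + i);
        __m128 vy = _mm_loadu_ps(y + i);
        vy = _mm_add_ps(vy, _mm_mul_ps(va, vx)); // 4 multiply-adds at once
        _mm_storeu_ps(y + i, vy);
    }
    for (; i < n; ++i)                           // scalar tail
        y[i] += a * x[i];
}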