How did the graphics accelerator become a success?

Anarchist420

Diamond Member
Feb 13, 2010
I once had a "state of the art" Voodoo2, and I know the Intel Pentium II 333 paired with it couldn't have produced the same IQ at the same performance on its own.

However, if I'm not mistaken, everything was once done in software, and PCs were still orders of magnitude ahead of consoles; at least one of the first 3D accelerators was even referred to as a "decelerator".

I mean, 256-color VGA graphics looked a lot better than a Super NES in every way, and the Voodoo2 was, as 3dfx would say later on, "so powerful it's really kind of ridiculous". That was true, because Forsaken on a Voodoo2 (8MB) would run at well over 100 fps at 640x480 and still look worlds better than the N64 version. Doom was just as revolutionary in 1993 as Quake III, which required a hardware accelerator, was in 1999.

I don't necessarily think the CPU makers dropped the ball, but how the hell did we come to need hardware acceleration? Was it that performance in excess of 100 fps was necessary, or that games would have been a slide show on even the highest-end CPUs, or something else?

Do you think Intel will ever get so far ahead on CPU development that we'll one day return to doing everything in software? In other words, could CPU development disproportionately outpace GPU development until GPUs die off? Intel definitely has a process advantage, so I'm not so sure it's impossible for the GPU to die. I believe some programmers also see no advantage to fixed-function hardware over generalized hardware. After all, there is no longer any hardware audio acceleration, and EAX 5.0-quality audio uses a negligible amount of CPU time if I'm not mistaken. It's just that there hasn't been much will for games to have revolutionary audio (lossless music, better effects) for quite some time, which is why we don't see it.

I know this is a silly thread, but I think it's not a bad discussion.
 

Throckmorton

Lifer
Aug 23, 2007
CPUs aren't nearly as good at rendering in real time as graphics cards. Necessity is the mother of invention.
 

fourdegrees11

Senior member
Mar 9, 2009
I think in the future it will all be a single chip, but we're still a long way out from dedicated GPUs going away. Intel, AMD, and Nvidia are all working in this direction.
 

Anarchist420

Diamond Member
Feb 13, 2010
Looking at Wikipedia's VGA article, it seems I was mistaken when I said VGA was a software standard (the VGA standard used its own memory, for example). Did VGA cards handle the colors all by themselves and just leave everything else to the CPU, or what? The article doesn't mention transparency/translucency, which some VGA games had, so I'm guessing it was a hybrid standard that was highly CPU dependent.
 

Blitzvogel

Platinum Member
Oct 17, 2010
Looking at Wikipedia's VGA article, it seems I was mistaken when I said VGA was a software standard (the VGA standard used its own memory, for example). Did VGA cards handle the colors all by themselves and just leave everything else to the CPU, or what? The article doesn't mention transparency/translucency, which some VGA games had, so I'm guessing it was a hybrid standard that was highly CPU dependent.

IIRC, until the introduction of 3D accelerators in the late 90s, graphics cards only handled 2D-related operations. 3D geometry, transform and lighting, etc. were handled by the CPU on what was by then an integrated FPU. The FPU itself is a good example of a piece of dedicated hardware eventually becoming integral to the CPU.
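To make "transform" concrete, here's a rough sketch (my own illustration, not real engine code) of the per-vertex matrix math the CPU's FPU had to grind through every frame before hardware T&L arrived:

```python
# One vertex transform: a 4x4 matrix applied to a homogeneous (x, y, z, w) point.
# A late-90s software renderer did this (plus lighting) for every vertex, every frame.
def transform_vertex(matrix, vertex):
    return tuple(
        sum(matrix[row][col] * vertex[col] for col in range(4))
        for row in range(4)
    )

# Example: a matrix that translates by (1, 2, 3), applied to the origin.
translate = [
    [1, 0, 0, 1],
    [0, 1, 0, 2],
    [0, 0, 1, 3],
    [0, 0, 0, 1],
]
print(transform_vertex(translate, (0.0, 0.0, 0.0, 1.0)))  # (1.0, 2.0, 3.0, 1.0)
```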
 

wlee15

Senior member
Jan 7, 2009
The early GPUs had a substantial amount of fixed-function hardware (current GPUs still have a lot), which allowed them to improve performance in an economical fashion. Also, the pixel and vertex shaders were built around the requirements of the Direct3D of their time. For example, DirectX 8.0 pixel shaders were 16-bit fixed point while vertex shaders were 32-bit floating point. That means the top-of-the-line P4 2.8 GHz of 2002, with its MMX/SSE/SSE2 units, could do a maximum of 11.2 billion pixel shader elements (4 16-bit MMX/SSE2 ops per clock) or 5.6 billion vertex shader elements (2 32-bit SSE/SSE2 ops per clock) per second. This contrasted with the 2001 Radeon 8500, which could do 8.8 billion pixel shader elements and 4.4 billion vertex shader elements. DX9 changed the pixel shader requirement to a minimum of 24-bit floating point, which means the P4 could then only do 5.6 billion pixel or vertex shader elements per second, while the 2002-era Radeon 9700 Pro could do 39.7 billion pixel shader and 19.8 billion vertex shader elements. GPUs continue to dwarf CPUs in execution resources, and there's no sign that will change in the near future.
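To spell out the back-of-the-envelope math above (just peak rates from clock speed times elements per clock, nothing more):

```python
# Peak "shader elements per second" as clock rate x elements per clock.
# This only redoes the rough arithmetic from the post; it's not a benchmark.
def peak_elements_per_sec(clock_hz, elements_per_clock):
    return clock_hz * elements_per_clock

p4_clock = 2.8e9  # 2002-era Pentium 4, 2.8 GHz

# DX8-style pixel work: 4 x 16-bit fixed-point ops per clock via MMX/SSE2
print(peak_elements_per_sec(p4_clock, 4) / 1e9)  # 11.2 (billion/sec)

# DX8 vertex work: 2 x 32-bit float ops per clock via SSE/SSE2
print(peak_elements_per_sec(p4_clock, 2) / 1e9)  # 5.6 (billion/sec)

# DX9 requires >= 24-bit float pixel shaders, so the CPU falls back to the
# 32-bit float rate for pixel work as well:
print(peak_elements_per_sec(p4_clock, 2) / 1e9)  # 5.6 (billion/sec)
```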
 

Concillian

Diamond Member
May 26, 2004
I remember running Quake at like 320x240 or something before I got my Voodoo card.

It was actually smoother on my Pentium Pro at 320x240 than it was on my Voodoo at 640x480, but the difference in quality between 320x240 and 640x480 was HUGE. Not just the resolution, but the textures and such were a huge improvement.

The biggest thing, though, was just having that additional processor to free up CPU cycles. Share the load and you can carry more weight.
 

Arkadrel

Diamond Member
Oct 19, 2010
I remember seeing my first game in 3D.
My brother had a much weaker PC than mine; however, he got a Voodoo 1 card as a present, and suddenly his games looked much better and ran smoother and faster than on my PC.

How did GPUs become a success? They beat the living sh*t out of 2D acceleration. Suddenly you had no more blocky, squared-off pixels, but smoothed-out sprites and textures that looked a lot better, ran a lot faster, and came in higher detail and resolution than was possible in 2D.




Back then it was a separate card that you hooked up to your 2D card.



[screenshot: 2D game in software, ~15 fps]

[screenshot: 3D game with a Voodoo card plugged in, ~30 fps and looking much better]




Quake, Unreal, Half-Life: those games changed the PC industry (along with a ton of other games).
 

TakeNoPrisoners

Platinum Member
Jun 3, 2011
CPUs just can't process the information required for gaming as well as dedicated hardware can. GPUs are great at doing a lot of repetitive work that doesn't require many decisions to be made.

CPUs can make many decisions but choke when it comes to rendering millions of similar polygons and pixels.

As for early games being slowed down by dedicated hardware at first, that was likely because the games weren't coded properly for the hardware. It has nothing to do with CPU makers not keeping up.
 

Arkadrel

Diamond Member
Oct 19, 2010
@Anarchist420

Even if discrete graphics cards die off (should that happen), there would still be a GPU on-die inside the CPU, dedicated to 3D rendering.

It's here to stay, because it's made specifically to do one thing, render 3D, which it does much, much better than the CPU, which is a general-purpose part.
 

Anarchist420

Diamond Member
Feb 13, 2010
I think those are good explanations. I think Intel's process advantage will probably eventually allow for a decent on-die GPU. However, the one in IB will certainly be worthless for gamers. Bandwidth is really the main issue with the performance of on-die GPUs, but that could be fixed by using eDRAM or something like that.

Also, aiming for a 77W TDP is only going to hold back the iGPU's performance. How much would the TDP increase per MB of eDRAM? If DX doesn't support tile-based rendering, I'm sure Intel can work around that. However, I'm no expert on this subject, so I'm sure they have good reasons for not trying to solve the bandwidth issue with IB's iGPU. It would be great if we had a third competitor to NV and AMD, but that probably won't happen.
 

Arkadrel

Diamond Member
Oct 19, 2010
@Anarchist420

Depends on the gamer and the game :)
Ivy Bridge is probably gonna be around Radeon 5550 level or so (slightly slower than the Llano IGP).

Which is fine for most games at ~1360x1024 (?) resolutions on laptops with low-to-medium settings.



PS: AMD APU plays Crysis 2 on a GIGABYTE A75M-UD2H mobo:
http://www.youtube.com/watch?v=lth_M25cXjE

3DMark Vantage score: ~6100+, on the IGP alone.
Crysis 2: averages over 40 fps during gameplay, on the IGP alone.

A slightly OC'ed Llano can reach discrete 5570 speeds, which isn't great, but isn't really bad either.
 

Concillian

Diamond Member
May 26, 2004
I think those are good explanations. I think Intel's process advantage will probably eventually allow for a decent on-die GPU. However, the one in IB will certainly be worthless for gamers. Bandwidth is really the main issue with the performance of on-die GPUs, but that could be fixed by using eDRAM or something like that.

This is why there will likely continue to be discrete graphics for quite some time.

Grandma wants to buy a computer; she doesn't want to pay for a hopped-up GPU if all she uses is a browser. By the same token, she doesn't even want to pay for a quad core.

You're also starting to see the industry shift toward producing "good enough" products like Atom, and you're going to see that continue. For this reason, I foresee several years of "good enough" GPUs on the CPU and "gamer quality" discrete cards. I also see prices of anything gaming-related (CPUs and GPUs both) going up as a result. It used to be that you could OC general-purpose stuff and get a decent gaming machine. Not so much anymore; you need to buy targeted gaming hardware, and the value of that kind of stuff has gone down lately, IMO.
 

exdeath

Lifer
Jan 29, 2004
ASICs will always be several orders of magnitude better than software for a given process generation.
 

Puppies04

Diamond Member
Apr 25, 2011
The problem with saying that everything will end up back on the CPU is that whatever technological advances let us cram more and more inside the CPU can also be applied to GPUs, which get a whole lot more die area and dedicated cooling.

The games industry would have to come to an advancement standstill for that to happen, which might occur at some point in the distant future, but at the moment every time a "new", faster onboard GPU comes out, the games it needs to run have gotten twice as complex. In a couple of years we might see a CPU that can run Skyrim maxed out without the need for a discrete GPU, but by that point the next Elder Scrolls will be out and will still require a graphics card.
 

Arkadrel

Diamond Member
Oct 19, 2010
I think the biggest issue is memory bandwidth.

Once IBM and Micron put out 128 GB/s memory sticks (more than 10x what we currently have), thanks to stacking chips in 3D like a skyscraper, we might just see PCs with 256 GB/s of memory bandwidth for their IGPs.

If that happens, the sky is the limit (or rather, TDP and cooling solutions are).
A year or two from now (with that new tech out), we could be looking at faster-than-5870 speeds for the IGPs in CPUs.
 

angry hampster

Diamond Member
Dec 15, 2007
I remember seeing my first game in 3D.
My brother had a much weaker PC than mine; however, he got a Voodoo 1 card as a present, and suddenly his games looked much better and ran smoother and faster than on my PC.

How did GPUs become a success? They beat the living sh*t out of 2D acceleration. Suddenly you had no more blocky, squared-off pixels, but smoothed-out sprites and textures that looked a lot better, ran a lot faster, and came in higher detail and resolution than was possible in 2D.

Back then it was a separate card that you hooked up to your 2D card.

[screenshot: 2D game in software, ~15 fps]

[screenshot: 3D game with a Voodoo card plugged in, ~30 fps and looking much better]

Quake, Unreal, Half-Life: those games changed the PC industry (along with a ton of other games).

Hell yeah Tomb Raider! :)

I remember playing MotoRacer 2 on our Compaq Presario when I was a kid. It had a Riva TNT chip and the game looked gorgeous. That was my first experience with 3D GPUs in a computer.
 

nanaki333

Diamond Member
Sep 14, 2002
Someone already beat me to it, but...

A CPU is made to do many things okay. A GPU is made to do one thing great. :)
 

Jeff7181

Lifer
Aug 21, 2002
One thing I haven't really seen mentioned is parallelism. Sure, we have dual-CPU (socket) systems and multi-core systems... and we have for some time. Back in the late 90s there was talk of the next generation of video cards being capable of per-pixel shading... meaning computing the color of each individual pixel on the screen. Then people started talking about putting two pixel pipelines in a GPU for twice the pixel processing power. Then you saw four... then eight... then 16... and so on. Now we have video cards with 1536 "pipelines" per GPU... and we have dual-GPU video cards available. And the ability to put two dual-GPU video cards in a single system for a total of 6144 "pipelines."

This is why we have discrete GPUs. Graphics processing is easy to parallelize. Windows wouldn't run much faster on a 6144-core processor than it would on a 4-core processor, because general tasks are more difficult to do in parallel, or they complete so quickly on modern CPUs that running them in parallel wouldn't be beneficial.
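To illustrate what "easy to parallelize" means (a toy sketch of my own, nothing GPU-specific): each pixel's color can be computed from its own inputs alone, so the exact same function can be farmed out to 4 cores or 6144 "pipelines" with no coordination between pixels.

```python
# Toy per-pixel workload: every pixel is computed independently from its
# coordinates, so the work splits trivially across however many workers exist.
from multiprocessing import Pool

WIDTH, HEIGHT = 320, 240

def shade(pixel_index):
    """Stand-in shading formula: one pixel's value from its (x, y) alone."""
    x, y = pixel_index % WIDTH, pixel_index // WIDTH
    return (x * y) % 256

if __name__ == "__main__":
    with Pool() as pool:
        # The "framebuffer" is just the independent results stitched back together.
        framebuffer = pool.map(shade, range(WIDTH * HEIGHT))
    print(len(framebuffer), framebuffer[:8])
```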

*EDIT* Now... if you could offload something like a SQL Server workload to a GPU, that would be interesting. We might buy $5,000 servers with $1000 video cards at work for our database servers instead of multiple 80-core $75,000 servers.
 

Mark R

Diamond Member
Oct 9, 1999
ASICs will always be several orders of magnitude better than software for a given process generation.

Very true. However, we've seen quite a shift in the design of GPUs over the last 10 years or so.

Originally, they were true ASICs. They had a number of fixed-function pipelines: texture filtering, matrix processing, Z-buffer, lighting functions. They accelerated the operations the designers had anticipated. If an application needed something that was not incorporated into the ASIC (e.g. Blinn-Phong shading), then the card couldn't do it, and the renderer would need to fall back to software.

The next trend was providing a limited range of programmability in the most important components of graphics rendering. First came "pixel shaders", which at the outset were a way of supplying very simple programs for the final rendering stage. They couldn't do much; you specified a very limited set of operations (multiplications, additions, dot products, etc.) and that was it. Traditional computing concepts (loops, procedures, and even things as basic as an "if" statement) weren't available to programmers. However, for things that could be expressed in basic algebra (like Blinn-Phong shading), these programmable shaders were a revolution.
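To make that concrete: Blinn-Phong's specular term is nothing but per-pixel multiplies, adds, and dot products, which is exactly the kind of straight-line algebra those first shaders could express. A rough sketch of the math (my own illustration, not actual shader code):

```python
# Blinn-Phong specular term: (N . H)^shininess, where H is the normalized
# half-vector between the light direction and the view direction.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def blinn_phong_specular(normal, light_dir, view_dir, shininess=32):
    n = normalize(normal)
    half = normalize(tuple(a + b for a, b in zip(normalize(light_dir), normalize(view_dir))))
    return max(dot(n, half), 0.0) ** shininess

# Light and viewer both roughly overhead -> a strong highlight (close to 1.0).
print(blinn_phong_specular((0, 0, 1), (0, 0.2, 1), (0, -0.2, 1)))
```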

Gradually, other components of the GPU got programmability. Vertex shaders came next (which gave access to customised, accelerated perspective/distortion processing), then geometry shaders (which allowed the programmer to create shapes and surfaces directly on the GPU, rather than using the CPU to compute them).

Up until a few years ago, you could still see dedicated "geometry", "pixel", and "texture" pipelines on the GPU die. Nowadays, there has been a move to a more unified model. All the different pipelines have absorbed so much flexibility that the better design is simply a generalisable processor core that can do all the necessary operations.

Now, of course, we have the concept of the GPGPU, and HPC frameworks such as CUDA and OpenCL have pushed GPUs out of the domain of gaming and specialist data-visualisation industries into common scientific and industrial use.

In effect, a modern GPU contains hundreds or thousands of basic general purpose CPU cores. These cores are highly optimised for the type of vector operations used in graphics, and less optimised for complex branching, etc. (where the bulk of the complexity in a modern CPU lies).

So efficient are these GPUs for certain types of parallelizable problem, and so state-of-the-art are the processes and layouts they are built on, that they can outperform FPGAs and even ASICs in some circumstances. A number of groups tried to get onto the Bitcoin bandwagon and spent considerable resources developing FPGA "bitcoin mining" machines, with a view to ASIC production (in an attempt to corner the market). After creating a preliminary design, there was a near-universal realisation that they would never be able to build a product that could outperform (in either speed or energy consumption) certain models of GPU running a software program by any kind of relevant margin.
 

bunnyfubbles

Lifer
Sep 3, 2001
I think the biggest issue is memory bandwidth.

Once IBM and Micron put out 128 GB/s memory sticks (more than 10x what we currently have), thanks to stacking chips in 3D like a skyscraper, we might just see PCs with 256 GB/s of memory bandwidth for their IGPs.

If that happens, the sky is the limit (or rather, TDP and cooling solutions are).
A year or two from now (with that new tech out), we could be looking at faster-than-5870 speeds for the IGPs in CPUs.

DDR4 is supposed to get rid of dual/triple/quad-channel configurations in favor of point-to-point (i.e. you have as many channels as you have DIMMs), which means even mainstream rigs with only 4 DIMMs would effectively move from 128-bit to 256-bit memory configurations.

The days of IGPs being neutered by memory bandwidth are soon to be over.
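The rough math behind that (a sketch; the DDR4 speed below is just an assumed example, not a spec commitment):

```python
# Aggregate memory bandwidth = channels x bytes per transfer x transfer rate.
def bandwidth_gb_per_s(channels, bus_width_bits, megatransfers_per_s):
    return channels * (bus_width_bits / 8) * megatransfers_per_s / 1000

# Today: dual-channel DDR3-1600, 64 bits per channel
print(bandwidth_gb_per_s(2, 64, 1600))  # 25.6 GB/s

# One channel per DIMM: 4 DIMMs of (assumed) DDR4-2133, 64 bits each
print(bandwidth_gb_per_s(4, 64, 2133))  # ~68.3 GB/s
```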
 

bunnyfubbles

Lifer
Sep 3, 2001
I remember seeing my first game in 3D.
My brother had a much weaker PC than mine; however, he got a Voodoo 1 card as a present, and suddenly his games looked much better and ran smoother and faster than on my PC.

How did GPUs become a success? They beat the living sh*t out of 2D acceleration. Suddenly you had no more blocky, squared-off pixels, but smoothed-out sprites and textures that looked a lot better, ran a lot faster, and came in higher detail and resolution than was possible in 2D.

Back then it was a separate card that you hooked up to your 2D card.

[screenshot: 2D game in software, ~15 fps]

[screenshot: 3D game with a Voodoo card plugged in, ~30 fps and looking much better]

Quake, Unreal, Half-Life: those games changed the PC industry (along with a ton of other games).

absolutely this

I remember playing Tribes on a 300 MHz Pentium II with 64MB of RAM and a measly 4MB 2D video card, having to play that game in software mode and often suffering through extended periods of single-digit frame rates, all in a competitive multiplayer game...

Luckily it was a class-oriented game with tasks even I could accomplish (such as base maintenance), but the day I plugged in a Voodoo3 3000 (coincidentally we had also just upgraded to cable internet, so no more high pings for me), I felt like a blind man given the gift of sight and immediately went from an average player to beast mode (it also helped that my cable company ran a server on their backbone for a while, giving me a ping no higher than 8 ms ;)).