How is entering the GPU industry too expensive?


s44

Diamond Member
Oct 13, 2006
9,427
16
81
Maybe Imagination Tech could get back into the desktop space? Their PowerVR designs are powering most of the smartphone/tablet SoCs out there (Hummingbird, A4, A5, TI OMAP -- the others are powered by AMD, Nvidia, and ARM designs), as well as Intel's old GMA500. So they actually have competent engineers already...

Why they'd ever want to do that, I have no idea. But it's the only scenario I can imagine.
 

MagnusTheBrewer

IN MEMORIAM
Jun 19, 2004
24,122
1,594
126
There's also a huge gap between engineering a new chip and putting it into production. When I worked at Motorola, the engineers were famous for designing chips that required redesigning the equipment that manufactured them, with all the attendant quality problems.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
I think you are overstating the relevance of the IP licensing angle

Actually, I think he's probably understating it by a lot. If an appeal doesn't come through soon, Apple is going to be barred from selling computers with AMD GPUs in certain countries because they lack the proper S3TC license and the chips use the technology. Relatively speaking, it is a very simple patent, but it is just a single example of the minefield of IP involved in the graphics industry, and a major reason why Intel failed despite throwing billions at the problem while already having enormous resources in every area outside of IP.
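
(To give a sense of how small the disputed technique actually is, here is a rough Python sketch of my own, not anyone's actual implementation, of decoding one 4x4 S3TC/DXT1 block: two packed 16-bit endpoint colors, a tiny interpolated palette, and a 2-bit index per texel.)

import struct

def rgb565_to_rgb888(c):
    # Expand a packed 5:6:5 color to 8 bits per channel.
    r, g, b = (c >> 11) & 0x1F, (c >> 5) & 0x3F, c & 0x1F
    return (r * 255 // 31, g * 255 // 63, b * 255 // 31)

def decode_dxt1_block(block):
    # 'block' is the raw 8 bytes of one 4x4 S3TC/DXT1 block.
    c0, c1, bits = struct.unpack('<HHI', block)
    p0, p1 = rgb565_to_rgb888(c0), rgb565_to_rgb888(c1)
    if c0 > c1:  # opaque mode: two extra interpolated palette entries
        p2 = tuple((2 * a + b) // 3 for a, b in zip(p0, p1))
        p3 = tuple((a + 2 * b) // 3 for a, b in zip(p0, p1))
    else:        # punch-through alpha mode: midpoint plus transparent black
        p2 = tuple((a + b) // 2 for a, b in zip(p0, p1))
        p3 = (0, 0, 0)
    palette = (p0, p1, p2, p3)
    # One 2-bit palette index per texel, texel 0 in the lowest bits.
    return [palette[(bits >> (2 * i)) & 0x3] for i in range(16)]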

Back in the early days of real-time 3D technology (think mid 90s) we didn't even know what we were going to be using for primitives (quads, voxels, polys, etc.). We didn't know what sort of render structure we were going to be using (ray, scene graph, raster), and we didn't know how we were going to apply images to these surfaces (procedural shading, per-face color fill, texture mapping, etc.). None of the things we take for granted today were obvious, and there are many books written in that era that make this clear. If you want to build a GPU today, you have a couple of ways of going about it. You can sell your soul to nVidia, who *might* be willing to license some crucial IP, and then strike a deal with AMD (their IP is much weaker than nV's, but still strong enough to shut any startup down). To give you an idea of how much of an obstacle the IP we are talking about is: you can't even apply a filtered texture map to a polygon using dedicated hardware without a licensing agreement.
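
(And the filtering itself is conceptually tiny. Here is a minimal software sketch of bilinear texture filtering, again just my own illustration of roughly what the patented fixed-function hardware does for every textured pixel:)

def bilinear_sample(texture, u, v):
    # texture: 2D list of RGB tuples; u, v: normalized coordinates in [0, 1].
    # Dedicated hardware does this (and more) billions of times per second.
    h, w = len(texture), len(texture[0])
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0

    def lerp(a, b, t):
        return tuple(ca + (cb - ca) * t for ca, cb in zip(a, b))

    top = lerp(texture[y0][x0], texture[y0][x1], fx)
    bottom = lerp(texture[y1][x0], texture[y1][x1], fx)
    return lerp(top, bottom, fy)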

The alternate route is to swap to a completely different rendering type. There are numerous options you could take, ray tracing being the most likely candidate at this point, but then you run into a massive issue: you need software support to run on your hardware. If you look back at the NV1 you will see that nVidia decided to bet on quads being the dominant primitive. Carmack went with polys, as did 3dfx, and that was that. It took nV years to recover and eventually assert their dominance; they are honestly very lucky to have survived that era. As of right now, there are hundreds of millions of devices in the world that use polygon-based rasterized images, and every software developer can count on that.

Intel tried to rock this boat and failed, losing billions in the process. That in itself should tell you exactly how obscenely high the price of entry into this market is. One of the wealthiest companies in the world, one that already employs thousands of the best engineers and holds a clear fabrication lead, spent billions of dollars and failed to even show up to the fight.

The largest chip had more transistors on it than a Pentium 3.

This is a good example. A company trying to design something as simple as a Pentium 3-class part was burning ~$100 million a year. That is pocket-calculator complexity by today's standards: a minuscule, simplistic 9.5 million transistor device. Back in those days, when designs were very simple, a relatively modest sum of money could compete. Today we are looking at 3 billion transistors. Here legacy plays an *enormous* role: AMD and nVidia don't have to design a TMU, they don't have to design a triangle setup engine, they don't have to design a shader pipeline; they have already done all of this.

What we see today when we look at new GPUs is over a decade's worth of $100 million+ per quarter in R&D, and none of that vanishes. Neither AMD nor nVidia is reinventing the wheel. Even if you look at AMD's upcoming shift in shader structure, it's that one element of the chip plus some refinements throughout.

If you were to start today, you would likely be looking at somewhere in the 15-25 billion transistor range for the part you are going to launch. Think about that for a bit. If every engineer you hired were capable of designing a chip with the complexity of a Pentium 3 by themselves, you would need ~1,578 engineers (using the low end of the estimate). Finding *one* engineer on the face of the Earth capable of such a task within a decade is, I would say, nigh impossible. Finding fifteen hundred of them willing to work for peanuts?
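
(The arithmetic behind that figure, for anyone who wants to check it:)

# Back-of-the-envelope check of the engineer estimate above.
pentium3_transistors = 9.5e6   # the ~9.5 million transistor P3 mentioned above
new_gpu_transistors = 15e9     # low end of the 15-25 billion guess
print(int(new_gpu_transistors / pentium3_transistors))   # -> 1578 "P3-sized" design efforts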
 

A5

Diamond Member
Jun 9, 2000
4,902
5
81
As a new company, you'd be lucky to license all the patents you would need to design a GPU for $1B. On top of that you would need several thousand engineers (and you can't hire anyone who did GPU work at AMD/NVidia/Intel/Imagination/etc without running into even more IP issues...) to execute perfectly in hardware design, fabrication, QA, drivers, and software to even have a chance of being competitive when you are done.

Look at mobile SoCs, which are far less complex than modern GPUs, for example - Nvidia (which is one of the top engineering companies in the world) has a huge team working on their mobile products, using an architecture that is largely licensed from ARM, and their first product was a flop. Look at the kind of resources that TI, Samsung, Qualcomm, and Apple put into their mobile products as well...
 

WMD

Senior member
Apr 13, 2011
476
0
0
The common belief is that Intel is a failure in this area because of Larrabee. However, Larrabee is not a direct competitor in the discrete GPU market but more of a niche product focused on GPGPU computing.

Intel could definitely make a discrete GPU to compete with AMD or Nvidia if they wanted to. Considering that their SNB IGP packs quite a punch while taking up only a tiny fraction of the CPU die, what is stopping them from scaling up and building a high-end discrete GPU? They have the science, and their 22nm 3D transistor technology probably surpasses what AMD/NV use at TSMC. But the profits are probably not worth the outlay.
 

A5

Diamond Member
Jun 9, 2000
4,902
5
81
The common belief is that Intel is a failure in this area because of Larrabee. However, Larrabee is not a direct competitor in the discrete GPU market but more of a niche product focused on GPGPU computing.

Intel could definitely make a discrete GPU to compete with AMD or Nvidia if they wanted to. Considering that their SNB IGP packs quite a punch while taking up only a tiny fraction of the CPU die, what is stopping them from scaling up and building a high-end discrete GPU? They have the science, and their 22nm 3D transistor technology probably surpasses what AMD/NV use at TSMC. But the profits are probably not worth the outlay.

Well, the idea with Larrabee was to back a big GPGPU chip (which also happened to be a bunch of simple x86 cores with very wide vector units, a forerunner of AVX) with a really good software renderer, and to have fixed-function graphics hardware only for certain things like applying textures. One of the goals of Larrabee was to eventually produce a consumer GPU, but they couldn't get the software renderer to work at the speeds needed to be competitive. If you have access to the IEEE journals, the articles from the time are pretty interesting.
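
(For a flavor of the kind of work Larrabee pushed into software, here is a toy scalar Python sketch of edge-function rasterization. This is my own illustration, not Intel's code; the real renderer binned triangles into screen tiles and evaluated these tests 16-wide in vector registers, but the idea is the same.)

def rasterize_triangle(v0, v1, v2, width, height, shade):
    # v0, v1, v2: (x, y) screen-space vertices with counter-clockwise winding.
    # 'shade' is called once per covered pixel with the three edge weights.
    def edge(a, b, px, py):
        # Signed area test: >= 0 when (px, py) lies on the inside of edge a->b.
        return (b[0] - a[0]) * (py - a[1]) - (b[1] - a[1]) * (px - a[0])

    xs = [v0[0], v1[0], v2[0]]
    ys = [v0[1], v1[1], v2[1]]
    min_x, max_x = max(0, int(min(xs))), min(width - 1, int(max(xs)))
    min_y, max_y = max(0, int(min(ys))), min(height - 1, int(max(ys)))

    for y in range(min_y, max_y + 1):
        for x in range(min_x, max_x + 1):
            w0 = edge(v1, v2, x, y)
            w1 = edge(v2, v0, x, y)
            w2 = edge(v0, v1, x, y)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                shade(x, y, w0, w1, w2)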

I wouldn't be at all surprised if the ideas from the Larrabee project turn up in a future Intel heterogeneous CPU project, but they clearly decided that they couldn't compete in the dGPU market at this time.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
However, Larrabee is not a direct competitor in the discrete GPU market but more of a niche product focused on GPGPU computing.

Care to explain why they had Abrash working on the project?

Larrabee was absolutely meant to be a discrete GPU competitor; it is the only way economies of scale could make it a viable defensive approach against nVidia in the HPC market.
 

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
www.facebook.com
Actually, I think he's probably understating it by a lot. If an appeal doesn't come through soon, Apple is going to be barred from selling computers with AMD GPUs in certain countries because they lack the proper S3TC license and the chips use the technology. Relatively speaking, it is a very simple patent, but it is just a single example of the minefield of IP involved in the graphics industry, and a major reason why Intel failed despite throwing billions at the problem while already having enormous resources in every area outside of IP.

Back in the early days of real-time 3D technology (think mid 90s) we didn't even know what we were going to be using for primitives (quads, voxels, polys, etc.). We didn't know what sort of render structure we were going to be using (ray, scene graph, raster), and we didn't know how we were going to apply images to these surfaces (procedural shading, per-face color fill, texture mapping, etc.). None of the things we take for granted today were obvious, and there are many books written in that era that make this clear. If you want to build a GPU today, you have a couple of ways of going about it. You can sell your soul to nVidia, who *might* be willing to license some crucial IP, and then strike a deal with AMD (their IP is much weaker than nV's, but still strong enough to shut any startup down). To give you an idea of how much of an obstacle the IP we are talking about is: you can't even apply a filtered texture map to a polygon using dedicated hardware without a licensing agreement.

The alternate route is to swap to a completely different rendering type. There are numerous options you could take, ray tracing being the most likely candidate at this point, but then you run into a massive issue: you need software support to run on your hardware. If you look back at the NV1 you will see that nVidia decided to bet on quads being the dominant primitive. Carmack went with polys, as did 3dfx, and that was that. It took nV years to recover and eventually assert their dominance; they are honestly very lucky to have survived that era. As of right now, there are hundreds of millions of devices in the world that use polygon-based rasterized images, and every software developer can count on that.

Intel tried to rock this boat and failed, losing billions in the process. That in itself should tell you exactly how obscenely high the price of entry into this market is. One of the wealthiest companies in the world, one that already employs thousands of the best engineers and holds a clear fabrication lead, spent billions of dollars and failed to even show up to the fight.

This is a good example. A company trying to design something as simple as a Pentium 3-class part was burning ~$100 million a year. That is pocket-calculator complexity by today's standards: a minuscule, simplistic 9.5 million transistor device. Back in those days, when designs were very simple, a relatively modest sum of money could compete. Today we are looking at 3 billion transistors. Here legacy plays an *enormous* role: AMD and nVidia don't have to design a TMU, they don't have to design a triangle setup engine, they don't have to design a shader pipeline; they have already done all of this.

What we see today when we look at new GPUs is over a decade's worth of $100 million+ per quarter in R&D, and none of that vanishes. Neither AMD nor nVidia is reinventing the wheel. Even if you look at AMD's upcoming shift in shader structure, it's that one element of the chip plus some refinements throughout.

If you were to start today, you would likely be looking at somewhere in the 15-25 billion transistor range for the part you are going to launch. Think about that for a bit. If every engineer you hired were capable of designing a chip with the complexity of a Pentium 3 by themselves, you would need ~1,578 engineers (using the low end of the estimate). Finding *one* engineer on the face of the Earth capable of such a task within a decade is, I would say, nigh impossible. Finding fifteen hundred of them willing to work for peanuts?
I wouldn't pay engineers peanuts, but I would pay myself less if I thought it would help things out, and I would employ the optimum number of people and make their working conditions as good as possible. A good entrepreneur also invests their money wisely and makes good use of capital. I'm sorry, but we actually don't know how well nV and AMD use their capital.

Good point about the IP. I'm not surprised I was understating the IP barriers. Basically, without IP, someone could take an nV or ATI design, study it, figure out how to reverse engineer it, modify it so it is better, and then sell it for less. If nV, ATI, or whoever doesn't like the loss of legal privileges, then they can always counter if they're creative enough.

As for nVidia/AMD losing money because of no IP: being first to the market gets people a lot of money, and then building on what they initially brought to the market will keep them from losing money. Sure, many corporations would have less profit without IP, but just because they would have less doesn't mean that they would have zero profit margins or a loss. Businesses would be a lot smaller, and there would be more competition without IP. And I don't think the cost-benefit analysis should matter, because we could argue all day whether it's good for some companies to make above-market profit margins, as IP is totally unethical IMO.

I'm not saying IDC is wrong; he could very well be right and I could be wrong. But I can't be sure. He makes excellent points, but Warren Buffett actually lost money this past year from what I heard.

I generally tend to take an extreme minority position, and that's why I have faith in a lot of what I think.
 

Spikesoldier

Diamond Member
Oct 15, 2001
6,766
0
0
It just is. It would be like starting up your own international airline, unless your name is Richard Branson and you have billions of pounds sterling in your Scrooge McDuck money bin.
 

Capt Caveman

Lifer
Jan 30, 2005
34,543
651
126
I wouldn't pay engineers peanuts, but I would pay myself less if I thought it would help things out, and I would employ the optimum number of people and make their working conditions as good as possible. A good entrepreneur also invests their money wisely and makes good use of capital. I'm sorry, but we actually don't know how well nV and AMD use their capital.

Good point about the IP. I'm not surprised I was understating the IP barriers. Basically, without IP, someone could take an nV or ATI design, study it, figure out how to reverse engineer it, modify it so it is better, and then sell it for less. If nV, ATI, or whoever doesn't like the loss of legal privileges, then they can always counter if they're creative enough.

As for nVidia/AMD losing money because of no IP: being first to the market gets people a lot of money, and then building on what they initially brought to the market will keep them from losing money. Sure, many corporations would have less profit without IP, but just because they would have less doesn't mean that they would have zero profit margins or a loss. Businesses would be a lot smaller, and there would be more competition without IP. And I don't think the cost-benefit analysis should matter, because we could argue all day whether it's good for some companies to make above-market profit margins, as IP is totally unethical IMO.

I'm not saying IDC is wrong; he could very well be right and I could be wrong. But I can't be sure. He makes excellent points, but Warren Buffett actually lost money this past year from what I heard.

LOL and facepalm.

Yes, I should spend billions of dollars to develop something only to have someone steal my invention.

I generally tend to take an extreme minority position, and that's why I have faith in a lot of what I think.

Your OP highlights your lack of understanding of how the real world works. Based on your beliefs, the world would still be in the dark ages as you still believe in the gold standard and want to eliminate the desire to innovate. Again, how old are you and what do you do for a living?

Your example of Whole Foods is pretty funny. You do know that the CEO of Whole Foods makes millions of dollars a year, right?
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
I wouldn't be at all surprised if the ideas from the Larrabee project turn up in a future Intel heterogeneous CPU project, but they clearly decided that they couldn't compete in the dGPU market at this time.

http://www.tacc.utexas.edu/news/press-releases/2011/stampede

The cluster will also include a new innovative capability: Intel® Many Integrated Core (MIC) co-processors codenamed "Knights Corner," providing an additional 8 petaflops of performance.
 

A5

Diamond Member
Jun 9, 2000
4,902
5
81
I wouldn't pay engineers peanuts, but I would pay myself less if I thought it would help things out, and I would employ the optimum number of people and make their working conditions as good as possible. A good entrepreneur also invests their money wisely and makes good use of capital. I'm sorry, but we actually don't know how well nV and AMD use their capital.

If you have a magical formula or instinct to figure this part out, you would be making tons of money as an HR consultant and not posting on AT.

I also think you're vastly overestimating the first-mover advantage in the GPU space and dramatically discounting the value of IP to these companies. If you could create a part that was essentially a Radeon 4890 on a modern process for <$100, you would dominate the market. AMD and Nvidia wouldn't make enough money to survive and make new parts.
 

WMD

Senior member
Apr 13, 2011
476
0
0
Intel doesn't need to compete directly with AMD/NV in the discrete GPU market. Their latest HD 3000 is over 200% as fast as the previous-generation GMA HD and already outperforms entry-level discrete GPUs like the 5450. AMD, on the other hand, takes two generations and three years for a 2x performance jump in their flagship GPU (4870 to 6970), and their entry-level GPUs are as weak as ever. It won't take long for Intel IGPs to make entry-level and mainstream discrete GPUs redundant.

http://www.xbitlabs.com/articles/video/display/intel-hd-graphics-2000-3000_7.html

http://ht4u.net/reviews/2011/treibervergleich_intel_2361/index7.php

http://www.youtube.com/watch?v=g00Y5K-SE-4&feature=related
 

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
www.facebook.com
LOL and facepalm.

Yes, I should spend billions of dollars to develop something only to have someone steal my invention.



Your OP highlights your lack of understanding of how the real world works. Based on your beliefs, the world would still be in the dark ages as you still believe in the gold standard and want to eliminate the desire to innovate. Again, how old are you and what do you do for a living?

Your example of Whole Foods is pretty funny. You do know that the CEO of Whole Foods makes millions of dollars a year, right?
Like I said, we don't know what nVidia's and ATI's overhead is or how much they pay their employees. Ideas can't be stolen, because once someone has an idea, they always have it.

I don't know whether John Mackey still makes millions a year.
 

bononos

Diamond Member
Aug 21, 2011
3,928
186
106
Like I said, we don't know what nVidia's and ATI's overhead is or how much they pay their employees. Ideas can't be stolen, because once someone has an idea, they always have it.
.......

You're being philosophical and unrealistic (are you one of those Austrians?). Your no-IP stance only makes sense in an era when individual workers churned out products with their skilled hands and simple tools. Stealing 'IP' back then could not reproduce the same products unless the rogue company also had workers of the same skill.

Nvidia/ATI designs are embodied in their graphics cards, so if a rogue company steals the designs it can make products as good as Nvidia/ATI's without expending an extra drop of sweat. It's the same with books, music CDs, designer fashion, etc. Do you think pirate CD sellers should be able to use the defense that they didn't steal anything, because Beyoncé or whoever will always remember what they sang?
 

Ovven

Member
Feb 13, 2005
75
0
66
Intel doesn't need to compete directly with AMD/NV in the discrete GPU market. Their latest HD 3000 is over 200% as fast as the previous-generation GMA HD and already outperforms entry-level discrete GPUs like the 5450. AMD, on the other hand, takes two generations and three years for a 2x performance jump in their flagship GPU (4870 to 6970), and their entry-level GPUs are as weak as ever. It won't take long for Intel IGPs to make entry-level and mainstream discrete GPUs redundant.

First of all, Intel doesn't exactly have the same image quality as Nvidia/AMD, because they sacrificed that to boost their FPS. I'd like to see the actual performance of Intel's IGPs without these shenanigans. Also, Intel's driver support is absolutely horrible: the moment Intel releases a new generation of IGPs, it pretty much discards support for the older stuff.
 

WMD

Senior member
Apr 13, 2011
476
0
0
First of all, Intel doesn't exactly have the same image quality as Nvidia/AMD, because they sacrificed that to boost their FPS. I'd like to see the actual performance of Intel's IGPs without these shenanigans. Also, Intel's driver support is absolutely horrible: the moment Intel releases a new generation of IGPs, it pretty much discards support for the older stuff.

Their image quality has actually improved a lot over the earlier IGPs; graphical anomalies are few now. The driver support is a lot better as well, and the last few revisions improved HD 3000/2000 performance significantly. Intel doesn't gain performance by intentionally cheating on IQ. That they can squeeze so much performance out of a 12 EU IGP is really down to their silicon technology, which allows clocks of 1.35GHz with ample overclocking headroom to spare. The IVB IGP on 22nm 3D transistors will be even more impressive.