Xbit: Nvidia needs access to x86 for long-term survival

OCGuy

Lifer
Jul 12, 2000
Well, seeing how graphics adapters are part of the reason people even upgrade computers, I don't see them going away.

Hell, even AMD says that we haven't seen anything yet as far as discrete cards go.

"Fusion" and Intel on-die GPUs are only going to replace the current integrated graphics, not enthusiast cards.
 

bryanW1995

Lifer
May 22, 2007
Nothing, if their current level of success with graphics continues. They would have already cut out Nvidia if they could, but that would just completely cede the high-end gaming platform to AMD. And let's face it, as much as they dislike Nvidia, the two don't really compete against each other in most markets right now. AMD, on the other hand, is Intel's primary rival in Intel's primary market.

That article doesn't really offer much new info, though it was interesting to see that VIA's license now runs through 2018. IMHO VIA is too far behind for Nvidia to get them up to speed for many years, and by the time they could get truly competitive the license would expire. Plus, Intel has a LOT more money to throw into CPU research between now and then.

Nvidia would be better off merging with AMD or getting bought out by Intel. Well, that, or they can continue pushing the supercomputing envelope and hope that ARM catches up with Intel/AMD.
 

tviceman

Diamond Member
Mar 25, 2008
http://www.xbitlabs.com/news/video/...chnology_for_Long_Term_Survival_Analysts.html

Differing opinions?

Apparently Intel will keep the PCI Express bus around for another 6 years, but what happens after that?

From a performance and evolution standpoint, do you think GPUs will still be plugging into PCI-E slots in 4-5 years? As for the x86 situation, I think a low-end to mid-range x86 APU from Nvidia would be great competition for Intel. Easier said than done, though.
 

lambchops511

Senior member
Apr 12, 2005
The problem is that NVIDIA is getting squeezed out of its traditional markets.

With both Intel and AMD integrating GPUs into their CPUs, and both producing and promoting their own chipsets, NVIDIA's old cash cow -- producing chipsets and low-end integrated graphics -- was essentially destroyed a couple of years ago.

With the chipset business in-house, both Intel and AMD can lower the prices of their CPUs while making their margins on the chipsets. That goes especially for Intel, whose chipsets are produced on last-generation processes (basically free production capacity for Intel).

This has left NVIDIA with only one traditional market -- discrete GPUs, the mid-range and high end for gamers. The problem is that this market is still relatively small (a small percentage of revenue compared to the overall chipset/GPU market), and NVIDIA still faces fierce competition from ATI in it.

Naturally, NVIDIA knows this and needs to do something to survive, so it has spawned several new businesses: ION for smartphones and embedded devices, and CUDA on the server side. The value of CUDA is not in selling it to home users for Photoshop; the value is in selling it to Wall Street hedge funds, oil & gas companies, and biomedical corporations at 10x the margin.
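To make that concrete, here is a minimal, hypothetical CUDA sketch of the kind of embarrassingly parallel number crunching those customers pay for: one thread per independent scenario, a million scenarios per kernel launch. The kernel name and the "pricing" math are made up purely for illustration; this is not any real model.

Code:
// Hypothetical example: evaluate a toy payoff over ~1M independent scenarios.
// Each CUDA thread handles one scenario; this is the throughput-style workload
// (finance, seismic, bio-medical) discussed above, not a real pricing model.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

__global__ void price_scenarios(const float* spot, float strike,
                                float* payoff, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // Stand-in math: discounted call payoff, one scenario per thread.
        payoff[i] = fmaxf(spot[i] - strike, 0.0f) * expf(-0.05f);
    }
}

int main()
{
    const int n = 1 << 20;                           // ~1 million scenarios
    std::vector<float> spot(n, 105.0f), payoff(n);   // dummy input data

    float *d_spot = nullptr, *d_payoff = nullptr;
    cudaMalloc((void**)&d_spot, n * sizeof(float));
    cudaMalloc((void**)&d_payoff, n * sizeof(float));
    cudaMemcpy(d_spot, spot.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks  = (n + threads - 1) / threads; // enough blocks to cover n
    price_scenarios<<<blocks, threads>>>(d_spot, 100.0f, d_payoff, n);

    cudaMemcpy(payoff.data(), d_payoff, n * sizeof(float), cudaMemcpyDeviceToHost);
    std::printf("payoff[0] = %f\n", payoff[0]);

    cudaFree(d_spot);
    cudaFree(d_payoff);
    return 0;
}

The same kernel scales from a small GeForce to a big Tesla card simply by scheduling more blocks across however many multiprocessors the chip has, which is essentially the sales pitch.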

On the smartphone side, they are pushing into a high-volume, low-margin business. Their major advantage is graphics, but there are plenty of other companies out there providing graphics pipelines for handheld devices (e.g. PowerVR, Qualcomm).

On the server side, they have to exploit this market while they are still in the lead. ATI/AMD has more or less abandoned it for now due to the cash crunch two years ago; they decided not to spend the design effort and die area required for GPGPU computing, leaving the entire market to NVIDIA for the moment. However, Intel is very interested in pursuing this market. Larrabee was never meant to be a consumer product, but rather a massive floating-point computational array aimed at the enterprise server market. Luckily for NVIDIA, Intel has stumbled for now -- but Intel has a huge advantage over NVIDIA on this front: NVIDIA has to pay TSMC for its wafers, while Intel owns its own fabs.

I highly doubt NVIDIA will ever produce an x86 chip. The x86 decoder is highly inefficient and power hungry. One problem with NVIDIA developing an x86 chip from scratch is the cost and time required. They are still financially quite strong, but not strong enough to develop an x86 chip, have it potentially fail, and still pump research dollars into the rest of the business. As for time, it would take at least 4-6 years to produce a first-generation chip, in a market that is changing rapidly -- a very high risk to take. The other major problem is the lack of engineers to hire: there aren't that many great CPU architects out there, and the good ones are all employed by IBM, Intel, and AMD. You don't want to hire just any architect, you want to hire the best -- and you have to pay top dollar for that.

If NVIDIA does intend to enter the CPU market, I see them doing it with ARM, with their GPU integrated onto it -- i.e. a system on a chip, so basically ION again. The netbook market has proved that it doesn't need Windows; it can survive with Linux. And if you can survive with Ubuntu, why do you need x86, when Linux runs just fine on ARM?

Having said that, will NVIDIA purchase VIA? If they were in a better financial situation with better cash flow, it would be possible -- but right now? I would have to say no. Even though the new GF104 Fermi chips are doing quite well, they need to replenish their war chest and still fund the CUDA business. Plus, the VIA architecture isn't that great or something to be proud of (yes, it's better than Atom, but only because Atom sucks, not because VIA is amazing).

I would really like to see NVIDIA move more towards the ARM architecture -- once you have an ARM license you basically have the entire chip (i.e. no designing from scratch), and from there the possibilities are much broader, built on an architecture that is far less encumbered by legacy functionality.
 

Idontcare

Elite Member
Oct 10, 1999
Considering how long it has taken a much larger, much better-resourced company like AMD to integrate ATI and produce a combined product (4-5 yrs), it just doesn't strike me as all that viable for a smaller, less-resourced company like Nvidia to buy, or fund a joint project with, an even smaller company like Via.

There simply isn't enough money in this equation...the left-hand side is not going to equal the right-hand side no matter how much hoping and dreaming is involved.
 

lambchops511

Senior member
Apr 12, 2005
Idontcare said:
Considering how long it has taken a much larger, much better-resourced company like AMD to integrate ATI and produce a combined product (4-5 yrs), it just doesn't strike me as all that viable for a smaller, less-resourced company like Nvidia to buy, or fund a joint project with, an even smaller company like Via.

There simply isn't enough money in this equation...the left-hand side is not going to equal the right-hand side no matter how much hoping and dreaming is involved.

The problem for AMD was bigger: they were trying to integrate something developed on an SOI process with something developed on a bulk-silicon process.

The VIA Nano is manufactured at TSMC as well, so much of the design (schematic/layout) could be reused... but there is still a lot more to building an integrated CPU-GPU than simply copying and pasting one logic block next to another.
 

akugami

Diamond Member
Feb 14, 2005
OCGuy said:
Well, seeing how graphics adapters are part of the reason people even upgrade computers, I don't see them going away.

Hell, even AMD says that we haven't seen anything yet as far as discrete cards go.

"Fusion" and Intel on-die GPUs are only going to replace the current integrated graphics, not enthusiast cards.

The problem is that with process shrinks and improved GPU architectures, integrated GPUs become more and more powerful. Who's to say that in 3-5 years' time the average consumer won't be completely satisfied by the power of integrated graphics? Not to mention that most systems will use the on-die GPU and skip the added cost of a discrete GPU except on upper-mid-range to high-end computers. This is nVidia's fear.
 

cbn

Lifer
Mar 27, 2009
lambchops511 said:
The value of CUDA is not in selling it to home users for Photoshop; the value is in selling it to Wall Street hedge funds, oil & gas companies, and biomedical corporations at 10x the margin.

Could some of this be done in conjunction with IBM CPUs?

lambchops511 said:
If NVIDIA does intend to enter the CPU market, I see them doing it w/ ARM, and with their GPU integrated onto it-- i.e. system on chip

ARM is becoming more interesting day by day (for me at least). But how far can that architecture scale up?

Secondly, do phone operating systems have any advantages when it comes to reducing piracy? The way I currently see things, software pushes the development of hardware, and anything that increases the incentives for programmers would probably help quite a bit (all things being equal), IMHO.
 

lambchops511

Senior member
Apr 12, 2005
cbn said:
Could some of this be done in conjunction with IBM CPUs?

Sure -- but IBM POWER/Cell chips aren't cheap either. Plus, they are still general-purpose cores, not optimized for pure number crunching.

cbn said:
ARM is becoming more interesting day by day (for me at least). But how far can that architecture scale up?

I personally like ARM because of its simple RISC core; it's easy for compilers to optimize, and it's easy to extend. It's more power efficient because it doesn't need a complicated decoder. However, it also isn't as good at branch prediction, and it generally runs at a lower clock frequency (maybe due to the process technologies usually used to make ARM cores, or because designers purposely sacrifice frequency for lower power consumption, given the embedded market ARM cores are geared towards).

Due to the small die size, I wouldn't be surprised if we see many-core ARM chips coming out in the next few years... think 64+ ARM cores on one chip aimed at the server market. The bigger problem is how to feed all those cores at once...
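As a rough back-of-envelope (hypothetical numbers): if each of 64 cores wants even ~2 GB/s of operand traffic, that is ~128 GB/s aggregate, while a dual-channel DDR3-1600 interface tops out around 25 GB/s -- so the memory system, not the core count, quickly becomes the limit.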
 

cbn

Lifer
Mar 27, 2009
lambchops511 said:
I personally like ARM because of its simple RISC core; it's easy for compilers to optimize, and it's easy to extend. It's more power efficient because it doesn't need a complicated decoder. However, it also isn't as good at branch prediction

Would it be possible for you to explain in simple terms why ARM isn't as good at branch prediction? Is there anything ARM could do to fix this, or is it pretty much unchangeable?

lambchops511 said:
and it generally runs at a lower clock frequency (maybe due to the process technologies usually used to make ARM cores, or because designers purposely sacrifice frequency for lower power consumption, given the embedded market ARM cores are geared towards).

Yep, I have been wondering whether an ARM design built on a high-power process would be achievable.

Or is the RAM stacked with the processor a problem?

Certainly higher speed would be welcome for smartphones once 4G networks become more commonplace. According to this AnandTech review, we are still network-bound rather than CPU-bound for 3G surfing.
 

Mr. Pedantic

Diamond Member
Feb 14, 2010
akugami said:
The problem is that with process shrinks and improved GPU architectures, integrated GPUs become more and more powerful. Who's to say that in 3-5 years' time the average consumer won't be completely satisfied by the power of integrated graphics? Not to mention that most systems will use the on-die GPU and skip the added cost of a discrete GPU except on upper-mid-range to high-end computers. This is nVidia's fear.

Yes, there is a limit, and that limit is photorealism. We are not there yet. We are so far from being there that I don't think anyone even has a reliable prediction of when it will happen. It's one thing for a system to spend four hours rendering a single "photorealistic" scene (still with visible aliasing); it's another thing entirely for a GPU to do potentially 120 of those every second, in 3D, for all sorts of different scenes.
 

lambchops511

Senior member
Apr 12, 2005
cbn said:
Would it be possible for you to explain in simple terms why ARM isn't as good at branch prediction? Is there anything ARM could do to fix this, or is it pretty much unchangeable?

Branch prediction costs a lot of money, both in research dollars and in die area. ARM was traditionally aimed at the embedded market, where the apps you run usually don't have a lot of logical branches... and you don't need raw performance either. The focus was on real time (real time means a guaranteed minimum level of performance rather than being fast) instead of on unpredictable raw performance (i.e., what happens when the branch predictor misses?). ARM was traditionally used in "just enough performance" applications -- just enough performance to make a cell phone call.

Branch prediction also uses up quite a bit of area, and since these chips are mass-produced, every mm^2 counts.
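A quick way to see what a mispredict costs (a minimal sketch; this is plain host-side C++ that also builds as CUDA host code, and the array size, threshold, and timings are arbitrary and machine/compiler-dependent): the loop is identical in both runs, only the predictability of the branch changes.

Code:
// Minimal sketch: the same branchy loop, first over random (unpredictable)
// data, then over sorted (predictable) data.
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <random>
#include <vector>

static long long sum_large(const std::vector<int>& data)
{
    long long sum = 0;
    for (int v : data) {
        if (v >= 128)        // the branch of interest: taken ~50% of the time
            sum += v;
    }
    return sum;
}

static void time_it(const char* label, const std::vector<int>& data)
{
    auto t0 = std::chrono::steady_clock::now();
    volatile long long s = sum_large(data);   // volatile: keep the work alive
    (void)s;
    auto t1 = std::chrono::steady_clock::now();
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
    std::printf("%-30s %lld ms\n", label, static_cast<long long>(ms));
}

int main()
{
    std::vector<int> data(1 << 24);           // ~16M values in [0, 255]
    std::mt19937 rng(42);
    for (int& v : data) v = static_cast<int>(rng() % 256);

    time_it("unsorted (hard to predict):", data);
    std::sort(data.begin(), data.end());
    time_it("sorted (easy to predict):", data);
    return 0;
}

When the data is sorted the branch outcome becomes almost perfectly predictable and the run is typically several times faster (unless the compiler removes the branch entirely); a predictor good enough to get close to that on messy, branchy code is exactly the die area and design effort described above.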

cbn said:
Yep, I have been wondering whether an ARM design built on a high-power process would be achievable.

I am not familiar with the process technologies used for ARM cores, but I would imagine so. The pipeline is simple enough that it should be relatively easy to scale up the frequency, as long as you are willing to sacrifice power and area (i.e., bigger transistors to drive more current).

cbn said:
Or is the RAM stacked with the processor a problem?

Not sure what you mean -- you cannot easily or economically embed DRAM onto CMOS logic.

cbn said:
Certainly higher speed would be welcome for smartphones once 4G networks become more commonplace. According to this AnandTech review, we are still network-bound rather than CPU-bound for 3G surfing.

Sure -- but we also want to look forward to ARM in more than just 3G phones... e.g., 64 ARM chips in a server, each one serving a different Apache process.

Also, don't we all wish our 3G phones could do more? E.g., our iPhones running games with better graphics, or rendering web pages faster?
 

lambchops511

Senior member
Apr 12, 2005
akugami said:
The problem is that with process shrinks and improved GPU architectures, integrated GPUs become more and more powerful. Who's to say that in 3-5 years' time the average consumer won't be completely satisfied by the power of integrated graphics? Not to mention that most systems will use the on-die GPU and skip the added cost of a discrete GPU except on upper-mid-range to high-end computers. This is nVidia's fear.

The average consumer is already satisfied by integrated graphics... the NVIDIA/ATI discrete graphics market is relatively small compared to the overall chipset/GPU market. Intel has been shipping way more units than NVIDIA and ATI combined for the past decade.
 

Sahakiel

Golden Member
Oct 19, 2001
NVidia sees CUDA and HPC applications as the sole path to survival. Whether it pans out depends on luck for the most part. Developing a viable x86 alternative would require more time and money than NVidia currently has to survive. The only x86 designs that could possibly make it to market in time are the old Pentium or 486 cores that were radiation-hardened for satellites. While it'd be interesting to see one run Windows at 3 GHz or so, I don't see many possibilities other than adding a couple to the GPU die to try to run Linux or something similar using just the GPU.

At any rate, discrete GPUs won't last. They've already ceded (or will cede in the immediate future) the low-end market to integrated graphics. As integrated gets better and the market of gamers becomes smaller, sticking to discrete is a sure way to slowly suffocate. The writing has been on the wall for some time. NVidia and ATI were both fielding requests from researchers asking for more hooks to run GPGPU code long before CUDA, or even AGEIA for that matter.

Chipsets are not profitable, either. NVidia hasn't produced a competitive chipset in some time, and it's not only due to licensing. There's simply no room in the market. Both Intel and AMD produce chipsets for their own processors, and they both have more experience and lower costs than NVidia. They can produce equivalent chipsets that are cheaper and run cooler, and do it faster, since they design the CPUs those chipsets serve in the first place. When AMD was only making CPUs, NVidia could still carve out a market, but after AMD acquired ATI, NVidia basically looked to the future and found itself staring at a closing door.

If I had to take a wild guess, I would say NVidia will develop its own CPU. Not x86, not ARM, nothing that currently exists. They will continue to develop graphics chips that incorporate more general-purpose logic until they reach the point where they can run a complete OS directly off their GPU (which probably won't be called that by then).
In the meantime, NVidia will continue to push and develop CUDA. The idea is that by the time their chips can handle any general-purpose code, CUDA will be mature enough, and developers will have enough experience using it, to write more programs for it. Even so, the chip and CUDA will still be targeted primarily at applications using hundreds of processing cores. The idea is that either the programs themselves can use all those cores, or the number of programs running concurrently will add up to use them all.
As for OpenCL, it remains to be seen whether AMD and Intel will bother helping, and whether Apple has enough clout to push it forward fast enough to take the lead from CUDA. If the major manufacturers take too long to support OpenCL, it'll die a quiet death. On the other hand, if NVidia runs out of cash before it can develop CUDA to critical mass, or before that killer app appears, the whole market is basically screwed as well.

For NVidia, the interface doesn't matter. They've done PCI, AGP, and PCI-E; as long as it's an industry standard, they will use it. If they really do end up developing their own massively parallel CPU, it'll simply use whatever the current industry-standard interface is at the time (or several of them) to talk to other devices.
There's really no way Intel can incorporate a competitive gaming GPU onto the CPU die anytime soon, short of a miracle. Intel's track record with video cards is, in a nutshell, utter crap. While they occasionally manage to produce a part with decent, if not high-end, specs, their drivers inevitably cripple it. Essentially, that's an indication the market is not large enough to devote significant resources to development. At the same time, if programmers continue to extract more parallelism, Intel will continue to add more CPU cores, leaving less room for a GPU.
However, if Intel pulls a rabbit out of the hat and actually releases a viable Larrabee, then they may just manage to get a moderate GPU onto the same die.
 

IntelUser2000

Elite Member
Oct 14, 2003
Sahakiel said:
At the same time, if programmers continue to extract more parallelism, Intel will continue to add more CPU cores, leaving less room for a GPU.

The future of CPUs is heterogeneous: many simple cores plus a few more complex cores. The first step towards that is Llano/Sandy Bridge.

Theoretically, the need for a high-end add-on discrete graphics market will always exist as games become more complex. However, if Nvidia can't sell low-end discrete GPUs anymore, they will have a hard time amortizing R&D costs. In the long term, that means GPUs won't be profitable.
 

cbn

Lifer
Mar 27, 2009
lambchops511 said:
Not sure what you mean -- you cannot easily or economically embed DRAM onto CMOS logic.

I am not an IT guy or an engineer (I'm just a low-end consumer), but a few days ago I read that some ARM designs use stacked RAM.

After you asked that question, I found this iFixit article showing the Apple A4 processor teardown.

In step 2 they call the A4 a "package on package" with 3 dies.

In step 6 of the teardown they were able to show two RAM dies above the processor die.

This makes me wonder how difficult a higher-power ARM design would be to cool. Surely having two RAM dies between the CPU and the heatsink would reduce heat transfer?
 

Scali

Banned
Dec 3, 2004
I think it's nonsense, really.
Firstly, there are plenty of ways for nVidia to survive... x86 is not the ONLY way. There's a huge embedded market, and nVidia is already actively involved with ARM cores. They could also go the way of Matrox, for example, and build complete visualization/editing systems for video or medical purposes. Etc.

Secondly, even AMD is struggling in the x86 market. All the other x86 clones have long since died out. Even if nVidia DID get an x86 license (which they probably never will), that is no guarantee that nVidia could ever build anything even remotely competitive. They'd have to start from scratch.

And then I haven't even touched on the assumption that expansion slots will disappear from x86 systems... I don't see that happening, which means the rest of the article is based on nothing, really.

In short, I think this whole x86 hype is total nonsense, started by some clueless 'analysts' who just want hits on their websites.
 

PingviN

Golden Member
Nov 3, 2009
Scali said:
I think it's nonsense, really.
Firstly, there are plenty of ways for nVidia to survive... x86 is not the ONLY way. There's a huge embedded market, and nVidia is already actively involved with ARM cores. They could also go the way of Matrox, for example, and build complete visualization/editing systems for video or medical purposes. Etc.

Secondly, even AMD is struggling in the x86 market. All the other x86 clones have long since died out. Even if nVidia DID get an x86 license (which they probably never will), that is no guarantee that nVidia could ever build anything even remotely competitive. They'd have to start from scratch.

And then I haven't even touched on the assumption that expansion slots will disappear from x86 systems... I don't see that happening, which means the rest of the article is based on nothing, really.

In short, I think this whole x86 hype is total nonsense, started by some clueless 'analysts' who just want hits on their websites.

There is a huge embedded market, but there is -- and will be -- competition there, and it's a market with small margins for hardware manufacturers. Without an x86 license Nvidia will struggle in the monstrous notebook market, and losing out on notebooks and low-end OEM means losing a massive share of their profits. Nvidia will probably stay alive, but maybe as a smaller, more niche company.
 

Scali

Banned
Dec 3, 2004
PingviN said:
There is a huge embedded market, but there is -- and will be -- competition there, and it's a market with small margins for hardware manufacturers. Without an x86 license Nvidia will struggle in the monstrous notebook market, and losing out on notebooks and low-end OEM means losing a massive share of their profits. Nvidia will probably stay alive, but maybe as a smaller, more niche company.

As I say, nVidia would most probably struggle with an x86 license as well. That's probably not even an avenue worth exploring. Anyone who thinks otherwise is grossly underestimating AMD and Intel. All the other companies that underestimated AMD and Intel have long gone bust.

And they only said "long-term survival", nothing about maintaining their current size. Assuming they are going to be muscled out of the discrete market (which I don't see happening, but hey), their current size may simply be unmaintainable.
Matrox is nowhere near the company they were some 15-20 years ago, when they were at the top of the video card market. But they still survive, don't they? So they did better than, oh, say... 3dfx.
 

lambchops511

Senior member
Apr 12, 2005
cbn said:
I am not an IT guy or an engineer (I'm just a low-end consumer), but a few days ago I read that some ARM designs use stacked RAM.

After you asked that question, I found this iFixit article showing the Apple A4 processor teardown.

In step 2 they call the A4 a "package on package" with 3 dies.

In step 6 of the teardown they were able to show two RAM dies above the processor die.

This makes me wonder how difficult a higher-power ARM design would be to cool. Surely having two RAM dies between the CPU and the heatsink would reduce heat transfer?

That is just physically stacking several dies together. The advantages of doing so are mainly:

1] Space.
2] Latency: with the interconnects so much shorter, you get much lower latency, and probably a much lower required voltage on the bus (hence saving power).
 

Genx87

Lifer
Apr 8, 2002
I am not entirely convinced they need x86 to survive. The number of devices that won't be using x86 in the next decade is going to be staggering. Like I said last spring, Nvidia will be around in 10 years; it just may be that they aren't a discrete graphics company anymore, but are instead supplying HPC and mobile products. I don't see Fusion and Sandy Bridge as a threat to discrete graphics, but I do see them as a threat to the sub-$100 discrete market.
 

SunnyD

Belgian Waffler
Jan 2, 2001
Why would a peripheral and software company have any vested interest in designing and producing x86 hardware that is largely irrelevant to the functionality of its current product line?

If nvidia wanted to get into the PC business, yes, it would make sense. In fact, losing out on the Intel license really did hurt them, as their chipset/motherboard business was a large part of their revenue stream for a while. But even that was in decline, as Intel's and AMD's chipsets continually got better, to the point where nvidia's designs no longer held any advantage (including SLI).

x86 has nearly no place in the small embedded market, despite what Intel is trying to do. Nvidia can and should be happy licensing ARM and going with Tegra for their embedded needs. x86 is largely irrelevant except in terms of application compatibility, and even that is becoming moot -- just look at Apple and Google. Their application platforms have nothing to do with x86, and they are becoming more relevant to the world than Microsoft is.

Given all this, why would nvidia need x86? No reason, other than as just another market to toy with. Would it make a difference? Nope. Intel would crush them just as they did Cyrix, Centaur/IDT, SGS-Thomson, NexGen and Transmeta, with VIA being the only "token" player left standing. The best nvidia could ever hope to be is an afterthought in the x86 world, because there is only one legitimate player: Intel. (I am an AMD fan, but the only real reason AMD is still around is that Intel needs someone large enough around to avoid being a monopoly at this point; the cross-licensing agreements are just icing on the cake.) The only reason this conversation ever even comes up is nvidia's ego. I think the last year may have slapped enough sense into nvidia, though, to finally keep that ego in check.
 

SlowSpyder

Lifer
Jan 12, 2005
Genx87 said:
I am not entirely convinced they need x86 to survive. The number of devices that won't be using x86 in the next decade is going to be staggering. Like I said last spring, Nvidia will be around in 10 years; it just may be that they aren't a discrete graphics company anymore, but are instead supplying HPC and mobile products. I don't see Fusion and Sandy Bridge as a threat to discrete graphics, but I do see them as a threat to the sub-$100 discrete market.

Yeah, I think for Nvidia to survive as the company they are today, they may need to look into x86. But I see Nvidia doing fine; they'll just get into other markets. They may not be a GeForce-first company in the future, but a mobile-graphics-first company that also makes discrete cards and Tesla parts. I don't know where their biggest focus will be, but I don't think staying out of x86 would be their downfall.
 

Meghan54

Lifer
Oct 18, 2009
I've just got a question about all this.

If Nvidia wanted to enter the x86 market, it would need x86 licenses, right?

So where would they come from? Purchasing VIA doesn't guarantee the x86 licenses transfer from VIA to Nvidia. The FTC settlement with Intel spells out what happens if a current x86 licensee is sold to another company: Intel has to enter into "good faith" talks to set up new licensing of its x86 IP, that's all. There is no guarantee the IP licenses would transfer, and what Intel calls good-faith talks would probably be seen as obstructionist by Nvidia, but all Intel has to do is "try."

So, in the end, I'd almost think Nvidia buying out VIA would be a dead end... leaving Nvidia with a CPU design that's slow, old, and essentially worthless without Intel's IP licenses.