Intel to buy Nvidia with JHH as CEO???

Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

Idontcare

Elite Member
Oct 10, 1999
Well, the title of the article does start with "Crazy Rumor". How can you attack someone's credibility when they put the word "crazy" right in the title?

Look around here: just because you lead into ludicrous claims with a disclaimer doesn't mean you are going to retain your credibility amongst your peers.

That is true in every walk of life, and I spare Theo no more quarter on that than he would his peers.
 

Lepton87

Platinum Member
Jul 28, 2009
40nm to 22nm is a difference of a factor of 3.3. What fit in 500mm^2 on 40nm fits in 151mm^2 on 22nm, which is similar to IVB's size, and Intel charges far less than $500 for that.

40nm to 22nm is not one shrink; it's closer to (but not quite) two.

Ivy has 1.4B transistors, GF110 has 3B; do you think they would achieve more than 2 times the transistor density of Ivy? Anyway, it's pretty irrelevant, since GF110 won't ever see a die shrink. What I would like to see is GK110 made on a 22nm process. I wonder how big it would be; it should be profitable as a consumer card, but not with Intel's hefty margins. See Xeon Phi. I'm pretty sure it would clock much better than TSMC's 28nm version, and it would be game over for AMD's graphics department. Hell, I would like to see Tahiti made on Intel's 32nm process and see how it would clock with proper cooling. A stock clock of 1.4GHz might be possible.
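The shrink arithmetic above can be put in a quick back-of-envelope sketch (assuming ideal area scaling with the square of the feature-size ratio; real node transitions never quite achieve this, so treat the result as an upper bound):

```python
# Ideal die-area scaling between process nodes: area shrinks with the
# square of the feature-size ratio. Real processes fall short of this,
# so the numbers below are best-case estimates.

def area_scale(old_nm: float, new_nm: float) -> float:
    """Ideal area shrink factor going from old_nm to new_nm."""
    return (old_nm / new_nm) ** 2

shrink = area_scale(40, 22)   # ~3.3x
gf110_area = 500 / shrink     # a 500mm^2 40nm die, ported to 22nm
print(f"{shrink:.2f}x shrink -> {gf110_area:.0f} mm^2")  # 3.31x shrink -> 151 mm^2
```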
 

amenx

Diamond Member
Dec 17, 2004
^ Yep. Imagine Nvidia having their own dedicated fabs down the road and doing away with TSMC. They would walk all over AMD's graphics division. It would ultimately be a disaster for AMD as a whole and for end consumers overall. I hope nothing like this is ever remotely considered. But what if JHH no longer feels the same as he did years back and agrees to a merger with him not as CEO? Would it then be more likely to happen?
 

Idontcare

Elite Member
Oct 10, 1999
Would it then be more likely to happen?

The FTC allowing Intel and Nvidia to merge? Not without some HUGE concessions from Intel.

Consider the concessions Intel had to agree to in order to acquire just some of the DEC IP when DEC imploded 15 years ago.

Intel is only more dominant now than they were then; I doubt the FTC would let them be more than a 5% shareholder in Nvidia.
 

ShintaiDK

Lifer
Apr 22, 2012
Another issue is that Nvidia and Intel, in terms of corporate culture, are like oil and water.
 

MisterMac

Senior member
Sep 16, 2011
One thing it would solve, though, is Intel's fab capacity issue.

It would take time, but two years after a merger... what a monster of a dGPU chip could arrive.
 

sm625

Diamond Member
May 6, 2011
Intel would never take such a huge margin hit on their existing lines, but when it comes to new lines they might very well accept lower margins. I wish someone would do an analysis to give us an estimate of what kind of margins they would get producing 22nm dies for Nvidia; the bigger dies, like the GTX 780, maybe even the 770. My guess would be 50%, again assuming the fabs are as good as they seem to be.
 

tynopik

Diamond Member
Aug 10, 2004
It would be fascinating to see what he would do with Intel's resources.

But that's as a bystander.

I have a feeling Intel's board would find it less 'fascinating' and more 'appalling'.
 

Ferzerp

Diamond Member
Oct 12, 1999
Ivy has 1.4B transistors, GF110 has 3B; do you think they would achieve more than 2 times the transistor density of Ivy? Anyway, it's pretty irrelevant, since GF110 won't ever see a die shrink. What I would like to see is GK110 made on a 22nm process. I wonder how big it would be; it should be profitable as a consumer card, but not with Intel's hefty margins. See Xeon Phi. I'm pretty sure it would clock much better than TSMC's 28nm version, and it would be game over for AMD's graphics department. Hell, I would like to see Tahiti made on Intel's 32nm process and see how it would clock with proper cooling. A stock clock of 1.4GHz might be possible.

Perhaps IDC can provide some clarity, but different designs do achieve different densities. If I remember correctly, cache is very un-dense, as is that apparently unused space below the iGPU.

[Attached image: Ivy-Bridge_Die_Label.jpg]
 

Idontcare

Elite Member
Oct 10, 1999
Perhaps IDC can provide some clarity, but different designs do achieve different densities. If I remember correctly, cache is very un-dense, as is that apparently unused space below the iGPU.

[Attached image: Ivy-Bridge_Die_Label.jpg]

Xtor density will scale with target clockspeed within a given node. The denser the xtors, the slower the circuits they tend to form, and the lower the overall clockspeed of the circuit.

You see this across logic: very dense GPU circuits target rather low clockspeeds (comparatively, versus MPU circuits built on the same node).

You also see this in cache: L1$ will have very low xtor density versus L3$, but it will also be clocked at the same clockspeed as the logic circuits and with very low latency. L3$ will not be clocked as high, but it will be much more dense.

This is a direct result of the drive current, which within a given process node is a function of operating voltage and xtor width. The wider the xtor, the lower its effective density but the higher its drive current. Make the xtor too wide and your circuit speed decreases because of wire delay across the circuit, but your power consumption climbs because of all that current flow. Make your xtor too narrow and your circuit speed decreases because the drive current decreases, but so too does power consumption.

It is all a delicate balancing act, but at the device-physics level, when we develop the xtors themselves, we target specific clockspeeds (specific Idrives) and then push the dimensions of the SRAM cell such that we hit those targets while still producing a manufacturable SRAM cell that is reliable for the device's operating lifetime.

On a given node (just giving arbitrary numbers here, but the ratios will be about right) we might design an SRAM cell for 800MHz operation in mobile devices which has a 0.1um^2 footprint, but to scale that clockspeed to 4GHz the SRAM footprint (using the same xtors) has to balloon to 0.13 or 0.15um^2.
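As a toy illustration only: those admittedly arbitrary numbers can be dropped into a straight-line interpolation (taking the middle of the quoted 0.13-0.15um^2 range; the real footprint-vs-clockspeed relationship is set by device physics, not a straight line):

```python
# Toy linear model of SRAM cell footprint vs. target clockspeed, built
# from the illustrative numbers in the post above: ~0.10um^2 at 800MHz
# and ~0.14um^2 (mid-range of 0.13-0.15) at 4GHz. Purely a sketch.

def sram_footprint_um2(clock_mhz: float,
                       lo=(800, 0.10), hi=(4000, 0.14)) -> float:
    """Linearly interpolate cell area between the two quoted design points."""
    (f0, a0), (f1, a1) = lo, hi
    return a0 + (a1 - a0) * (clock_mhz - f0) / (f1 - f0)

print(f"{sram_footprint_um2(2400):.3f} um^2")  # midpoint clock -> 0.120 um^2
```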
 

Haserath

Senior member
Sep 12, 2010
I could see this as a great thing for the companies and the industry. The only problem is whether Huang would be reasonable once a monopoly is achieved.

Reasonable, in my opinion, would be a much higher price than most people here would find reasonable.

Reasonable would also include release time-frames.

Huang being CEO of NvIntel. That I would like to see.
 

IlllI

Diamond Member
Feb 12, 2002
AMD should have bought Nvidia and made JHH CEO. The company would probably be in a much better position if they had done that instead of buying ATI.
 

pablo87

Senior member
Nov 5, 2012
Regardless, the rumor is false, but it's a very good idea for Intel.

As a point of fact, discrete differentiates PCs from smartphones/tablets in many markets and applications; Intel should stop wasting IGP silicon and promote discrete instead.

WS and GPGPU are additive.

Intel would have to let go of NV's ARM biz to the usual suspects, and their Imagination shares too. Other than that, where's the conflict? Intel has quadrupled IGP performance and is heavily subsidizing it, and discrete is hanging on to its share quite well despite only one serious player remaining (AMD's GPU product is serious (for how long?), but the company itself is not; it's a deluge of bad reviews on Glassdoor now...).

Also, JHH as CEO is more qualified than most if not all Intel execs in the running, plus he'd be needed for the transition; two years is enough.
 

cbn

Lifer
Mar 27, 2009
Xtor density will scale with target clockspeed within a given node. The denser the xtors, the slower the circuits they tend to form, and the lower the overall clockspeed of the circuit.

You see this across logic: very dense GPU circuits target rather low clockspeeds (comparatively, versus MPU circuits built on the same node).

You also see this in cache: L1$ will have very low xtor density versus L3$, but it will also be clocked at the same clockspeed as the logic circuits and with very low latency. L3$ will not be clocked as high, but it will be much more dense.

This is a direct result of the drive current, which within a given process node is a function of operating voltage and xtor width. The wider the xtor, the lower its effective density but the higher its drive current. Make the xtor too wide and your circuit speed decreases because of wire delay across the circuit, but your power consumption climbs because of all that current flow. Make your xtor too narrow and your circuit speed decreases because the drive current decreases, but so too does power consumption.

Thanks for explaining how those trade-offs work.
 

CHADBOGA

Platinum Member
Mar 31, 2009
Intel are not going to be buying Nvidia in the near future.

Nvidia's market capitalisation is quite high for what they do, and if Intel wanted to buy them, they would need to pay a hefty premium over an already inflated price.

When you look at how little money there is to be made from discrete GPUs, it would take Intel decades, if not a century, to recover the money needed to cover a current Nvidia purchase.
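That payback claim can be made concrete with a quick sketch (the price and profit figures below are purely hypothetical, not from any filing):

```python
# Back-of-envelope payback period for an acquisition: years needed for a
# flat annual profit contribution to repay the purchase price. Both
# inputs below are made-up illustrative numbers.

def payback_years(price_usd: float, annual_profit_usd: float) -> float:
    return price_usd / annual_profit_usd

# e.g. a ~$10B price tag against ~$500M/yr of discrete-GPU profit
print(payback_years(10e9, 0.5e9))  # 20.0 years
```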
 

MisterMac

Senior member
Sep 16, 2011
What issue?
...that they're going to have increasing trouble getting enough volume on the bleeding edge to maintain the profitability of their fabs?

Having Nvidia in-house for this would certainly help them have products with enough bleeding-edge margins.

Although, Phi would definitely present a weird challenge as to what to do with Nvidia's HPC products.


@Lonbjerg:
And I just met a person who writes tons of hatemongering, without even fully trying to understand the original intent of the poster.

You keep hating on everything you see, through your own narrowed goggles.
It will do you good, I bet.
 

ShintaiDK

Lifer
Apr 22, 2012
...that they're going to have increasing trouble getting enough volume on the bleeding edge to maintain the profitability of their fabs?

Having Nvidia in-house for this would certainly help them have products with enough bleeding-edge margins.

Although, Phi would definitely present a weird challenge as to what to do with Nvidia's HPC products.

@Lonbjerg:
And I just met a person who writes tons of hatemongering, without even fully trying to understand the original intent of the poster.

You keep hating on everything you see, through your own narrowed goggles.
It will do you good, I bet.

You claim there is an issue, and then simply guess?

Lonbjerg was 100% right about you in his post.
 

Acanthus

Lifer
Aug 28, 2001
Intel are not going to be buying Nvidia in the near future.

Nvidia's market capitalisation is quite high for what they do, and if Intel wanted to buy them, they would need to pay a hefty premium over an already inflated price.

When you look at how little money there is to be made from discrete GPUs, it would take Intel decades, if not a century, to recover the money needed to cover a current Nvidia purchase.

Not that I believe the rumor at all, but you're forgetting their primary reason for a merger.

Patents.

This would allow them to use Nvidia GPUs in their CPUs, the one place AMD still has an advantage over Intel.
 

ShintaiDK

Lifer
Apr 22, 2012
Not that I believe the rumor at all, but you're forgetting their primary reason for a merger.

Patents.

This would allow them to use Nvidia GPUs in their CPUs, the one place AMD still has an advantage over Intel.

I have a feeling those patents will come from somewhere else, unless AMD spins off ATI.

But it's true, Intel's biggest issue is patents in the GPU area. However, spending $8-10B to buy some patents and a slowly dying discrete segment is just plain stupid for a 50%(?) IGP performance boost, not to mention all the other issues.

This rumour has kept coming and going for the last 7 years or so.
 

Dribble

Platinum Member
Aug 9, 2005
It's not that far-fetched. Intel has two great strengths: their fabs and their traditional x86 CPU market. Outside of those, they have a history of underperforming. Nvidia is actually pretty good in a number of those underperforming areas: mobile, HPC, graphics, software/drivers. In that way they are a pretty good match.

Then you look at JHH, who has a very good track record; despite being up against it, he has positioned his company very well. Now is a major time of change in the market, and Intel needs someone who can make the risky choices to move the company forward. That's a real problem for Intel, who, having been in such a safe place for so long, are not very good at adapting. JHH would be a very good choice for that.

The biggest problem would be antitrust regulators not allowing the merger to happen.
 

ShintaiDK

Lifer
Apr 22, 2012
I would say networking is also one of Intel's great strengths. ;)

But again, this is the same senseless rumour we have heard year after year, peaking every third year: 2006, 2009, 2012. Funnily enough, it always comes from the trashiest "tech" sites with the least reliable writers behind them, with the same attempts to excuse the rumour and find some way to justify it. It's also the same people behind the rumours every time.

And people tend to leave all the cons out and only look at the pros.

At least the "Intel is buying ATI" rumour from 2006 disappeared after AMD bought ATI.
 