News [Reuters] Nvidia eclipses Intel as most valuable U.S. chipmaker

NTMBK

Lifer
Nov 14, 2011
(Reuters) - Nvidia (NVDA.O) has overtaken Intel (INTC.O) for the first time as the most valuable U.S. chipmaker.

In a semiconductor industry milestone, Nvidia’s shares rose 2.3% in afternoon trading on Wednesday to a record $404, putting the graphics chip maker’s market capitalization at $248 billion, just above the $246 billion value of Intel, once the world’s leading chipmaker.


The AI bubble has been pretty amazing for NVidia!
 

BenSkywalker

Diamond Member
Oct 9, 1999
Based on what?

With a full node advantage they are barely edging out the competition in performance per watt, and their performance is several tiers lower. I went back and looked; we've never seen a part with a full node advantage anywhere close to that bad.

If they are targeting a lower performance tier, their performance per watt should humiliate the older-process parts; if they are close on performance per watt, their absolute performance should crush the Titan RTX. Go ahead and check full-node process advantages over the history of GPU technology: RDNA may be the poorest mainstream part ever on an engineering basis.

Now, as a consumer part it is obviously a very different story, as nVidia is coming very late to the 7nm party. So the pricing, combined with nVidia not offering anything there yet, makes it a competitive part.

Another way to look at it: how do you think the 5700 XT is going to do against a $400 30-series part? When they are both on the same process, the 5700 XT is going to look shockingly poor. Again, that is on an engineering basis, not as a consumer part.
 

Thala

Golden Member
Nov 12, 2014
With a full node advantage they are barely edging out the competition in performance per watt, and their performance is several tiers lower. I went back and looked; we've never seen a part with a full node advantage anywhere close to that bad.

Indeed. People claiming that AMD has finally reached Nvidia's efficiency with RDNA/RDNA 2 do not bother to look at what the node advantage alone brings to the table. And this is not even considering missing features like ray tracing, variable rate shading, etc.
 

BenSkywalker

Diamond Member
Oct 9, 1999
Missed this post earlier.

If they can ramp it up to anywhere close to Turing size combined with IPC and clock speed increases, we should see some great competition in the GPU space.

The 5700 XT is roughly a 220 W part already; if it scaled perfectly to 440 W, it would be roughly 30-35% faster than the Titan, and that's if nVidia didn't release anything new. For a point of reference, I used double the 5700 XT because it works out very closely to what a 300 W part with a 50% improvement (AMD's claim for RDNA 2) would look like.
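A quick back-of-the-envelope sketch of that math (the ~1.5x Titan RTX vs. 5700 XT performance figure is an assumption of mine; the 220 W, 300 W and +50% perf/W numbers are the ones used above):

```python
# Rough scaling sketch, illustrative only.
# From the post above: 5700 XT at ~220 W, a hypothetical 300 W RDNA 2 part, AMD's claimed +50% perf/W.
# Assumption (not from the post): Titan RTX is ~1.5x the performance of a 5700 XT.

XT_POWER_W = 220     # approximate 5700 XT board power
XT_PERF = 1.0        # normalize 5700 XT performance to 1.0
TITAN_PERF = 1.5     # assumed Titan RTX performance relative to the 5700 XT

# Case 1: two 5700 XTs' worth of silicon with perfect scaling (~440 W).
doubled_perf = 2 * XT_PERF
print(f"Doubled 5700 XT vs. Titan: {doubled_perf / TITAN_PERF:.2f}x")  # ~1.33x, i.e. ~30-35% faster

# Case 2: a 300 W part with a 50% perf/W improvement over the 5700 XT.
rdna2_perf = (XT_PERF / XT_POWER_W) * 1.5 * 300
print(f"300 W RDNA 2 part vs. doubled 5700 XT: {rdna2_perf / doubled_perf:.2f}x")  # ~1.02x, nearly identical
```

Under those assumptions the two estimates land within a couple of percent of each other, which is why doubling the 5700 XT works as a stand-in for the claimed 300 W RDNA 2 part.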

To hit "nVidia killer" they are going to need to do a *LOT* better than that.
 

moinmoin

Diamond Member
Jun 1, 2017
The AI bubble has been pretty amazing for NVidia!
More like the CUDA bubble finally paying off for Nvidia. It's actually amazing how old CUDA already is (and it's still unrivaled in what it offers as a package), and that only now is Nvidia approaching the point where its CUDA-driven AI business is closing in on (and about to surpass) its gaming business. I consider Nvidia eclipsing Intel as the most valuable U.S. chipmaker a development that reflects the current situation rather well.

"Peak of Inflated Expectation" has been reached by the AI echo chambers. But AI is there to stay. It's "only" a matter of quality of the data, technical proficiency and creativity of those involved wanting to use it that's holding back more widespread practical use of it. Specialized hardware is for the companies at the forefront of the current AI development, but more general AI manufacturers like Nvidia already is and AMD and Intel want to become will allow for more widespread and democratized usage of AI. And that's what will be necessary to push AI into the mainstream, beyond opaque services in the cloud.

And regarding the CUDA bubble, it will be interesting to see how long Nvidia's competitors take to supplant CUDA with equally powerful but more open development environments. AMD's ROCm is bound to get a big boost thanks to its use in the upcoming exascale supercomputers. Intel's oneAPI also seems promising.