> What were nv, mediatek, & Microsoft doing in those 90 days?

Excellent question. Sounds like something bad was discovered during the ES/QS stage, maybe forcing them to respin? Either that, or their entire production cycle is broken and inefficient. Or both!
> It apparently taped out last December
> But Charlie's leak was published only this April
> What were nv, mediatek, & Microsoft doing in those 90 days?

I mean, in 3 years, replace MediaTek with Intel too.
No? Intel replacing its iGPU IP with Nvidia's will for sure mean delayed products.
Hm. What's wrong with N1X then? Is it MediaTek's fault?
I love how you have this picture saved.
> What's the problem with the display block? Does it suck compared to MediaTek IP?

If I had to guess, they have issues integrating that NVIDIA IP, and they have bugs that make the thing unstable.
> Does it suck compared to MediaTek IP?

It just flat out sucks.
> If I had to guess, they have issues integrating that NVIDIA IP, and they have bugs that make the thing unstable.

Oh no, it sucks all-around.
> NV display cores are just kinda really bad.

I remember from the old days that Radeon had the best 2D engine, followed by Intel and then Nvidia. Still true?
> I remember from the old days that Radeon had the best 2D engine, followed by Intel and then Nvidia. Still true?

Apple's a very strong contender to AMD now (better, for the most part, really).
Starting Wednesday, Oct. 15, DGX Spark can be ordered on NVIDIA.com. Partner systems will be available from Acer, ASUS, Dell Technologies, GIGABYTE, HP, Lenovo, MSI as well as Micro Center stores in the U.S., and from NVIDIA channel partners worldwide.
If the rumors are true, this will effectively be a paper launch, since the number of units sent to retailers will be small.
NVIDIA DGX Spark Arrives for World's AI Developers
NVIDIA today announced it will start shipping NVIDIA DGX Spark™, the world's smallest AI supercomputer.
nvidianews.nvidia.com
DoA btw
video with tests
> DoA btw

Really? Why?
> There are a few clear challenges working with the GB10. Somewhat surprisingly, video output is one of those areas you would think any NVIDIA product would nail. The Spark has been challenging, to say the least. The LG OLEDs we have that are 1440p display a garbled mess out of the HDMI port if set to 1440p output in the OS. Likewise, ultra-widescreen monitors were a no-go.
>
> At idle, when we did our power measurements last week, this system was idling in the 40-45 W range. Just loading the CPU, we could get 120-130 W. Adding the GPU and other components, we could get to just under 200 W, but we did not get to 240 W. Something to keep in mind is that QSFP56 optics can use a decent amount of power.
>
> Also, in many of the AI inference workloads with LLMs, we were using 60-90 W and the system was very quiet. There is a fan running, but if you are 1-1.5 m away it is very difficult to hear, and it never hit 40 dBA when we were not stress-testing the system.
> Really? Why?

It won't be competitive. Both AMD and Nvidia missed the target with 256-bit memory. For LLMs, bandwidth is as important as compute, and the Mac Studios have 512-bit memory buses at up to 128 GB of RAM and 1024-bit at up to 512 GB of RAM.
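The bandwidth argument is easy to sanity-check with back-of-envelope math: during single-stream decode, every active weight has to stream through the memory bus once per generated token, so tokens/s is roughly bandwidth divided by model size in bytes. A minimal sketch, where the bus/transfer-rate figures and the 70 GB model size are illustrative assumptions rather than measured numbers:

```python
# Back-of-envelope estimate for memory-bandwidth-bound LLM decode:
# tokens/s ~= peak bandwidth / bytes of active weights per token.
# Bus widths, transfer rates, and model size below are assumptions
# for illustration, not benchmarks.

def bus_bandwidth_gbs(bus_width_bits: int, transfer_rate_mts: float) -> float:
    """Peak bandwidth in GB/s: (bus width in bytes) x (mega-transfers/s) / 1000."""
    return bus_width_bits / 8 * transfer_rate_mts / 1000

def decode_tokens_per_s(bandwidth_gbs: float, model_gb: float) -> float:
    """Upper bound on single-stream decode speed for a bandwidth-bound model."""
    return bandwidth_gbs / model_gb

# 256-bit LPDDR5X-8533 (GB10/Strix Halo class, assumed)
bw_256 = bus_bandwidth_gbs(256, 8533)    # ~273 GB/s
# 1024-bit LPDDR5-6400 (M3 Ultra class, assumed)
bw_1024 = bus_bandwidth_gbs(1024, 6400)  # ~819 GB/s

model_gb = 70.0  # e.g. a 70B-parameter model at ~8 bits per weight

print(round(decode_tokens_per_s(bw_256, model_gb), 1))   # ~3.9 tok/s
print(round(decode_tokens_per_s(bw_1024, model_gb), 1))  # ~11.7 tok/s
```

Same model, roughly 3x the decode speed, purely from the wider bus; compute barely enters into it at batch size 1.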