> Where did you hear this? A reliable person or not?
I guess he heard that from X accounts which are pretty unreliable.
> Anyway, does N48 top have the same mm²?
You mean 2xx mm² for N48? I'm afraid it's wayyy bigger.
> You mean 2xx mm² for N48? I'm afraid it's wayyy bigger.
Leaks have always said 240-250.
> I'm afraid it's wayyy bigger
It's a tiny mainstream peanut.
> Hardware looks good
It doesn't?
> It's a tiny mainstream peanut.
Well... at least it's better than the Chinese option. If this is lolcow, how will the Chinese and the others fare?
> It doesn't?
AD104/GB205 die area for this perf is lolcow stuff.
> If this is lolcow, how will the Chinese and the others fare?
The Chinese are just using a pretty eh IMG implementation, so that's not fair. Not really proper dGFX IP per se.
Ok, so the B580 is supposed to be the Battlemage version of the A750 but with 12GB of VRAM vs. 8GB. Intel is claiming the B580 will perform between a 4060 and a 4060 Ti, closer to 4060 performance. It sounds like a 3060 Ti might edge out the B580, and that is two generations old now. The 12GB of the B580 sounds nice and looks good on paper. N5 is not the correct node for Battlemage; Intel should have put Battlemage on their own 20A silicon.
> Nvidia can make their GPUs on any silicon and they would still be good. The 4060 (4N) had a TDP of 115W. The B570 has a TDP of 150W and the B580 is 190W. I seem to be the only one here who admires highly efficient low-power CPUs and GPUs.
N4P is 22% more energy efficient than N5, with an 11% performance uplift over N5. I know those are just numbers, but after applying 22% better power efficiency, the B570 would be 117W instead of 150W TDP. Remember the 4060 was a 115W TDP. The B580 would be 148W instead of 190W. I believe those numbers, considering the 40-series Nvidia cards were on 4N, which is about identical in performance and efficiency to N4P.
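For what it's worth, that back-of-the-envelope math checks out if you assume the 22% efficiency gain translates linearly into TDP at iso-performance (a simplification, since board power isn't all GPU):

```python
# Rough sketch: scale the Battlemage TDPs by N4P's claimed 22% efficiency
# gain over N5, assuming the saving applies linearly at iso-performance.
EFFICIENCY_GAIN = 0.22  # N4P vs. N5

for name, tdp_n5 in [("B570", 150), ("B580", 190)]:
    tdp_n4p = tdp_n5 * (1 - EFFICIENCY_GAIN)
    print(f"{name}: {tdp_n5}W on N5 -> ~{tdp_n4p:.0f}W on N4P")
# B570: 150W on N5 -> ~117W on N4P
# B580: 190W on N5 -> ~148W on N4P
```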
> N4P is 22% more energy efficient than N5, with an 11% performance uplift over N5. After applying 22% better power efficiency, the B570 would be 117W instead of 150W TDP, and the B580 would be 148W instead of 190W.
The GPU isn't the only thing on the board using power. RAM alone is using (I think?) ~10 watts on the B570. https://www.igorslab.de/en/350-watt...ned-chip-area-calculated-and-boards-compared/
> They goofed not fixing the ReBAR requirement.
It's the future (approximately a decade ago). Nothing to fix there.
> Where did you hear this? A reliable person or not?
Yes
> I guess he heard that from X accounts which are pretty unreliable.
Not true
> Ouch
Yeah, 45% bigger than AD106 with 50% moar membw for the same perf.
> Yeah, N3B carried Lunar Lake hard
BMG-G21 still has less off-chip b/w than ACM-G10, so improvements are there, but Intel PPA is still not very salvageable.
> They goofed not fixing the ReBAR requirement.
Why? Support has been there since Ryzen 3xxx and Intel 10th gen, and supposedly even many Intel 8th/9th gen on some mobos. No reason to fix it.
> It's funny to see YouTubers say Intel is bringing competition and sticking it to AMD and Nvidia.
When they say that, they're projecting hopium of Intel making NV GPUs cheaper.
> What’s the leading theory for the poor PPA? Architecture or lack of DTCO?
Their uArch is still subpar. Unlikely to be DTCO; Intel had ample time for that.
> Their uArch is still subpar. Unlikely to be DTCO; Intel had ample time for that.
But LNL's iGPU wasn't this bad…
> But LNL's iGPU wasn't this bad…
It doesn't look bad against a 2yo IP crippled by the lack of an SLC and a tiny 2MB L2. On a worse node.
> When they say that, they're projecting hopium of Intel making NV GPUs cheaper.
With the B580, Intel has a great chance against the 4060, especially overseas. I don't see why it won't sell well. Their main objective is creating a customer base that will buy their future cards. With time they should get better, especially since progress in graphics technology seems to have come to a halt. What's the last thing we saw? The impractical path tracing from Nvidia. And people forget Intel's "realtime raytracing" technology. If that actually gets implemented, both Nvidia and AMD could lose big when even Intel iGPUs are doing 30 or even 60 fps raytraced visuals.