The effect is minimal. You are talking 5% at best.
Sustained load will saturate the TDP, so you have crap battery life anyway, and people don't care about extending that by 10-20% at the cost of performance. Bursty workloads, meanwhile, come nowhere near saturating the TDP on average; even their peaks are only there 2-3% of the time.
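A rough back-of-the-envelope sketch of why, in plain Python; the TDP, idle power, duty cycle, and battery numbers are made up for illustration, not taken from any real part:

tdp_w = 28.0        # hypothetical sustained power limit
idle_w = 2.0        # hypothetical near-idle package power
duty_cycle = 0.03   # bursts saturate the chip ~3% of the time
battery_wh = 60.0   # hypothetical battery capacity

avg_w = duty_cycle * tdp_w + (1 - duty_cycle) * idle_w
print(f"average draw: {avg_w:.2f} W")               # ~2.78 W
print(f"battery life: {battery_wh / avg_w:.1f} h")  # ~21.6 h

# Cutting burst power by 20% barely moves the average:
avg_cut = duty_cycle * tdp_w * 0.8 + (1 - duty_cycle) * idle_w
print(f"with 20% lower burst power: {battery_wh / avg_cut:.1f} h")  # ~23 h

Under those (made-up) assumptions, idle power dominates, which is why shaving burst power buys so little battery life.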
AMD had copper interconnects at 0.18µm, but Intel's transistor performance was still 20-30% ahead. Checking boxes is one thing; you still need to do the work to actually do better.
Regarding 18A: roughly equal in density to N3, and ahead on performance in chips that need high performance. In low-power environments it's probably close.
Seems to me that battery life is pretty important in thin and light laptops.
18A may turn out better than you think with the inclusion of backside power delivery (BSPD).
Right, and Intel revised its Intel 18A performance estimate to 15% faster than Intel 3. Previously the claim was that Intel 20A would be 15% faster than Intel 3, with Intel 18A adding another 10% on top of Intel 20A, if I recall correctly.
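If I have those roadmap numbers right, the old and revised claims compound differently; a quick arithmetic check:

old_18a = 1.15 * 1.10   # old roadmap: 20A +15% over Intel 3, 18A +10% over 20A
new_18a = 1.15          # revised claim: 18A +15% over Intel 3
print(f"old roadmap 18A vs Intel 3: +{(old_18a - 1) * 100:.1f}%")  # +26.5%
print(f"revised 18A vs Intel 3:     +{(new_18a - 1) * 100:.1f}%")  # +15.0%

So taken at face value, the revision walks back roughly 11 points of claimed gain.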
... and this may be a problem for Intel, as they will be pitching an 18A that is roughly on par with TSMC's previous-generation N3X while TSMC's premium process will be N2.
Still, 18A will have BSPD, which provides some pretty good chip-wide efficiencies that are not reflected in the transistor specifications.
Density isn't the top target; efficiency and performance should be.
Agree; however, this philosophy hurts the desktop market in both overall performance and die-size cost. Still, it provides the greatest scalability for the data center, where the profit and growth are.
Depends on the software; some is licensed by "core" count and doesn't distinguish between physical and logical cores for licensing purposes.
Threads are not "cores". I have read the licenses of many software packages quite carefully (although it has been a few years).
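If you ever want to see the distinction on your own machine, the third-party psutil package reports both counts; a quick sketch (what a given license actually counts is, of course, whatever the license text says):

import psutil  # pip install psutil

physical = psutil.cpu_count(logical=False)  # physical cores
logical = psutil.cpu_count(logical=True)    # hardware threads (logical CPUs)
print(f"physical cores: {physical}, logical CPUs: {logical}")
# On an SMT/Hyper-Threading part, logical is typically 2x physical.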
Correct me if I'm wrong, but isn't calling Netburst a failure a case of hindsight being 20/20? This was years before I started following hardware, but I've heard that when the Netburst design was first started, many believed clock speeds would just keep increasing, so focusing on clock speed at the expense of IPC sounded like a good idea at the time development began. Only later was it discovered to be a terrible idea. Going from 2GHz to 4GHz is 2x the performance without any change to IPC, and Intel was publicly predicting they'd hit 10GHz within a few generations.
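The arithmetic behind the bet is just performance = IPC x frequency. A toy sketch with made-up IPC figures, purely to show the trade-off Netburst was counting on:

def relative_perf(ipc, ghz):
    # crude model: work per clock times clocks per second
    return ipc * ghz

netburst_plan = relative_perf(ipc=0.8, ghz=10.0)  # hypothetical: lower IPC, huge clocks
high_ipc_rival = relative_perf(ipc=1.2, ghz=3.0)  # hypothetical: higher IPC, modest clocks
print(netburst_plan / high_ipc_rival)  # ~2.2x ahead, if 10GHz had ever materialized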
Possibly. Still, I am an EE who graduated college in the '80s. There was a time when all the transistors in a chip turned fully on and fully off, and leakage was minimal when they weren't in use. Non-linear gate effects were also so small that they were ignored in most calculations. As clock speeds got higher and lithography got smaller, transistors never really turned all the way off and on anymore, and it was abundantly clear that tricks like raising clock frequency, raising core voltage, and making transistors leakier in the interest of higher clocks would incur an exponential cost in power. I think Intel simply ignored the many "voices of reason" that tried to tell them.
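The physics those voices were pointing at: dynamic switching power scales roughly as C*V^2*f, and reaching higher f on a given design usually means raising V too, so power grows much faster than clock speed. A crude sketch with illustrative constants (not a real device model, and it ignores the leakage, which only makes things worse):

def dynamic_power(c, v, f_ghz):
    # classic CMOS switching approximation: P = C * V^2 * f
    return c * v**2 * f_ghz

base = dynamic_power(c=1.0, v=1.0, f_ghz=2.0)  # 2GHz at a nominal 1.0 V
hot = dynamic_power(c=1.0, v=1.3, f_ghz=4.0)   # suppose 4GHz needs ~1.3 V
print(f"2x the clock costs {hot / base:.1f}x the power")  # ~3.4x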
Until Intel has an answer for X3D I don’t see them competing with AMD for the gaming crown anytime soon.
Not this go round anyway.
I'd say most of the extra performance of 13th/14th gen over 12th gen was due to the extra cache added, not memory performance. Also, you can't really compare Intel and AMD RAM controller 'quality', as they work in completely different modes: AMD can run in both Gear 1 and Gear 2 mode with DDR5, while Intel only does Gear 2 or Gear 4 even with Arrow Lake. Intel's controller is (or rather was) integrated into the ring bus, while AMD's is connected via an interface with inherent tradeoffs that allow it to scale better with more core clusters. And if your software is coded in a NUMA-aware fashion, you can utilize the bandwidth properly in 1:2 mode with Zen 4/5, contrary to the stigma of "AMD DDR5 bad". See the sketch below for what the gear numbers actually mean.
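To make the gear terminology concrete, a simplified sketch (illustrative only; real parts have more clock domains and knobs than this):

# Gear mode = ratio of memory-controller clock to memory clock (MCLK).
# DDR5-6000 moves 6000 MT/s at double data rate, so MCLK = 3000 MHz.
mclk_mhz = 6000 / 2

for gear in (1, 2, 4):
    uclk_mhz = mclk_mhz / gear  # controller clock in this gear
    print(f"Gear {gear}: controller at {uclk_mhz:.0f} MHz")

# Gear 1 keeps the controller synchronous with MCLK (lowest latency);
# higher gears trade latency for headroom to run faster DRAM.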
AMD focusing its memory controller design on high core counts is a good plan IMO. High-core-count data center processors are where the highest margins and most of the growth are projected.
Of course, it may hurt them in the Laptop market, but we will see.