Discussion Process shrink of Core 2 Duo?

Jul 27, 2020
16,326
10,337
106
I can't help wondering. What's keeping Intel from doing a process shrink of Core 2 Duo running at 1GHz or so instead of releasing the anemic Atom? Surely, a shrunken modern version of Core 2 Duo would beat the Atom at both power consumption and speed, right?
 

DrMrLordX

Lifer
Apr 27, 2000
21,634
10,848
136
Been thinking of downsizing the 2500K rig. I could either find a mini-ITX board for the 2500K or look into what's been happening on the low end that is super efficient. I focus so much on the high end, since that's usually what I buy, that I often forget how much the low end has improved; I usually couldn't care less about anything 2T or whatever in 2020. No one asks me to build them computers any more, and I gift my old parts these days when I upgrade, so I never look into low-end stuff.

Kinda want to put eco mode back on this 3900X. I went back to full clocks, but not before installing 4x 3,000 RPM 140mm Noctua fans, so my rig sounds like a flight simulator. Funny thing is I sleep better at night with ambient noise, and nothing says noise like that, right? I'd feel kind of stupid having the 727 on takeoff over here while sitting in eco mode. Doubt it would even boost a hair more with my DRP4, would it? May just keep it stock; kinda torn on it :p

End of the day, I think lower wattage will certainly lower the power bill, which is my goal. I love the insane idea of overkill cooling on a 24-thread chip in eco mode. Something about it perplexes me a bit.

Sadly, Intel hasn't updated Atom enough to really sell you a daily driver based on Goldmont+ or Tremont that will replace that 2500k (or a 3900x, heh). As far as overcooling an underpowered chip, basically, any gains you might make on the voltage/clockspeed curve from low temp come at a higher price power-wise thanks to physics. Plus if you run in eco mode, ambient cooling has less of an impact on temperature.
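To put rough numbers on that voltage/clockspeed point, here's a quick back-of-envelope sketch of the usual dynamic-power relationship (P ≈ C·V²·f). Every figure in it is a made-up ballpark for illustration, not measured 3900X data:

```python
# Rough illustration of the dynamic-power relationship P ~ C * V^2 * f.
# Every number here is a made-up ballpark, not measured 3900X data.

def scaled_power(base_power_w, v_ratio, f_ratio):
    """Scale a baseline power figure by (voltage ratio)^2 * frequency ratio."""
    return base_power_w * (v_ratio ** 2) * f_ratio

base = 140.0  # hypothetical package power at stock volts/clocks, in watts

# Chasing ~2% more clock usually needs a few percent more voltage:
overclocked = scaled_power(base, v_ratio=1.05, f_ratio=1.02)

# An eco-mode-style drop: noticeably lower voltage, slightly lower clocks:
eco = scaled_power(base, v_ratio=0.90, f_ratio=0.95)

print(f"stock ~{base:.0f} W, +2% clocks ~{overclocked:.0f} W, eco-ish ~{eco:.0f} W")
# The squared voltage term is why tiny clock gains from extra cooling cost
# disproportionate power, and why eco mode saves more than it gives up.
```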

It can be seen that Core i7 went from 45nm to 32nm with an additional 400M or so transistors at roughly the same die size. What if it had just been a process shrink, with the die size decreasing and the transistor count remaining more or less the same? Merom shrunk this way could have been the Atom everyone would have loved but never got.

It never happened, and we'll never know, but optical shrinks of older CPUs wouldn't bring the power savings that Intel wanted (and got) from Atom. Plus again, if you go back to Merom (which is just Conroe), there's no integrated memory controller. You have to do more than optical shrinks generation-over-generation. Eventually you have to redo the cache design, add features (like new ISA extensions, of which there have been many), integrate features from the motherboard, and so forth and so on.

I don't know the total performance and power advancements going from 45nm to 14nm off the top of my head, but a Core i7 920 (for example) optically shrunk to 14nm probably wouldn't fit in the Goldmont power envelope (10w or less, typically) and wouldn't necessarily perform that much better either. You can get low-power mobile chips from Intel today that come close to Atom's power envelope at much better performance (at the cost of larger die area), and in theory IceLake-Y would be in that department if it were broadly available. That does more to highlight how few resources have been committed to Atom than anything else.
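For a sense of scale, here's a naive geometric-shrink estimate in Python. It treats the node names as literal linear dimensions (which real processes don't follow) and starts from the commonly cited ~263 mm² Bloomfield die, so treat the outputs as best-case numbers only:

```python
# Naive geometric shrink of a 45 nm Bloomfield-class die. Node names are
# treated as literal linear dimensions, which real processes don't follow,
# so these are best-case (smallest possible) areas.

die_45nm_mm2 = 263.0  # commonly cited i7-920 (Bloomfield) die size

for node_nm in (32, 22, 14):
    scale = (node_nm / 45.0) ** 2  # ideal area scaling factor
    print(f"{node_nm} nm: ~{die_45nm_mm2 * scale:.0f} mm^2 "
          f"({scale:.0%} of the original area)")

# Real shrinks land well above these figures: I/O, analog blocks, and the
# pad ring barely scale, and "empty space"/gray silicon creeps in when an
# old floorplan is ported, which is part of why optical shrinks disappoint.
```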
 

mopardude87

Diamond Member
Oct 22, 2018
3,348
1,575
96
Sadly, Intel hasn't updated Atom enough to really sell you a daily driver based on Goldmont+ or Tremont that will replace that 2500k (or a 3900x, heh). As far as overcooling an underpowered chip, basically, any gains you might make on the voltage/clockspeed curve from low temp come at a higher price power-wise thanks to physics. Plus if you run in eco mode, ambient cooling has less of an impact on temperature.

I thought all this as well; I did overkill-cool an i5-8400T build I had for a short time. I put on a Tisis cooler, which is like a knockoff of this DRP4 but cheap, and without fans it usually stayed under 60°C. I loved the idea enough that I may try it again, though I'm not going passive on the cooler this time. The noise really does help with sleep for me; I've known others who seem to need ambient noise as well. I clock an extra hour on average versus not having it, and I fall asleep quicker too.

Check out these temps!

[attached screenshot: temperature readings]
 
  • Like
Reactions: lightmanek

mopardude87

Diamond Member
Oct 22, 2018
3,348
1,575
96
Right, but how much extra performance did you get out of the extra cooling?

I remember hitting 67°C at times prior, but the room was much warmer too. I believe I only gained about 50 MHz. I doubled the CFM of what I had going in the front, then added about 50% or so to the rear. Not an insane improvement, I think, for all the extra noise. Thinking I may slap on the low-noise adapters I've got sitting in the closet and see what difference that makes.
 
Jul 27, 2020
16,326
10,337
106
I don't know the total performance and power advancements going from 45nm to 14nm off the top of my head, but a Core i7 920 (for example) optically shrunk to 14nm probably wouldn't fit in the Goldmont power envelope (10w or less, typically) and wouldn't necessarily perform that much better either. You can get low-power mobile chips from Intel today that come close to Atom's power envelope at much better performance (at the cost of larger die area), and in theory IceLake-Y would be in that department if it were broadly available. That does more to highlight how few resources have been committed to Atom than anything else.

Don't know about that. Out of curiosity, what makes you so sure? I'm just daydreaming and I'm not an engineer but maybe you are and you work with semiconductors for a living?
 

Thunder 57

Platinum Member
Aug 19, 2007
2,675
3,801
136
Well, if you know more, please enlighten us all. I'm all ears for sure.

Where to start? C2Q was two C2D dies glued together over an FSB, and the FSB would have to go in favor of something more modern. C2Q lacks AVX and AES-NI, and possibly some newer SSE versions. It didn't have an IMC, so you'd need that. You'd also need to upgrade connectivity by adding PCIe lanes and USB 3.

Is that enough for you? I bet I could think of more but that will have to wait until tomorrow. The original in-order Atom was crap. It has gotten considerably better.
 

DrMrLordX

Lifer
Apr 27, 2000
21,634
10,848
136
Don't know about that. Out of curiosity, what makes you so sure? I'm just daydreaming and I'm not an engineer but maybe you are and you work with semiconductors for a living?

I'm a punter hobbyist. But I can maybe shed some light on the situation.

The first thing you have to take into account when dealing with optical shrinks is that specific designs aren't necessarily going to shrink perfectly. With Intel in particular, there's the risk of "empty space"/gray silicon when trying to move a design to a new node. If I had the time to look up 45nm Nehalem vs 32nm Nehalem (which showed up in some Xeons, which were very popular around here), we could look at die sizes and get an idea of exactly what would happen to Nehalem specifically in a die shrink. I think 32nm Nehalem is Westmere? Yeah, it is.


Anyway, the shrink is never perfect, and you are not going to get the full power benefits from the shrink either - typically the cited power reduction at iso-performance pertains to SRAM cells.

Secondly, you have to look at the performance benefits (at isopower) when attempting a shrink and realize that, since you're talking optical shrink, performance increase = clockspeed increase, if possible. FMax (maximum stable frequency) for the design can be limited by any number of factors, including process and cache latency. I'm assuming that shrinking Nehalem from 45nm to 14nm would possibly increase FMax if you're talking 14nm+++++++(+), since it is a high-clocking node, but if the target were 22nm then FMax might actually decrease. If the clockspeed target were only ~3 GHz or so on Intel 45nm, it's reasonable to expect that you'd get the full performance benefit at isopower (that is, higher clocks for your old design) shrinking to any of Intel's newer nodes, assuming the cache configuration can handle it. There are some desktop designs that were clock-limited by cache, such as . . . Carrizo, I think? Nothing from Intel's Core lineup comes to mind, but I'm not up on the FMax limitations of Nehalem/Sandy/Ivy/Haswell enough to say for sure.

If you really want to get a better idea of what would happen if you were to try to shrink Nehalem all the way to 14nm, I would take a look at some resources describing the power and performance improvements of each Intel node from 45nm down to 14nm, and keep in mind that some of that data may be dated due to the numerous refinements Intel has made to the 14nm node (Broadwell's 14nm is not the same as Comet Lake's 14nm). If I had to guess, something like an i7-920 on 14nm++ would be maybe a 20-30W chip and would probably clock about as high as Comet Lake. It would be small and would have lamentably poor IPC. And it would not support DDR4, which would be a bit embarrassing.
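As a rough sanity check on that 20-30W guess, here's a trivial sketch; the per-node power reductions in it are assumed ballparks for illustration, not Intel's published figures:

```python
# Back-of-envelope check on the "i7-920 on 14nm would be a 20-30 W chip"
# guess. The per-node power reductions are assumed ballparks for
# illustration, not Intel's published figures.

tdp_45nm_w = 130.0  # i7-920 rated TDP
node_steps = 3      # 45 nm -> 32 nm -> 22 nm -> 14 nm

for per_node_cut in (0.30, 0.40):  # assumed power reduction per full node
    remaining = (1.0 - per_node_cut) ** node_steps
    print(f"{per_node_cut:.0%} per node -> ~{tdp_45nm_w * remaining:.0f} W")

# Prints roughly 45 W and 28 W. Knock a little more off for lower stock
# voltages and you land near the 20-30 W guess above - still nowhere near
# Goldmont's ~10 W envelope.
```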
 
Jul 27, 2020
16,326
10,337
106
The original in-order Atom was crap.
That's why I said it was a bone-headed decision. They could easily have started from something more powerful. It's like either they didn't understand the consequences of going in-order (not possible), or it was a stupid experiment, dipping their toe in the water to see the consumer reaction to an awful-performing cheap piece of crap. I wonder if any Intel executive actually ate their own dogfood and tried to use an Atom-based laptop/netbook on a day-to-day basis. Smaller companies release awful CPUs all the time, but when Chipzilla releases something, it ships in the millions of units. Imagine the amount of silicon waste generated by that one poor decision.
 

Doug S

Platinum Member
Feb 8, 2020
2,263
3,515
136
Intel's biggest problem with Atom (at least in the past) was that it was always on a trailing edge node. The ARM SoCs in Apple and Android products were made on leading edge foundry nodes which may have been behind Intel's leading edge at the time, but they were better than the N+1 and often N+2 nodes Intel was using for Atom.

That made sense financially since the margins on Intel's laptop and server CPUs are so much higher, but it doomed them in the market. Intel was being penny wise and pound foolish by not making at least some Atoms on a leading edge node so they had a credible product that could actually compete in flagship devices. By treating Atom as an afterthought and dooming it to trailing edge nodes, all the investment they made in its development and all the marketing money they gave away to bribe OEMs to put it in products ended up wasted.

Even now it only exists because Intel doesn't want to create SKUs for their laptop line that are too cheap, knowing OEMs would buy them for their low end stuff since they are "good enough". So instead they create a separate line, and try to handicap them in various ways to limit them to specific markets. Today's Atom is a product created by MBAs, arguing they should have made different engineering decisions about it is pointless.
 

Thunder 57

Platinum Member
Aug 19, 2007
2,675
3,801
136
Intel's biggest problem with Atom (at least in the past) was that it was always on a trailing edge node. The ARM SoCs in Apple and Android products were made on leading edge foundry nodes which may have been behind Intel's leading edge at the time, but they were better than the N+1 and often N+2 nodes Intel was using for Atom.

That made sense financially since the margins on Intel's laptop and server CPUs are so much higher, but it doomed them in the market. Intel was being penny wise and pound foolish by not making at least some Atoms on a leading edge node so they had a credible product that could actually compete in flagship devices. By treating Atom as an afterthought and dooming it to trailing edge nodes, all the investment they made in its development and all the marketing money they gave away to bribe OEMs to put it in products ended up wasted.

Even now it only exists because Intel doesn't want to create SKUs for their laptop line that are too cheap, knowing OEMs would buy them for their low end stuff since they are "good enough". So instead they create a separate line, and try to handicap them in various ways to limit them to specific markets. Today's Atom is a product created by MBAs, arguing they should have made different engineering decisions about it is pointless.

I love all the MBA hate lately. I once had a boss who, no joke, wrote his name with a comma and MBA after it in emails and on LinkedIn. Pretty much nobody respected him. Pretty sure there was another one who put "(name), CISSP". Are they really that insecure? I used to work at a smaller fab full of engineers, and for the most part they were brilliant, nice people. The managers there made considerably less than most engineers, despite being their bosses.
 

Doug S

Platinum Member
Feb 8, 2020
2,263
3,515
136
I love all the MBA hate lately. I once had a boss who, no joke, wrote his name with a comma and MBA after it in emails and on LinkedIn. Pretty much nobody respected him. Pretty sure there was another one who put "(name), CISSP". Are they really that insecure? I used to work at a smaller fab full of engineers, and for the most part they were brilliant, nice people. The managers there made considerably less than most engineers, despite being their bosses.

I'm allowed to call out bad MBAs because I have an MS and an MBA.
 
Jul 27, 2020
16,326
10,337
106
I have an HP Classmate netbook that I hadn't booted in a long time. It has Windows 10 Version 1511 and is powered by a Bay Trail Celeron N2805. It's been running Windows Update for the past two hours now and is stuck at 15% in "Preparing to Install". I feel sad for any kid who has to put up with this as a primary PC.
 

Panino Manino

Senior member
Jan 28, 2017
821
1,022
136
That's why I said it was a bone-headed decision. They could easily have started from something more powerful. It's like either they didn't understand the consequences of going in-order (not possible), or it was a stupid experiment, dipping their toe in the water to see the consumer reaction to an awful-performing cheap piece of crap. I wonder if any Intel executive actually ate their own dogfood and tried to use an Atom-based laptop/netbook on a day-to-day basis. Smaller companies release awful CPUs all the time, but when Chipzilla releases something, it ships in the millions of units. Imagine the amount of silicon waste generated by that one poor decision.
"They could easily have started from something more powerful."

Makes me remember the competitor, Bobcat.
How does Puma+ compare to current Atoms? Much better or much worse?
 

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
Puma+, as it existed in the Carrizo-L products, was, while competitive in its day in its segment, not a great product. It was held back considerably by the process node it was produced on. The Intel Pentium J5005, based on Goldmont+, is easily twice as fast in multi-core as the A8-7410, which was the fastest Carrizo-L released. Both are 4-core/4-thread processors at similar clocks. The Pentium has dual-channel RAM and twice the L2.

The only advantage for Puma+ is the inclusion of AVX instructions. For the whole package, the 7410 has a better iGPU, but neither is worth writing home about.
 
Jul 27, 2020
16,326
10,337
106
After almost 12 hours, Microsoft gave up on trying to update Windows 10 to Version 1903 on the HP Classmate PC and now reports that Version 1511 is "up to date". Ha!
 

SPBHM

Diamond Member
Sep 12, 2012
5,056
409
126
After almost 12 hours, Microsoft gave up on trying to update Windows 10 to Version 1903 on the HP Classmate PC and now reports that Version 1511 is "up to date". Ha!

I've done this sort of Windows update with slower CPUs; it's often a problem with the internet being slow and the storage being limited and slow as well.
 
  • Like
Reactions: Zucker2k

Eug

Lifer
Mar 11, 2000
23,587
1,001
126
I selected the N4000 because it made it easy to compare to a C2D (dual core, tends to run over 2 GHz on those tests).
The N4000 is from 2017/2018 and is commonly found now in under-$100 devices, so there's not really a point in comparing it to higher-end hardware; there are faster CPUs from the same family of products (quad cores, higher clocks).

It's probably not a very good CPU, but it's as good as or better than a C2D that isn't clocked much higher (especially if running with the IGP).
I've been thinking more about the N4000 vs. some of the machines I have in this house and their Geekbench 5 scores (listed below as single-core/multi-core), in the context of basic entry-level usage (surfing, email, Netflix, etc.).

440/800 - 1.1/2.6 GHz dual-core Celeron N4000

A. 200/360 - 1.3 GHz dual-core Pentium SU4100 (2009 11.6" Windows 10 laptop)
B. 330/600 - 2.26 GHz dual-core Core 2 Duo P8400 (2009 13" MacBook Pro)
C. 370/1270 - dual 2.66 GHz dual-core Xeon 5150 (2006 Mac Pro)
D. 730/1570 - 1.2 GHz dual-core Core m3-7Y32 (2017 12" MacBook)
E. 770/1420 - 2.34 GHz A10 (2019 iPad 10.2")
F. 845/2300 - 2.34 GHz A10X (2017 iPad Pro 10.5")

A. The Pentium SU4100 is essentially a slow Core 2 Duo, but the thing is basically unusable. Even with an SSD and just running a single browser with no background tabs, it is so slow that I want to throw it in the garbage.

B. The P8400 is usable in 2020, but it is annoyingly slow at times, even for just basic usage. Don't try to surf on it with ads, because it bogs right down (even with 8 GB RAM and an SSD). To make it decent, you need an ad blocker. Also, you can't run an always-on virus scanner on it because the overhead is too much.

C. I haven't had a lot of time to put the 2 x dual-core Xeon 5150 (65 W) Mac Pro through its paces yet, but for basic surfing (without a virus scanner) it is very decent even with ads active. I'd be fine with this level of performance if my workplace gave me one of these for office work. BTW, I had contemplated upgrading this thing to a dual 2.0 GHz quad-core Xeon L5335 (50 W), which would probably bring its Geekbench 5 score up to around 300/1800ish (see the rough scaling sketch after this list). However, I'm thinking this may be almost pointless for a surfing machine, as it's a significant downgrade in single-core performance, and the extra 4 cores might not be as beneficial as they could be unless you're multitasking more heavily. The more logical upgrade would be to something like the 2.33 GHz E5345 (80 W) or the 2.66 GHz X5355 (120 W), but at the risk of more fan noise, esp. with the latter chip.

D. For the 2017 Core m3 MacBook, it's good for this type of stuff. I'm happy with it and have no desire to upgrade any time soon. Not blistering fast but more than fine.

E. I find the A10 iPad's performance good too. Again, not blistering fast but more than fine.

F. The A10X iPad Pro is very good. The only time I notice any real slowdowns is if I'm trying to edit video or something.
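For item C, here's one naive clock-and-core scaling sketch that lands near the 300/1800ish ballpark. It assumes perfect clock and core scaling, which real workloads won't hit, so treat it as an upper bound:

```python
# Naive clock-and-core scaling of the measured dual Xeon 5150 scores
# (370 single / 1270 multi, Geekbench 5) to a hypothetical pair of
# 2.0 GHz quad-core L5335s. Assumes perfect scaling, which real workloads
# won't achieve, so treat the results as an upper bound.

single_5150, multi_5150 = 370, 1270
clock_ratio = 2.0 / 2.66  # L5335 clock vs. 5150 clock
core_ratio = 8 / 4        # two quad-cores vs. two dual-cores

single_est = single_5150 * clock_ratio
multi_est = multi_5150 * clock_ratio * core_ratio
print(f"estimated L5335 scores: ~{single_est:.0f} single / ~{multi_est:.0f} multi")
# ~280 / ~1910: roughly the 300/1800ish ballpark, and it makes the
# single-core regression obvious.
```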

Judging by all of the above, I would have a hard time recommending the N4000 to anyone buying a new machine in 2020, unless budget was the overwhelming concern.

If we are talking about a hypothetical Core 2 class chip shrunk down to use in 2020, I think it'd need to be a much higher clocked dual-core chip, or else Core 2 Quad.

BTW, which chips have SSE4.2 support?

I would love to see the Q6600 shrunk down; I love that processor. I used one last year, in fact, and it felt just fine with W7, an SSD, a GTX 650, and 8 GB of RAM. It was in a Dell Inspiron 530 I believe, or the 520. I did the tape mod on the G0 Q6600 and got 3 GHz. Got to love that, right?
According to Geekbench 5, the dual Xeon 5150 Mac Pro above is roughly equivalent to a Core 2 Quad Q9400 at stock clocks.
 