
The Intel Atom Thread

Why is it disappointing? Haswell GT1 has 10 EUs; 16 EUs would be pretty high for this SoC. We don't have confirmation of the EU count for Cherry View/Braswell.

It is disappointing to me because those 16 EUs are going to drive up cost.

P.S. Not sure how much die size it takes up, but I would rather have an Intel LAN integrated on-die than a 16 EU iGPU for utility laptops and economy mini-desktops.
 


Braswell is low-cost, so I wouldn't worry about the price. I don't see cost issues from 16 EUs on 14nm. It's a good sign that Intel is finally taking their iGPU seriously. But once again, we don't know how many EUs CHV/Braswell gets.
 
ARM SoCs GPUs are getting better and better, so Intel needs to also keep improving their IGPs, if they want to compete.
 

Bay Trail is 102mm².

To me that already is a rather large chip for something meant to be low cost.

And looking at the following die shot, its 4 EUs already seem to take up quite a bit of die area real estate:

[Image: Bay Trail die shot (BayTrail9.png)]

14nm will allow Intel to put in more EUs, but if that number is 16, I'd imagine the die will remain fairly large.
 
Broxton should be 58mm², which is quite good for a high-end tablet+phone SoC (Apple's A7 is also 102mm²). Cherry Trail could be sub-50mm².

Source.
 

Assume your wafer costs $3200, your yields are 90%, and die size is roughly square.

This implies ~586 die/wafer and 527 good die per wafer. $3200/527 = $6. Add in a buck or two for packaging and test, and you've got a $7-$8 chip to make.

Pretty cheap chip. Sell that sucker for $20 and you've got ~60% GM. It only gets better as the chips get more integrated and as Intel optimizes its designs for density.

Really, the economics of this are pretty fantastic for Intel. Increase your wafer cost by 22% for 14nm (this is the number given @ investor meeting) but then move your die size to ~60mm^2, again roughly square, and let's call it 80% yield and what do you get?

$3904 wafer -> 764 dies per wafer. 611 good die per wafer come out of the oven. Add again $1-$2 for packaging and test, and voila, you've got a $7.50-$8.50 chip even if you assume materially worse yields.

Sell that sucker for $20 and once again, you've got high 50% GMs. Can't wait to hear how the "Intel sux" brigade wants to spin that ;-)
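The arithmetic in the post above can be sketched directly. A minimal script; the gross die-per-wafer counts and yields are taken from the post, not re-derived from wafer geometry:

```python
def cost_per_chip(wafer_cost, gross_dies, yield_rate, package_test=1.5):
    """Wafer cost spread over good dies, plus a packaging/test adder."""
    good_dies = int(gross_dies * yield_rate)
    return wafer_cost / good_dies + package_test

def gross_margin(price, cost):
    """Fraction of the selling price left after manufacturing cost."""
    return (price - cost) / price

# 22nm, ~102mm^2 die: $3200 wafer, ~586 gross dies, 90% yield -> 527 good dies
cost_22 = cost_per_chip(3200, 586, 0.90)
# 14nm, ~60mm^2 die: wafer cost +22%, ~764 gross dies, assume 80% yield
cost_14 = cost_per_chip(3200 * 1.22, 764, 0.80)
print(f"22nm: ${cost_22:.2f}/chip, GM at $20: {gross_margin(20, cost_22):.0%}")
print(f"14nm: ${cost_14:.2f}/chip, GM at $20: {gross_margin(20, cost_14):.0%}")
```

Both cases land in the $7.50-$8.50 range at a $20 selling price, which is the ~60% gross margin the post describes.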
 
As I understand it, a lot of pre-existing x86 software runs fine without a large iGPU.

Yes.

But:

1) pretty tablet OSes use a lot more GPU than WinXP did, with all their swiping and stuff. And the screens are crazy-highres, which means more pixels to push. (1080p displays or larger are typical on "nice" tablets.)

2) primary things tablets and phones are used for are web browsing, multimedia playback, and gaming. Two of those three things benefit from some GPU oomph behind them.

3) Performance in the GPU space has been increasing faster than in CPUs.

We're talking about a mobile SoC that hasn't even come out yet - it's gotta be the chip the market wants a year or two from now, not necessarily the chip that would be best price/perf for running a Win7 desktop now.
 
As long as Braswell doesn't have fewer EUs than Haswell GT1, it should be fine. It will have a faster CPU and iGPU than anything from ARM or AMD.
 

Nice, I was at the second line wondering who was doing this extensive calculation, and obviously it was you 😛.
 

Ayup, all indications are that Intel's 22nm -> 14nm scaling is quite impressive. Just unfortunate that we're likely still over 3 months from hard evidence of such in the form of products 🙁 But once either Broadwell or Cherry Trail does come along, we'll get an idea of just how good 14nm actually is (Intel still hasn't given any details of consequence, no?) as well as how their Gen8 graphics performs - it certainly has the potential to be exciting.
 
Yesterday I read an article (The Status of Moore's Law: It's Complicated) that states that the scaling of 22nm (from 32nm) was limited by the fact that Intel uses single patterning for the metal wires, while 20nm will use double patterning. So the metal layer pitch will be 80nm vs. 64nm, which is why Intel's 22nm isn't as dense as 20nm:

[Image: process density comparison chart (8857531-1393813792762722-ProfG.png)]


So that explains why Intel is aggressive on scaling at 14nm, going to a density of what TSMC would've called 16nm if they didn't take a scaling pause to implement FinFET.
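Back-of-the-envelope, the pitch gap alone accounts for most of that density difference: wiring-limited area scales roughly with the square of the metal pitch. A rough sketch, ignoring all other design-rule differences:

```python
def relative_density(pitch_a_nm, pitch_b_nm):
    """Approximate density advantage of pitch_b over pitch_a,
    assuming wiring-limited area scales with pitch squared."""
    return (pitch_a_nm / pitch_b_nm) ** 2

# Intel 22nm (80nm metal pitch) vs. foundry 20nm (64nm metal pitch)
advantage = relative_density(80, 64)
print(f"64nm pitch is ~{advantage:.2f}x denser than 80nm pitch")
```

That works out to roughly a 1.56x density advantage for the 64nm pitch, consistent with 22nm landing between foundry 28nm and 20nm density.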
 

TSMC's 20nm and 16FF BEOL will have a 64nm M1 pitch. Intel is likely to offer a more aggressive M1 pitch @ 14nm, given that this is the layer these numbers are referring to.
 
And 14nm will be exciting for yet another reason. Dennard scaling ended around 120nm; since then, transistors haven't shrunk as much per node. It'll be interesting to see if (14nm) Tri-Gate can sort of revive Dennard scaling, as this picture suggests:

[Image: transistor scaling trend chart (lithot1.jpg)]

Source.
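For reference, the Dennard-scaling ideal the post alludes to can be sketched numerically: scale all linear dimensions and voltage by a factor k < 1, and dynamic power per device falls as k² while area also falls as k², leaving power density constant. A textbook sketch, not Intel data:

```python
def dennard_scale(k, voltage, capacitance, frequency, area):
    """Classical Dennard scaling by linear factor k (< 1):
    V and C scale by k; gate delay scales by k, so frequency scales by 1/k."""
    v, c, f, a = voltage * k, capacitance * k, frequency / k, area * k**2
    power = c * v**2 * f  # dynamic power ~ C * V^2 * f, scales as k^2
    return {"power": power, "area": a, "power_density": power / a}

base = dennard_scale(1.0, 1.0, 1.0, 1.0, 1.0)
scaled = dennard_scale(0.7, 1.0, 1.0, 1.0, 1.0)
# power and area both shrink as k^2, so power density stays flat
```

The breakdown of this regime (voltage stopped scaling with dimensions) is why power density, not density itself, became the limiter in the nodes the post discusses.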
 
Huh, well with Qualcomm's announcement today it'd appear as though Cherry Trail's primary competition from them is 'only' going to be the Snapdragon 805. Then it'll be either the A15 or Denver based K1 from NVIDIA, most likely also still on the 28nm HPM process. And lastly Apple, which may well be the only SoC this year to be made on TSMC's 20nm process.

On a side note, quite surprising to see Qualcomm using ARM-designed cores. Makes me wonder if they're having issues with their own design and this is the backup plan.
 
@Khato - I don't think it's design issues. I think it's the quicker way to release a 64-bit high-end SoC instead of waiting for their own design to be completed.
 
On a side note, quite surprising to see Qualcomm using ARM-designed cores. Makes me wonder if they're having issues with their own design and this is the backup plan.
My bet is that they didn't see 64-bit from Apple (and Intel) coming that quickly.
 
Yes.

But:

1) pretty tablet OSes use a lot more GPU than WinXP did, with all their swiping and stuff. And the screens are crazy-highres, which means more pixels to push. (1080p displays or larger are typical on "nice" tablets.)

I did find this video of a Windows 1080p Tablet swiping back and forth:

http://www.youtube.com/watch?feature=player_detailpage&v=h8d6J-Mg9zI#t=53

Although I'd hardly call the swiping in the video a complete demonstration, it did look pretty good to me.

2) primary things tablets and phones are used for are web browsing, multimedia playback, and gaming. Two of those three things benefit from some GPU oomph behind them.

Here is a video of a Bay Trail Z3740 playing a 1080p YouTube video:

http://www.youtube.com/watch?feature=player_detailpage&v=p-7IAArxKHU#t=57

So for a low-cost, non-gaming chip aimed at 1080p-level performance, it doesn't seem that many EUs are needed... and that is for the low-TDP tablet chip. A low-cost laptop or desktop chip would have even higher GPU clocks with the same 4 EUs... and better performance.


Performance in the GPU space has been increasing faster than in CPUs.

That is true, but it doesn't mean Intel needs to follow this trend in all segments. In fact, I have to wonder if making the 14nm chip even cheaper to buy (with fewer than 16 EUs and maybe even dual core) would be a better choice in the long run, provided that SATA/PCI-E remained for budget laptops and desktops.

We're talking about a mobile SoC that hasn't even come out yet - it's gotta be the chip the market wants a year or two from now, not necessarily the chip that would be best price/perf for running a Win7 desktop now.

Another thing to consider is that screen resolutions are rising, but how soon can we expect to see 4K (or even 1440p) for the average budget user Atom is aimed at? I'm thinking 1080p will be around for quite a while for frugal users (especially on desktop, and probably low-cost laptops too).
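The resolution point is easy to quantify: GPU pixel-pushing work grows with raw pixel count, and the steps above 1080p are steep. Simple arithmetic, nothing vendor-specific:

```python
RESOLUTIONS = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}

def pixels(name):
    """Total pixel count for a named resolution."""
    w, h = RESOLUTIONS[name]
    return w * h

base = pixels("1080p")  # 1920 * 1080 = 2,073,600 pixels
for name in RESOLUTIONS:
    print(f"{name}: {pixels(name):,} pixels ({pixels(name) / base:.2f}x 1080p)")
```

1440p is roughly 1.78x the pixels of 1080p, and 4K is exactly 4x, so a GPU sized for 1080p has real headroom questions at higher resolutions.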
 

I'm not the type of person to bet against Intel after seeing what they did with Conroe, what they did with Haswell (that mobile battery life improvement was huge), and what they'll do to revolutionize the ultramobile sector (phone/tablet) with new chips after Bay Trail.

If I were a betting man, I'd pick up some Intel stock, as I think they'll begin to gain some traction when they land their first major phone deal.
 
@Khato - I don't think it's design issues. I think it's the quicker way to release a 64-bit high-end SoC instead of waiting for their own design to be completed.

+1

It's a hedge in case the market starts demanding 64-bit before Qualcomm comes to market with their own 64-bit cores. These will still be decent chips, since Qualcomm will have competitive graphics and best-in-class LTE, but they would definitely take a margin hit with these parts, as they don't have much to distinguish themselves from the competition, especially NVIDIA & MediaTek.
 
@Khato - I don't think it's design issues. I think it's the quicker way to release a 64-bit high-end SoC instead of waiting for their own design to be completed.

My bet is that they didn't see 64-bit Apple coming that quickly along with Intel.

Possible, but doubtful. No one expected an ARMv8 design in 2013, but everyone has been expecting them to come out in 2014/2015 since ARM's announcement to that effect back in 2H 2012. It certainly could be the case that they needed more design time, but that'd be somewhat peculiar given how effective their CPU core design team has previously been. Which would leave the possibilities that they're having issues with their custom core and going to the backup plan, or that they're just going to use stock ARM cores from now on and have shifted their CPU core design team to work on graphics too. Who knows?

Regardless, it definitely makes for a nice opportunity for Intel and/or NVIDIA. Though especially Intel as it looks like it just might have 14nm out on Cherry Trail before any non-Apple SoC gets out on 20nm.
 