
And Intel tries to enter the low-end...


bononos

Diamond Member
Aug 21, 2011
3,938
190
106
Figure out the group psychology of the investor and you could control the equities market.

I don't get the thinking either, to me a buck of profit is a buck of profit, but investors care about the "quality" of the profit. Bunch of nonsense.

That said, nonsense or not it is a way of business management that has resulted in a very different outcome for Intel versus AMD. So there may just be something to it.

Yeah, but there is a difference in ROI, and institutional investors want to feel that Intel is milking all that it can on easier pickings.

But I do agree that investors are rationally irrational and can kill a good company by rewarding ultimately bad decisions, like pinching off R&D and kicking out good staff in the name of profit.
 

podspi

Golden Member
Jan 11, 2011
1,982
102
106
It'll be interesting to see how aggressive they are in power management, i.e., what the average level of power consumption is.

I agree that it doesn't look super-impressive, given that at best it matches what has already been out on the market for a while now (in power consumption), but Intel is definitely catching up w/ ARM in terms of perf/W.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I agree that it doesn't look super-impressive, given that at best it matches what has already been out on the market for a while now (in power consumption), but Intel is definitely catching up w/ ARM in terms of perf/W.

Yes... the article is comparing a future Atom SoC (built on 32nm) to current ARM offerings (built on 40nm/45nm).

You just have to wonder what is going to happen in 2013?

According to this slide, Intel may actually be going larger for the future Atom...

[Attached slide: clipboard08pw.jpg]


A few guesses:

1. We are seeing dual cores (for 2013 onward) being compared to single cores (2011 & 2012).

2. Assuming the dual core scales 100% in the SPEC_rate benchmark, we are looking at a 50% increase in per-core performance comparing 2013 to 2012 (and assuming the scale of the y-axis really does start at 0); rough arithmetic below.
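A rough worked example (the numbers here are just eyeballed from the slide and normalized, not anything Intel has published):

score_2012 = 1.0              # 2012 part, assumed single core (guess #1 above)
score_2013 = 3.0              # 2013 bar, roughly 3x by eye (later posts estimate ~2.5-3x)
cores_2013 = 2                # guess: dual core with ~100% SPEC_rate scaling
per_core_gain = (score_2013 / cores_2013) / score_2012 - 1
print(f"{per_core_gain:.0%}")   # ~50% per-core increase, 2013 vs 2012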

A few questions:

1. Will the 2013 atoms use a special low leakage FinFET process tech specifically made for smartphone applications? (to help reduce the power consumption of these *possibly* larger atom cores)

2. Will Intel employ some type of "big.LITTLE" scheme for their x86? Or maybe an additional downclocked single core built on super low leakage silicon (like the Cortex A9 companion core found in Tegra 3) that allows a "big.LITTLE" type effect?
 
Last edited:

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
A few guesses:
1. We are seeing dual cores (for 2013 onward) being compared to single cores (2011 & 2012).

2011 is likely referring to Oak Trail, which is a single core, and 2012 is a dual core part. Notice the gap between 2011 and 2012 is pretty big as well; it's around 2.5x. That means a 2013 part will end up being a quad core chip. Of course it's not all IPC; the clock is increasing too.

1. Will the 2013 atoms use a special low leakage FinFET process tech specifically made for smartphone applications? (to help reduce the power consumption of these *possibly* larger atom cores)

2. Will Intel employ some type of "big.LITTLE" scheme for their x86? Or maybe an additional downclocked single core built on super low leakage silicon (like the Cortex A9 companion core found in Tegra 3) that allows a "big.LITTLE" type effect?

1. The 2012 Medfield Atom will already use a low-leakage 32nm process. Why wouldn't future generations?

2. Nvidia is going for a mini-core strategy; Qualcomm is not. On my 2600K, when running some really low-demand CPU applications, the CPU goes back to lower clocks so fast that most frequency monitor programs can't detect it.

I doubt the benefits of the mini-core approach. Why? Because if the big core executes instructions fast enough and returns to idle quickly, it might use less average power over that period than something slower that needs to spend more time active versus idle. Not only that, the application and OS would have to be optimized for that scenario.
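To put illustrative (completely made-up) numbers on that race-to-idle argument, treating energy as power times time:

# Hypothetical figures only: a fast "big" core vs a slow low-power core
# finishing the same unit of work inside a 1-second window.
idle_w = 0.05                          # both cores power-gate down to ~nothing at idle

def energy(active_w, speed, work=1.0, window=1.0):
    busy = work / speed                # time spent active
    return active_w * busy + idle_w * (window - busy)

print(energy(active_w=1.0, speed=4.0))   # big core:   ~0.29 (races to idle)
print(energy(active_w=0.4, speed=1.0))   # small core: ~0.40 (stays active longer)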
 

thescreensavers

Diamond Member
Aug 3, 2005
9,916
2
81
If battery is your concern, then Intel Medfield is not a viable choice.


Most of the battery is used by the screen, then the radio: GSM/UMTS, WiFi, and GPS.

Having them integrated in the CPU does not cancel their power consumption, but helps A LOT.

Fixed. Lower powered screens will greatly improve battery life.
 

Khato

Golden Member
Jul 15, 2001
1,288
367
136
You just have to wonder what is going to happen in 2013?

We already know what's going to happen in 2013: Silvermont - http://www.anandtech.com/show/4333/intels-silvermont-a-new-atom-architecture

My guess is that the difference between the 2011 and 2012 marks is indeed the availability of a dual core variant. Why? Because I doubt that there's a core doubling between 2012 and 2013; if there were, there'd only be a ~1.25x per-core performance gain with the new architecture, which is way too low. Sure, a 2.5x gain in per-core performance is pretty huge... but considering how anemic the current Bonnell architecture is, I can easily believe it.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Sure, a 2.5x gain in per-core performance is pretty huge... but considering how anemic the current Bonnell architecture is, I can easily believe it.

LOL. Don't be ridiculous. It's not 2.5x, it's almost 3x, and that would put performance per core at 40-50% better along with 2x the cores.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
How anyone can discount Intel's efforts in this space is beyond me.

I'm going to wager Qualcomm, Nvidia and the rest of the folks making hardware for mobiles are all watching Intel's foray into this area with trepidation.

Intel comes into everything with the advantage of better process technology, proven success, the best minds in the business and boatloads more money than anyone else.

Since for all but PC power users mobile is the future, Intel will push in hard and will likely dominate.
 

tommo123

Platinum Member
Sep 25, 2005
2,617
48
91
^agreed.

(calling IDC?)

Couldn't Intel, in theory, press ahead with a smaller node with possibly more flaws, since the dies would be so small that a lot would still be salvageable, to press the tech advantage even more? E.g., 14nm (or whatever) in 2013/14, where it's got too many flaws to get decent yields from your average Intel CPUs but tiny SoCs would have decent enough yields?
 

dealcorn

Senior member
May 28, 2011
247
4
76
I believe there have been prior discussions/speculation about Intel moving to an out of order core for Atom at 22 nm and the performance benefits that may bring. Intel does know how to build a pretty quick core but they have not spilled the beans on how much good stuff they put into Atom 22 nm. When the show starts, we will find out how accurate the slide is.
 

jpiniero

Lifer
Oct 1, 2010
16,839
7,284
136
I believe there have been prior discussions/speculation about Intel moving to an out of order core for Atom at 22 nm and the performance benefits that may bring. Intel does know how to build a pretty quick core but they have not spilled the beans on how much good stuff they put into Atom 22 nm. When the show starts, we will find out how accurate the slide is.

The impression that I got from Intel's IDF presentation was that the 2013 Atom is going to be a SoC based upon Haswell. Intel hasn't taken the market seriously because the margins are so lousy, so we won't really know what Intel is capable of until then at least.

Whether they can get design wins is another story.

Intel comes into everything with the advantage of better process technology, proven success, the best minds in the business and boatloads more money than anyone else.

ARM is designed around low power and high efficiency, not to mention low-margin chips. x86... is not. I do wonder if it would be possible/useful for Intel to develop an x86-ish chip which trades backwards compatibility for power consumption.
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
ARM is designed around low power and high efficiency, not to mention low-margin chips. x86... is not. I do wonder if it would be possible/useful for Intel to develop an x86-ish chip which trades backwards compatibility for power consumption.

They certainly can. Prior to the 386EP, Intel made a 386 processor that booted directly to protected mode. So they certainly could remove features as they wanted.
 

Khato

Golden Member
Jul 15, 2001
1,288
367
136
LOL. Don't be ridiculous. It's not 2.5x, it's almost 3x, and that would put performance per core at 40-50% better along with 2x the cores.

Quite correct. I'd not only failed at the simple visual comparison, but was also looking at the gains due to hyperthreading on a new architecture incorrectly.

All signs point to 2013 being a new architecture, most likely out of order. With that shift, the gains from hyperthreading on specint_rate will go down. Combined with the fact that doubling the cores looks to result in a 1.6-1.7x gain in specint_rate score (comparing an E8400 to a Q9650), there's then enough room in that ~3x performance gain to allow for doubling the cores and getting the kind of single-thread performance increase I'd expect. Say 1.7x from doubling the number of cores, 0.85x from hyperthreading being able to contribute far less to an OoO architecture, and then you have a 2x gain in single threaded performance. Which is about where I'd expect given Brazos single threaded performance.
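As a quick sanity check on that decomposition (every factor below is the estimate from the text above, nothing official):

core_scaling = 1.7    # doubling cores gives ~1.6-1.7x on specint_rate (E8400 vs Q9650)
ht_factor    = 0.85   # hyperthreading assumed to contribute less on an OoO core
st_gain      = 2.0    # guessed single threaded improvement for the new architecture
print(core_scaling * ht_factor * st_gain)   # ~2.9x, close to the ~3x gap on the slide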

All that said, I'll be quite surprised if Intel goes to a quad core so soon in the tablet space. The more likely explanation is a bit of creativity in the slide - who knows what exactly the 2011 performance mark is indicative of? It could easily be a 600MHz variant that was being used for an in-house development platform. (My favorite example of a deceptively creative Intel slide was demonstrating future integrated graphics performance compared to an AMD 6970... with a fine-print note stating that the 6970 scores were scaled down to 100W of power usage.)
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
Wonder how a single HT-enabled Ivy Bridge core would look compared to Atom in the same power envelope. With their 22nm process, it seems possible that a 1-1.5GHz low-voltage solo IB would have impressively low power consumption and great performance. But would they be willing to sell it at a price that would see it in $200-400 tablets and notebooks?

That's the main thing about the Atom brand: they created it as a container for playing in the lower-profit-margin areas of the CPU/SoC market.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
2. Nvidia is going for a mini-core strategy; Qualcomm is not. On my 2600K, when running some really low-demand CPU applications, the CPU goes back to lower clocks so fast that most frequency monitor programs can't detect it.

I doubt the benefits of the mini-core approach. Why? Because if the big core executes instructions fast enough and returns to idle quickly, it might use less average power over that period than something slower that needs to spend more time active versus idle. Not only that, the application and OS would have to be optimized for that scenario.

As I understand the situation, Nvidia uses the "companion core" strictly for idle conditions (e.g., the faster-clocked Cortex A9s on the higher-leakage process "race to idle", then the system switches over to the Cortex A9 "companion core" on low-leakage silicon to lower idle power consumption).

"big.LITTLE" shares some similarities with "companion core" in that the system also switches over to Cortex A7 in order to lower idle power consumption. The big difference (as I understand it) between "big,LITTLE" and Nvidia's "companion core" is that ARM's reference design supplies two Cortex A7s so that in some cases light workloads can be done on the smaller cores.

Therefore I wonder whether these future Atom CPUs will have sufficiently low idle power consumption to compete with either Nvidia's "companion core" or the small CPU core in a "big.LITTLE" configuration. Or will Intel have to implement their own small CPU core in order to keep up with any move by ARM towards larger CPU cores in small-form-factor devices?
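A toy sketch of the policy difference as described above (purely hypothetical thresholds, nothing like NVIDIA's or ARM's actual switching logic):

def pick_core(load, scheme):
    # Invented thresholds for illustration only.
    if scheme == "companion":        # Tegra 3 style: little core only near idle
        return "little" if load < 0.05 else "big"
    if scheme == "big.LITTLE":       # ARM style: light workloads can also run on the little cores
        return "little" if load < 0.30 else "big"
    raise ValueError(scheme)

for load in (0.01, 0.20, 0.80):
    print(load, pick_core(load, "companion"), pick_core(load, "big.LITTLE"))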
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
ARM is designed around low power and high efficiency, not to mention low-margin chips. x86... is not. I do wonder if it would be possible/useful for Intel to develop an x86-ish chip which trades backwards compatibility for power consumption.

I don't know all the details, but I heard Atom and the Core series were not ISA-identical.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
(My favorite example of a deceptively creative Intel slide was demonstrating future integrated graphics performance compared to an AMD 6970... with a fine-print note stating that the 6970 scores were scaled down to 100W of power usage.)

What is deceptive in that quote? I suppose Intel could have used 125 watts as a kind of rub on AMD, as Intel will have no desktop chips in the segment we're discussing. IB is 77 watts, so Intel told it like they plan it. How it turns out is anyone's guess; the smart money would go on Intel by a nose. Intel simply stated that before AMD can scale the performance of a 6970 down into a Fusion product, they have to stay inside the power envelope.
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
Wondering what Intel is doing behind the curtain regarding development of an x86 interface to the GPU. They obviously haven't wanted to waste their Larrabee R&D, a la Knights Corner. It seems AMD would also be at least toying with a similar x86-izing of a GCN+ arch.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
...architecture, and then you have a 2x gain in single threaded performance. Which is about where I'd expect given Brazos single threaded performance.

The gap is not that big. It's around 40-60%.

http://www.xbitlabs.com/articles/cpu/display/amd-e-350_9.html
http://www.anandtech.com/bench/Product/110?vs=328

In those results the iTunes benchmark only uses 2 threads, while all the chips are dual cores (meaning Hyperthreading is useless). There it's showing 45% gains. If Brazos were indeed 2x faster, the gap would be far bigger elsewhere too.

(My favorite example of a deceptively creative Intel slide was demonstrating future integrated graphics performance compared to an AMD 6970... with a fine-print note stating that the 6970 scores were scaled down to 100W of power usage.)

That slide explicitly mentions "Tablets" and the fine print says "compared to second generation Tablets as baseline". I'd take first gen as Menlow and second gen as Atom Z670.

As I understand the situation, Nvidia uses the "companion core" strictly for idle conditions

http://www.anandtech.com/show/5072/nvidias-tegra-3-launched-architecture-revealed

Anand's own article says "lightly loaded scenarios", which isn't idle. What would be the point of getting a core just to idle when technologies like power gating completely turn off cores anyway?
 
Last edited:

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
^agreed.

(calling IDC?)

Couldn't Intel, in theory, press ahead with a smaller node with possibly more flaws, since the dies would be so small that a lot would still be salvageable, to press the tech advantage even more? E.g., 14nm (or whatever) in 2013/14, where it's got too many flaws to get decent yields from your average Intel CPUs but tiny SoCs would have decent enough yields?

Yep, definitely. That's actually how a lot of IDMs manage their timeline for releasing a new node to manufacturing.

Those IDMs will do a conditional release based on the yields of small die-size designs (the 15-20mm^2 stuff), and in parallel with ramping that device to volume manufacturing they work on the yields (the D0 stuff), which in turn enables manufacturing of the larger-die chips.
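A quick sketch of why the small-die-first approach works, using the simple Poisson yield model Y = exp(-A*D0) with an invented defect density:

import math

d0 = 0.5                                   # defects per cm^2 on an immature node (made up)

def poisson_yield(area_mm2, d0=d0):
    return math.exp(-(area_mm2 / 100.0) * d0)   # area converted from mm^2 to cm^2

print(poisson_yield(20))    # ~0.90: a ~20 mm^2 SoC still yields fine
print(poisson_yield(300))   # ~0.22: a 300 mm^2 die is mostly scrap at the same D0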

At TI there was probably a good 9-12 months lag between the leading edge of a node release (on the coattails of qualifying a small 10-15mm^2 chip for production) and the time when we attempted to put large chips (>300mm^2, the SUN chips) into production on the same node.

The bigger issue though, the gating issue bar none, in releasing a process node is not the economic kind (not the yields) but rather the technical kind (specifically the reliability). The last year, year 4 of 4, allocated towards developing a new node is spent attempting to get yields up while at the same time attempting to tune the process integration so as to hit the reliability specs.

If you can't hit the reliability specs then it doesn't matter whether you are making a 5mm^2 chip or a 500mm^2 chip, regardless the parametric or functional yields of those chips, the product is unsellable.

And so that is the fatal flaw in your proposed line of thinking: it's not enough to just say you are going to rush a newer node into production while limiting its use to the smallest of small chips so you can outmaneuver the numbers game that is D0 and functional yield.

You still have to deal with the intrinsic reliability of your process integration, and since that tends to be the bigger issue from day zero, the chances of your new node having no reliability issues but still having a functional yield issue are pretty slim. (It takes a TSMC to pull that off :D ;))

So you can still prioritize the timeline to accommodate smaller die sooner and larger die later, but the process technology itself needs to have already been optimized to meet the reliability spec, and that alone takes the lion's share of R&D efforts in the last year of a node's development.
 
Last edited:

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
But they don't have all the instructions that the other one has.

Yeah, really no two successive generations of microarchitectures contain the same ISA, because usually the ISA is expanded every time the microarchitecture is changed.

Sandy Bridge and Ivy Bridge won't be the same ISA, and certainly Medfield and Sandy Bridge aren't going to be either.

(I know you know, am just expanding on your post for Lonbjerg's benefit)
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
http://www.anandtech.com/show/5072/nvidias-tegra-3-launched-architecture-revealed

Anand's own article says "lightly loaded scenarios", which isn't idle. What would be the point of getting a core just to idle when technologies like power gating completely turn off cores anyway?

Doesn't at least one CPU core need to be on at all times? In other words, shutting off all CPU cores is not possible, right?

If true, then having a CPU core dedicated to background tasks could save power... if that dedicated CPU core uses less power than one of the main CPU cores.