Intel Quark architecture, "1/10th" the power use of Atom


USER8000

Golden Member
Jun 23, 2012
1,542
780
136
While Saltwell is dual-issue and in-order like the original Pentium, the similarities end there. It's a very different design with no common ancestry.

Quark does sound Pentium derived due to the mention of the ISA being "Pentium-compatible." There were instructions added to Pentium Pro (P6) like cmov, why else would they draw the line at Pentium ISA if there was no design relationship?

I wonder if it is derived from the cores found in MIC? AFAIK, they were based in some part on the Pentium design.
 

Cogman

Lifer
Sep 19, 2000
10,286
145
106
It's about time one of the big players released a fully synthesizable chip!

May this lead to better tools than the Xilinx garbage everyone has had to put up with for the past umpteen years.
 

Pilum

Member
Aug 27, 2012
182
3
81
Quark does sound Pentium derived due to the mention of the ISA being "Pentium-compatible." There were instructions added to Pentium Pro (P6) like cmov, why else would they draw the line at Pentium ISA if there was no design relationship?
This could simply imply x87 as being always present, or refer to the other new Pentium instructions: CPUID, RDMSR/WRMSR, RDTSC, CMPXCHG8B and RDPMC if you include the Pentium-MMX. I'd expect any new x86 to support CPUID and the TSC, and very likely at least some MSRs. Not so sure about PMCs; this seems a bit of overkill for an embedded CPU.

But I don't think you can deduce anything about the microarchitecture from the fact that Quark will support extremely basic x86 functionality.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
I'm kind of wondering if it's a variant of the Pentium-derived core used in the Xeon Phi.

I doubt it's literally the case, because Xeon Phi supports a subset of x86-64 without SSE2, so even ignoring the Phi-specific vector instruction set it's still at a baseline much higher than Pentium-level.

I too was wondering if this had any connection to Claremont, but if that were the case you'd think NTV would have been dropped at some point in the description...

This could simply imply x87 as being always present, or refer to the other new Pentium instructions: CPUID, RDMSR/WRMSR, RDTSC, CMPXCHG8B and RDPMC if you include the Pentium-MMX. I'd expect any new x86 to support CPUID and the TSC, and very likely at least some MSRs. Not so sure about PMCs; this seems a bit of overkill for an embedded CPU.

But I don't think you can deduce anything about the microarchitecture from the fact that Quark will support extremely basic x86 functionality.

I just think it's odd that you'd start with some post-Pentium design and take out cmov, or a completely new design and not have it (that'd at least be less weird)
 

PPB

Golden Member
Jul 5, 2013
1,118
168
106
If it is made on a last-gen process (for Intel), it might hint that they just need to fill fab capacity on a node which currently isn't their main one.

I don't see much of a future in this, as the situation in the microcontroller market is similar to what is happening in the phone/tablet one, just taken to its extreme: is it worth paying a premium to shave off a few watts by moving to a new process, instead of using proven solutions made on really, really old processes that are cheap as dirt? There are some devices where the microcontroller's power consumption makes up a large percentage of the device's total, but in the great majority it accounts for so little that it just defeats the purpose.
 

Pilum

Member
Aug 27, 2012
182
3
81
I just think it's odd that you'd start with some post-Pentium design and take out cmov, or a completely new design and not have it (that'd at least be less weird)
Post-Pentium is not an option anyway; those designs were all out-of-order and too complex, and besides, you have Silvermont for that. The more I think about it, the more I'm sure this is a Siskiyou spin-off, as that base design was made fully synthesizable from the start. Better to go with a clean base design and expand it than to fully reimplement an old architecture like the Pentium for synthesizability. And Siskiyou seems to be intended as an ongoing program, so it will see future development which can be merged into later Quark versions.

And about the ISA... it really depends on what message was intended. It might simply have been meant as an "x87 is included" comment. The person making it may not have been aware that it could be taken as "there's no CMOV". CMOV certainly would be useful for an in-order architecture, so maybe it's in there. We'll just have to wait for more information.
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
Fully synthesized? Nice. If Intel gets big into fully synthesizable circuits then they will do for synthesis tools what they did for computational lithography, and everyone will benefit from Intel's push for solid and reliable software design tools.

Stuff like that makes me wonder if upper management wants to step in. I already see some of our competitive advantages get shared out by improving synthesis tools to handle complex design topologies. Everyone gets to benefit from the work of someone on Intel's payroll. :p
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Stuff like that makes me wonder if upper management wants to step in. I already see some of our competitive advantages get shared out by improving synthesis tools to handle complex design topologies. Everyone gets to benefit from the work of someone on Intel's payroll. :p

That's a good point. The alternative would be to simply internally own the entire process, akin to the Intel compiler department.

Then they'd have best-of-breed synthesis tools for internal use and could selectively license the tools externally for individual companies that aren't in competition with Intel or are using Intel as a foundry.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
"If Intel gets big into fully synthesizable circuits then they will do for synthesis tools what they did for computational lithography, and everyone will benefit from Intel's push for solid and reliable software design tools"

could u pls explain this in english. thanks

This please!

What does that mean, in layman's terms? I assume it has nothing to do with Korg. :eek:

I don't know where to begin, because I'm not really sure how familiar or unfamiliar you folks are with computer-aided layout and automated design tools.

I could try an analogy but odds are high I'll just be rebuked by a handful of "your analogies suck ass" responses...

Let's see, how about if I use building a bridge as an example. As a civil engineer you might design a bridge based on specific local geography attributes (how long a span does the bridge need to cover?), factor in the amount of peak road traffic the bridge needs to endure, and perhaps a preference for building materials based on knowledge of the local weather (concrete and climates that regularly freeze in the winter usually don't work out well, and vice versa for steel structures in warm, humid tropical climates, etc.).

So you, as the engineer, could design said bridge the old-skool way of laying out trusses and support members in CAD or with some other "hand layout" software program, computing stresses and strains and cost, etc. Then tweaking the design by hand to reduce material usage or to improve weight-to-load carrying efficiency and so on until you were ok with the finalized design.

Very manual, very slow, very old-skool... but under the eyes of an experienced engineer the process could go quite fast (obvious false starts and bad designs would not even be attempted), and it leaves open the door of opportunity for something really innovative or a new design breakthrough to transpire. (A eureka moment that leads to a new truss hinge or some such being developed.)

This is also analogous to having programming experts who write in machine language rather than programming in C++ and relying on software compilers.

The alternative (in designing the bridge) is to put your resources into hiring expert software programmers who build very smart search-optimization algorithms, in which the engineer simply lists the design criteria (load, span, budget, preferred materials, etc.) and presses a button in the program... the program then launches into a process of designing bridges, perhaps thousands of prototypes per second, rank-sorts them by the engineer's preferences, and continues optimizing and tweaking them until either a local or global maximum/minimum is found by the genetic/AI algorithm.

Similar to software compilers as well.

That is kind of what a synthesized IC is like. Crudely, but I think the basic premise gets communicated here.

The problem right now with synthesis tools is no different than the challenges with automated bridge design or software compilers - the burden of being savvy and crafty shifts from the engineers (civil engineers for bridges, layout engineers for IC design) to the software programmers, who must now figure out ways to avoid having their programs get hung up on local minima/maxima while at the same time incorporating some level of capability for capturing risk and critical flaws in the designs (which will become tangled messes of spaghetti noodles that no human will be able to debug or verify should things fail to work out properly once the bridge is built or the IC is coming out of the fab).

This is why I say a behemoth the likes of Intel, which already has a functioning and professionally polished internal process for robustly capturing the engineering care-abouts in their software (they do it for both their compilers as well as their computational lithography, not to mention TCAD) would really bring synthesis to a whole new level of capability and functionality.

Too basic? Not basic enough? Analogies suck because I included the mere mention of the possibility of a car being involved (load on the bridge)? Let me know if any of the above fleshed out the situation any better for you guys.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
So why would Intel doing it benefit other companies? Is it because the software vendors would incorporate the knowledge gained from Intel (bugs, features, etc) into the tools and make them available to others?

Couldn't Intel build their own software, or license the libraries under terms that keep the "Intel features" out of the general code tree?

Edit: Oh, I see Tux mostly answered that question.
 
Mar 10, 2006
11,715
2,012
126
The real reason that Intel is likely doing Quark is to exact revenge on ARM for all of the years of BS about "ARM servers" killing its business, "post PC" chants, and Windows RT hype.

Quark isn't here for Intel to make money. It is here in order to start cutting off the oxygen supply to the higher volume parts of the ARM ecosystem that nobody ever thought Intel would be interested in.
 

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
The real reason that Intel is likely doing Quark is to exact revenge on ARM for all of the years of BS about "ARM servers" killing its business, "post PC" chants, and Windows RT hype.

Quark isn't here for Intel to make money. It is here in order to start cutting off the oxygen supply to the higher volume parts of the ARM ecosystem that nobody ever thought Intel would be interested in.

Sorry, that sounds rather silly. Intel is not some charity doing things on the whims of ego. They have shareholders who want to know what the company is doing with their money, i.e., basically making more money, not burning it in some weird vendetta; that idea could only come out of a computer forum full of overactive fantasies. It's also silly to go ranting against ARM, as all companies (Intel included) use PR and "fighting talk" against the competition, and unless you are employed by them or gain financially from them, why even care? Only foolish people get emotionally invested in PR talk, because it is meant to stir emotions; that is the psychology behind it. None of these companies really give a damn about you in reality, just the dollar/pound/euro/yuan/yen bills you give them.

Anyway, it's more the case that Quark is another Intel embedded chip in the long line of embedded chips they have been making for decades. It also serves another, more basic function: it means any remaining 32nm capacity is being utilised, and probably 22nm capacity in the near future. All the sunk costs of their 32nm process are probably paid off by now, so it's a low-risk product for them, and any sales are probably extra money in the kitty.

After all, Intel is the biggest single supplier of embedded chips in the world. Intel producing another line of embedded chips is not a conspiracy; it's just them doing what they have done for decades.
 
Last edited:

bullzz

Senior member
Jul 12, 2013
405
23
81
@Idontcare
that info was a bit too basic for me. What can Intel do that's new when huge companies like Cadence and Synopsys are already doing this?
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
The real reason that Intel is likely doing Quark is to exact revenge on ARM for all of the years of BS about "ARM servers" killing its business, "post PC" chants, and Windows RT hype.

Quark isn't here for Intel to make money. It is here in order to start cutting off the oxygen supply to the higher volume parts of the ARM ecosystem that nobody ever thought Intel would be interested in.

I dunno. ARM Holdings might be one aspect of it, but Intel has created such chips for a long time. The exciting prospect is that these chips can be used in practically any device. Remember, at IDF Intel described these chips as putting the internet in everything. I think that's where they're going with them: 15 years ago, folks would have outright laughed at the notion that "phones" would be smart, internet-connected computing devices. The same goes for watches. There are more examples I could name as well.

The long-term implication of what Intel is doing here is that it is creating chips that allow connectivity in a wide range of products. Homes. Refrigerators. Automobiles. Everything. You think of it, and it can be done. You may laugh now, but people would have laughed at the "smart phone" concept around 1998. Nobody imagined back then that phones would be internet-connected devices with 75Mbps transfer speeds. These embedded chips will allow such tools to be used in, well, everything. Think of something along the lines of Google Glass. That is what these types of chips can do.

It's the smart business move. A scary one at the same time; maybe we'll have a "Minority Report" type of future ;) in the sense that you enter a store and you're recognized by a computing device (I'm not talking about the pre-cog stuff, if anyone here has seen the movie). I'm excited about stuff like Google Glass (which is one example of what these chips can be used in), but at the same time, it's going to usher in a new era. There will be a new "ground truth" like never before. Everyone can film you on a whim with even less discretion than a smartphone provides. Kind of a weird thought. Anyway, stuff like that will be applied to any and every device, automobile, or product you can think of. Only this time, Intel won't miss the boat like they did with mobile. They're thinking ahead of the curve, basically.

Essentially this chip allows internet connectivity and low-power computing in anything and everything. That's where the future is headed, pretty much.
 
Last edited:

erunion

Senior member
Jan 20, 2013
765
0
0
Atom is the new Core, and Quark is the new Atom. The future of computing always lies in smaller, cheaper, higher-volume products. Intel wasn't ready for the post-PC era of computing. Quark signals that Intel is already planning for the next shift, into the post-smartphone era.
 
Last edited:

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
There's a difference between making an embedded chip and a microcontroller chip. Embedded by itself doesn't imply small, low-power, cheap, or high-volume - it only means that it's part of a device that serves a fixed purpose (and as far as I'm concerned smartphones and consoles are NOT embedded).

Any chip can be useful for some embedded purpose and most of the stuff Intel has made for the embedded market is more or less the same as the stuff they've made for other markets. To the best of my knowledge they haven't really made anything new that could be considered microcontroller-class in decades. Quark looks like a move in that direction but I'd hesitate to assume it's going to compete with 8-bit microcontrollers or even Cortex-M class stuff - if we're talking "internet of things" throwing in wifi or even bluetooth automatically raises the cost and power consumption to a certain baseline.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
The real reason that Intel is likely doing Quark is to exact revenge on ARM for all of the years of BS about "ARM servers" killing its business, "post PC" chants, and Windows RT hype.

Quark isn't here for Intel to make money. It is here in order to start cutting off the oxygen supply to the higher volume parts of the ARM ecosystem that nobody ever thought Intel would be interested in.

Intel requires >60% gross margins on everything. This is very deeply ingrained in their corporate culture. They don't do loss leaders. What you describe (dumping low-end x86 parts very cheap to flood out ARM) might (or might not) be a good business plan, but it's not Intel.
 

Cogman

Lifer
Sep 19, 2000
10,286
145
106
This is why I say a behemoth the likes of Intel, which already has a functioning and professionally polished internal process for robustly capturing the engineering care-abouts in their software (they do it for both their compilers as well as their computational lithography, not to mention TCAD) would really bring synthesis to a whole new level of capability and functionality.

You've nailed it, but let me just highlight something. The current state-of-the-art synthesis software is bad, really bad, like "holy hell, how do you still sell this stuff?" bad. The reason it is this horrible is that there are so few players in the business. It is highly specialized and thus pretty darn expensive.

The common hardware description languages (VHDL/Verilog) haven't evolved in years, and their synthesis compilers have evolved even less.

Intel, on the other hand, has a software compiler that is one of the best in the industry. I'm guessing that if they divert some of their software compiler writers toward a synthesis compiler, we could end up with some really good stuff.

My hope is that a good hardware language will eventually lead to much faster CPUs and other components. The thing compilers can do much better than humans is full system analysis and optimization.

Another benefit is that you can have two teams working on making CPUs fast: one team that makes the synthesizer better and another that makes the design code faster. Old platforms could become faster simply by upgrading the synthesizer used to build them.