Intel Quark architecture, "1/10th" the power use of Atom

Page 4 - AnandTech community forum

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
Intel released documentation on Quark, including a pretty detailed user manual:

https://communities.intel.com/servl...102-2-25117/Intel Quark Core HWRefMan_001.pdf

Initial reaction to reading the feature set made me think "that sounds a lot like a 486." Then I saw the block diagram on page 20 (figure 3). Compare with here:

http://intel-vintage-developer.eu5.org/DESIGN/INTARCH/PIX/272713_1.GIF

I think it's fair to say this is basically a 486 with some revisions, like double the L1 cache (and apparently they support write-back instead of the traditional write-through, but only in some versions of the processor). The people who are saying this is like a Pentium 3 are way off.

There's no L2 cache either, but there is 512KB of on-chip SRAM. So it'll be good for embedded stuff that can make good use of the SRAM, but not so good for random Linux programs that don't. The lack of L2 probably means it will perform a lot worse than 8x a 50MHz 486, even if its DRAM access latency is probably lower than on the old 486 boxes.
 

Nothingness

Diamond Member
Jul 3, 2013
3,309
2,382
136
Intel should stop putting dumbed-down x86 everywhere and start being creative. This is starting to make very little sense; being x86 brings basically nothing here. Welcome back to the early '90s!
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
Intel should stop putting dumbed-down x86 everywhere and start being creative. This is starting to make very little sense; being x86 brings basically nothing here. Welcome back to the early '90s!

People balked at how long Intel used the original Atom uarchs without much modification. I wonder if there'll be any balking at them using a slight enhancement of a 24-year-old uarch.

Here are some more documents:
https://communities.intel.com/servl...6-102-2-25118/Intel Quark Core_DevMan_001.pdf
https://communities.intel.com/servl...y/21828-102-2-25120/329676_QuarkDatasheet.pdf

This design is old, no matter how you look at it. There's a reason no one uses a shared L1 cache anymore - needing a 128-bit-per-clock instruction interface to a 32-byte queue just to avoid conflicts between data and instructions is a bad choice at this power level.

They didn't even update the FPU. It's unpipelined (and thus slow), just like on the 486, on top of being x87, which sucks away additional performance - the vital fxch instruction takes 4 cycles.

I just don't see where this fits in. For small embedded stuff it sucks because there's no on-chip flash, and the bit-banging capability is extremely limited. For real time it sucks because it relies on caching for decent performance - you can turn caching off, but then you only get 16KB for both code and data, which is too little for very much. The 512KB SRAM is surely not as fast as L1, since it at least has to go over the 486's bus.

It sucks at DSP because the FPU is slow (fadd averages 10 cycles and fmul 11 cycles! Compare with Cortex-M4, which has a single-cycle SP FPU with a normal instruction set) and integer multiplies are relatively slow too (6-cycle imul vs 1-cycle mla on Cortex-M4).

As an applications processor it sucks because it needs an L2 cache at its clock speed - the same reason the original iPhone was so slow compared to the iPhone 3GS - and note that it's further hampered by only having a 16-bit external data bus. In an utterly bizarre move they made the memory controller compatible with DDR3 - is that really what you want to use with this? 16-bit DDR3?

The x86 bang for your buck in code density is totally not worth it; ARMv7-M gives you comparable density while letting you do far more in one cycle.

The clock speed, 400MHz, is high vs typical microcontrollers thanks to being on a much smaller process than most of them (though if you look at the Cortex-M4 embedded in OMAP5, for instance, the story is quite different). But as I expected, you pay the price with a total lack of mixed-signal peripherals - no ADCs or DACs - which makes it useless for yet more microcontroller applications.

I'm just not seeing the big draw here. Intel is bringing nothing new or interesting to the table, on the contrary something very old - and x86 in what could be its least relevant placement yet. I know they're pushing big on this "Internet of Things" angle but I don't see where their advantage is here.
 

SOFTengCOMPelec

Platinum Member
May 9, 2013
2,417
75
91
but there is 512KB of on-chip SRAM.

I don't see where their advantage is here.

The combination of the huge built-in RAM (512KB) and 486 capabilities means it will run DOS just great.

Then next year, exactly 12 months after the above release, they can re-release the Intel 4004, keeping its original 100KHz or so clock speed and 4 bit size, to make sure it does not compete with their more lucrative server chips.

Following on from that, in another 12 months, they can re-release a 7400 Quad Nand gate chip.

-----------------------------

EDIT: I've looked into this further. The chip(s) actually do seem to have some merit; my comments were relying too much on ONLY what was said in this thread.

Some embedded (and similar) applications would value the combination of a microprocessor of modest ability with the option to put their own IP hardware functionality on the chip, built on a very modern process. The power-efficiency and operating-frequency benefits could actually win it a large segment in some markets.
 
Last edited:

Khato

Golden Member
Jul 15, 2001
1,288
367
136
I doubt that even Intel believes they have any advantage for the intended markets with Quark, save for price possibly. Rather this seems like a proof of concept type product where the intention was to get something out the door to cover the bases. I actually wonder if Quark was more than just a research project half a year ago - this doesn't seem like the kind of product that Otellini would have had any interest in pursuing.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
I doubt that even Intel believes they have any advantage for the intended markets with Quark, save for price possibly. Rather this seems like a proof of concept type product where the intention was to get something out the door to cover the bases. I actually wonder if Quark was more than just a research project half a year ago - this doesn't seem like the kind of product that Otellini would have had any interest in pursuing.

You're probably right. It's just annoying that they're hyping this and people are, as usual, tripping over themselves to praise Intel. All the bluster about the size and power consumption vs Silvermont means nothing without looking at performance.
 

Khato

Golden Member
Jul 15, 2001
1,288
367
136
You're probably right. It's just annoying that they're hyping this and people are, as usual, tripping over themselves to praise Intel. All the bluster about the size and power consumption vs Silvermont means nothing without looking at performance.

Haha, agree completely with that! It's definitely interesting to see that Intel's intending to get into the market, but this initial entry looks to be far less 'exciting' than the original Atom. Well, at least in terms of the CPU core. The SoC portion doesn't seem too bad - sure there are a number of other useful additions that could be made for the intended markets, but the basics are reasonable.

The real question is whether it's just going to sit there for five years or not. If Intel actually designs a small, simple core for the next iteration, then we have something to talk about. Something like that could even be 1-1.5 years away, given how much easier such a core would be to design compared to Atom/Core.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
Well, whatever the case, I'm confident that they do need to design that new core to make a useful product. And that will be a non-negligible investment, even if not a huge one.

But ignoring the core, the more I look at this thing the more ridiculous it seems. According to the data sheet it ONLY supports DDR3. It needs separate 1V, 1.05V, 1.5V, 1.8V, and 3.3V supplies - contrast with the fact that some microcontrollers are coming with integrated regulators and can run straight off of a battery supply. But the ones that don't at least don't come with all these rails.

I'm also struggling to find any indication that you can clock the core or DDR3 at anything other than 400MHz, but I don't want to rule it out until I'm positive.

This thing seriously looks close to a product that could have been out 20 years ago, minus the big SRAM. If another company released something like this (minus x86 compatibility) it'd get zero positive attention.
 
Last edited:

sm625

Diamond Member
May 6, 2011
8,172
137
106
God I hope these don't end up in anything running Windows and on the shelves of Best Buy. Please, not another netbook repeat.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
God I hope these don't end up in anything running Windows and on the shelves of Best Buy. Please, not another netbook repeat.

I don't think you have to worry about that happening; this thing has no display controller, so it would need a discrete GPU. It'd be more expensive than a low-end Atom netbook and dramatically less powerful - assuming running Windows is even really possible.
 

pm

Elite Member Mobile Devices
Jan 25, 2000
7,419
22
81
You've nailed it, but let me just highlight something. Current state of the art synthesis software is bad, really bad, like "Holy hell how do you sell this stuff still?" bad. The reason it is this horrible is because there are so few players in the business. It is highly specialized and thus pretty darn expensive.

I've been using Design Compiler a lot, and I think it's pretty amazing. I'm pretty blown away by it, actually - even if I have to hand-hold it occasionally and it sometimes does some incredibly dumb things, overall it's really impressive to me.

The common synthesis languages used haven't evolved in years (VHDL/Verilog). And their compilers have evolved even less.
I disagree a bit with this too. SystemVerilog is supported by the tools and it's pretty impressive... and it's new(ish). The new UPF format for power is pretty cool and also has pretty broad industry support.
 

SOFTengCOMPelec

Platinum Member
May 9, 2013
2,417
75
91
Why is my coffee Blue? *dies*

You have to shut the glass 'window' at the top of the coffee maker, push the rubber 'boot' at the bottom (labelled 'reboot'), and put a packet of anti-BSOD in it.
Also try updating to a packet of 'Brand Included Onion Sugar', called B.I.O.S.
 

bononos

Diamond Member
Aug 21, 2011
3,938
190
106
How does the Galileo compare with the Raspberry Pi? And does this Galileo thing have the same analog input capability?
 

pm

Elite Member Mobile Devices
Jan 25, 2000
7,419
22
81
I moved Cogman's interesting post about HDL and Synthesis to Highly Technical because as moderator I often tell people to stay on-topic, and I hate being a hypocrite by pulling threads off-topic myself.

So, if you are interested in a discussion of HDL/RTL languages vs. programming languages and potential enhancements to synthesis flows, we are discussing that here:
http://forums.anandtech.com/showthread.php?t=2345870
 

Nothingness

Diamond Member
Jul 3, 2013
3,309
2,382
136
Intel released documentation on Quark, including a pretty detailed user manual:

https://communities.intel.com/servl...102-2-25117/Intel Quark Core HWRefMan_001.pdf
Funny to see the pipe stages:
- instruction fetch
- stage 1 decode
- stage 2 decode
- execution
- register write-back
Out of 5 stages, two are required for decoding. Let's see what people have to say about the x86 tax on small chips :biggrin:

Note this is indeed the 486 pipeline as described in John Crawford's article "The Execution Pipeline of the Intel i486 CPU," published in 1990. It's funny to see how some comments about RISC in that article made their way into the Quark core manual.
 

StrangerGuy

Diamond Member
May 9, 2004
8,443
124
106
Well, whatever the case, I'm confident that they do need to design that new core to make a useful product. And that will be a non-negligible investment, even if not a huge one.

But ignoring the core, the more I look at this thing the more ridiculous it seems. According to the data sheet it ONLY supports DDR3. It needs separate 1V, 1.05V, 1.5V, 1.8V, and 3.3V supplies - contrast with the fact that some microcontrollers are coming with integrated regulators and can run straight off of a battery supply. But the ones that don't at least don't come with all these rails.

I'm also struggling to find any indication that you can clock the core or DDR3 at anything other than 400MHz, but I don't want to rule it out until I'm positive.

This thing seriously looks close to a product that could have been out 20 years ago, minus the big SRAM. If another company released something like this (minus x86 compatibility) it'd get zero positive attention.

The entire concept screams ARM envy, except without the business model.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
Can someone explain to me, what the purpose is? And what the vision is for this?

Here is my take on it:

As it is, it makes good sense to me.
As I understand it, this is to cut down development cost and improve time to market for the end product - to provide what only a big player like Intel can:

Solid tools
Solid documentation
Solid sales and support
Tried tech as a platform (albeit, as I understand it, stone-age - but that can be improved)
A single point of entry for the customer.

Makes absolutely sense to me.

I have seen so many resources wasted on programming that absolutely had to be finished right now, every time, so very little could be reused. Lalala project management that just burns time on coordination. Documentation that was far worse than bad because management don't give a crap (much like many engineers/programmers, btw). Tons of overhead and stretched development times. The best specialists reinventing the wheel over and over. Man. If this can help situations like that, it's a huge step forward for the business, especially for time to market and using resources better.

What I don't understand is this: Intel has always been in embedded. Now they've made it highly synthesizable - or what? Why did it take them so long to get there? I mean, what do I not understand here?
 
Last edited:

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
What I don't understand is this: Intel has always been in embedded. Now they've made it highly synthesizable - or what? Why did it take them so long to get there? I mean, what do I not understand here?

Go home krumme, I'm drunk.

No seriously, LOL, what do you mean here. I doubt I have the answer to your question but I'm sure I have an opinion on it (;)), but what is your question exactly?

"What do I not understand here?"

You got me there!
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
Go home krumme, I'm drunk.

No seriously, LOL, what do you mean here. I doubt I have the answer to your question but I'm sure I have an opinion on it (;)), but what is your question exactly?

"What do I not understand here?"

You got me there!

Yeaa, that was a <edited out profanity> up question lol. What I don't know is whether I interpreted this situation right.

Because all this technical talk is way over my head, e.g. the quality of Verilog. And Intel's presentation doesn't make sense to me - technical talk without any business explanation. A tiny, puny synth SoC and so on...? :)


No profanity allowed here
Markfw900
 
Last edited by a moderator: