Xbit Labs: AMD: Improvements of Next-Generation Process Technologies Start to Wane.


MrTeal

Diamond Member
Dec 7, 2003
3,919
2,708
136
I simply showed you that a reputable source showed power figures very similar to mine
No, I linked to a review and said that they didn't state the power supply. You corrected me and said
That is wrong. It is not the same article and likely not the same power supply.

The article you linked shows a constant draw of 2W regardless of capacity. That's due to the chips themselves using similar power regardless of whether they're 1Gbit or 2Gbit. You don't think that you might save a couple of watts if you go from 16 x 2Gbit chips on one 4GB DIMM, or worse yet 32 x 1Gbit chips on two 2GB DIMMs, to 4 x 2Gbit chips soldered beside the CPU?
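A quick back-of-the-envelope sketch of how much the chip count alone could matter (the per-chip wattage here is an assumed figure for illustration, not a number from either review):

[CODE]
# Back-of-the-envelope: DRAM power scales mostly with chip count, not capacity.
# The per-chip figure below is an assumed value for illustration, not a
# measurement from either review.
WATTS_PER_CHIP = 0.1  # assumed average draw per DRAM device

configs = {
    "1 x 4GB DIMM (16 x 2Gbit chips)": 16,
    "2 x 2GB DIMMs (32 x 1Gbit chips)": 32,
    "4 x 2Gbit chips soldered beside the CPU": 4,
}

for name, chips in configs.items():
    print(f"{name}: ~{chips * WATTS_PER_CHIP:.1f} W")
[/CODE]

Whatever the exact per-chip number is, an 8x difference in chip count is where that "couple of watts" comes from.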

I used an SD card on both systems.
And I am not referencing your numbers in my comments, I'm referring to the AT numbers. Without knowing about the system you spec'd, I can't comment on it at all. The AT review I listed showed 32.2W draw at the outlet running Cinebench.

Why? Can you mount an ARM GPU on x86 and vice versa?

If not, then the GPU is strictly tied to the CPU platform.
I don't see any reason why you couldn't interface an ARM CPU to a desktop graphics solution. There are even ARM dev kits with PCI-E slots, so dev time wouldn't even be that bad.

I think Newegg disagrees with you. Moreover, you can't buy an ARM CPU without the board.
You're kidding, right? I've integrated ARM uPs and uCs into dozens of designs. They're easily available by themselves, single quantity, next day shipping.
http://search.digikey.com/us/en/products/AM3703CBP/296-28404-ND/2596753
That also has no GPU, BTW; if you want a media processor, you're free to mate it with whatever you like.

Following this reasoning, we should say that integrating the GPU in the CPU is a bad move.

Another example: if you need WiFi, and the CPU gives you WiFi for free, then you're going to save power. Aren't you?
No, that says nothing about whether it's a good idea to integrate the GPU onto the die.

You seem to be missing the entire point. Your comments on this thread are speaking from a position of authority on the ARCHITECTURE of X86 vs ARM. You talked about how the X86 decode causes a power penalty and how the X86 architecture is outdated. You then proceeded to back it up with an anecdote about a SYSTEM you spec'd for a client, showing how power hungry and costly X86 is compared to ARM.

My contention with that is that it's completely meaningless. Since I highly doubt you sourced E350s and designed an embedded system around them, I'm going to go out on a limb and guess that you chose a mITX motherboard, which is an entirely different class of system than the Origen. If you want to look at just the integer performance of the Exynos vs the E350, that's fine, but it's useless for comparing ISAs or power draw. Even within a segment as similar as desktop processors you can get wildly different performance/watt, performance/area and raw performance between BD and SB, changing with each benchmark, and those have the same ISA. Trying to compare the efficiency and cost of two different ISAs by looking at prebuilt boards that serve entirely different market segments, judged on a single metric?

This is obviously going nowhere, so I'll bow out here. I actually agree that X86 isn't likely to push into the <5W space, but the argument you presented to support ARM is very weak and I don't think further discussion will change your mind on it.
 

ncalipari

Senior member
Apr 1, 2009
255
0
0
I believe his point was that setting up a straw-man argument doesn't really help anyone to understand the issue at hand, or even to understand your point of view and how you reached it. Just in case you didn't notice, this was also the point of various other posts on this page.

I was being ironic, much like Steve Jobs was with his very short replies.

Sorry, I really don't mean to offend, but... Could you possibly post a different example, one that's less contrived, and use numbers from that instead? For a start, I'd recommend comparing specific (please name them) devices of the same form factor, since screens are rather power-hungry.

I was comparing two systems available on the market, without any display, both running Linux from the console.

Once you come up with new numbers, could you help explain to us why the current performance/power numbers necessarily imply inherent flaws in necessary commonalities of x86 CPUs? I can't really follow some of your leaps:

To oversimplify the matter, we can see 2 kinds of architectures: RISC, which can only handle simple operations like ADDs, and CISC, which also handles more complex operations like MULTIPLY.

History has shown us that RISC CPUs are much better, as they scale up better within a smaller power envelope.

x86 is CISC. It can't become a RISC platform without dropping all the legacy compatibility that keeps x86 afloat, and by 1994 it became evident that it couldn't scale up further while remaining CISC.

The solution Intel picked was to make a RISC CPU with a CISC-to-RISC decoding engine. This way it has been able to scale up well to today, with a decent power envelope while keeping legacy compatibility, at the cost of losing efficiency in the low-power zone.

This approach made x86 what it is today.
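A toy sketch of what that decode step amounts to (the instruction names and micro-op sequences below are invented for illustration; they are not real x86 or ARM encodings): every complex instruction has to be cracked into simple RISC-like micro-ops before anything executes, and that translation is repeated for every instruction.

[CODE]
# Toy model of an x86-style front end cracking CISC instructions into
# RISC-like micro-ops. The instructions and micro-op sequences are made up
# for illustration; real decoders are far more involved.

UOP_TABLE = {
    "ADD reg, reg":   ["add"],
    "ADD reg, [mem]": ["load", "add"],
    "MUL [mem], reg": ["load", "mul", "store"],
}

def decode(instruction):
    """Crack one CISC-style instruction into its micro-op sequence."""
    return UOP_TABLE[instruction]

program = ["ADD reg, [mem]", "MUL [mem], reg", "ADD reg, reg"]
for insn in program:
    print(f"{insn:16} -> {' + '.join(decode(insn))}")
[/CODE]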

In case you don't realize why these statements seem confusing to the rest of us, here are some questions they raise:

Power gating is a feature that's also used in some x86 CPUs. Is the power gating seen in those cases inherently inferior to what's seen in ARM CPUs?

yes

If so, in what ways is it inferior and why is this the case?

Simply put, Intel hasn't invested as much as ARM has in this. There has not been as much pressure on Intel to save power as there has been on ARM.

Intel was pressured for performance, ARM for power efficiency. Can we call it natural selection?

Why can't the implementation seen in ARM CPUs be adapted for use in x86 CPUs?

Another reason is that the simplest way to gain the power gating capability of ARM would be to drop legacy support, which is not viable for Intel.

What is this theoretical lower limit that is so cripplingly large that power-efficient x86 GPUs are impossible to make?

Maybe you meant CPUs?

x86 can be efficient in the 10-100W power envelope. Below that, and possibly above it, ARM is much more efficient.

You've said that the theoretical lower limit for decoder power consumption is "VERY HIGH". This implies that you know an actual value for this lower limit; what is it? And how was this number reached?

Yes, we can estimate two limits.

The first is much simpler to compute but grossly underestimates the real limit: it is obtained by relating information to entropy. According to the Turing axioms and digital physics, you need to spend a minimal amount of energy to process information.

Decoding is a very expensive step because you are doing a very information-intensive process with no computational gain (you are digesting rather than processing information).

A limit closer to the real one can be computed by factoring in the theoretical cost of decoding the information, expressing the lower power limit as a function of the amount of information processed per second. The faster the CPU, the greedier the decode step.

An even closer limit would account for technological limits, but that's beyond the scope of this post.
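To put a rough number on that first bound, reading it as a Landauer-style limit of kT·ln(2) joules per bit (the decode rate and bits-per-instruction below are assumed example figures, not measurements):

[CODE]
import math

# Landauer-style bound: processing/erasing one bit of information costs
# at least k_B * T * ln(2) joules. The decode rate and bits-per-instruction
# are assumed example figures.
K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

joules_per_bit = K_B * T * math.log(2)   # ~2.9e-21 J

insns_per_second = 1e9   # assumed: ~1 billion instructions decoded per second
bits_per_insn = 100      # assumed: ~100 bits of information handled per decode

power_floor = joules_per_bit * insns_per_second * bits_per_insn

print(f"energy per bit: {joules_per_bit:.2e} J")
print(f"power floor   : {power_floor:.2e} W")   # ~3e-10 W
[/CODE]

That comes out around 3e-10 W, many orders of magnitude below what any real decoder dissipates, which is exactly why this pure entropy bound grossly underestimates the real limit.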

If you are interested, I would suggest you read this paper:

http://dl.acm.org/citation.cfm?id=592648.592673&coll=DL&dl=GUIDE&CFID=59033536&CFTOKEN=44306525
 

Blue Shift

Senior member
Feb 13, 2010
272
0
76
I was really hoping that you would re-do your evaluation with more realistic hardware, or start explaining your opinions and assertions instead of presenting them as fact. Instead, you have answered nothing.

I was being ironic, much like Steve Jobs was with his very short replies.
Irony and sarcasm do not work on the internet.
I was comparing two systems available on the market, without any display, both running Linux from the console.
Again, which systems did you compare and why are they representative?

To oversimplify the matter, we can see 2 kinds of architectures: RISC, which can only handle simple operations like ADDs, and CISC, which also handles more complex operations like MULTIPLY. Simply put, Intel hasn't invested as much as ARM has in this. There has not been as much pressure on Intel to save power as there has been on ARM.

Intel was pressured for performance, ARM for power efficiency. Can we call it natural selection?

Another reason is that the simplest way to gain the power gating capability of ARM would be to drop legacy support, which is not viable for Intel.
None of those are reasons why power gating is inherently inferior for x86 CPUs. You've said something fuzzy about power gating precluding legacy support, and a bunch of general stuff we all know about RISC.

Maybe you meant CPUs?

x86 can be efficient in the 10-100W power envelope. Below that, and possibly above it, ARM is much more efficient.
This is your opinion, which you have presented many times. Again, you present no conclusive evidence to this effect.

Yes, we can estimate two limits.

The first is much simpler to compute but grossly underestimates the real limit: it is obtained by relating information to entropy. According to the Turing axioms and digital physics, you need to spend a minimal amount of energy to process information.

Decoding is a very expensive step because you are doing a very information-intensive process with no computational gain (you are digesting rather than processing information).

A limit closer to the real one can be computed by factoring in the theoretical cost of decoding the information, expressing the lower power limit as a function of the amount of information processed per second. The faster the CPU, the greedier the decode step.

An even closer limit would account for technological limits, but that's beyond the scope of this post.

If you are interested, I would suggest you read this paper:

http://dl.acm.org/citation.cfm?id=592648.592673&coll=DL&dl=GUIDE&CFID=59033536&CFTOKEN=44306525

There isn't a number buried in there anywhere. What are these limits that you have estimated?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
:hmm:

Hmmm, I'm starting to notice you tend to state your opinion as if it were fact, which is fine if you can back it up when asked to, but you have been repeatedly asked to do so and now it's just getting sad.

Do you have any specific data, facts, numbers to support your myriad of assertions in this thread? You do know the difference between baseless opinion and supported facts, yes?

Up until now I have been giving you the benefit of the doubt as anyone in the industry has to talk with a certain level of vagueness when posting publicly, but your vagueness goes beyond stuff that would need to be vague for purposes of non-disclosure and extends well into the regime most would consider to be simply "talking out your a*s".

This is the time for you to establish your credibility in this community. Judging by the direction this thread has taken, I strongly advise you to start putting up some real facts, actual data, etc., as you are at risk of developing the sort of reputation here that is generally reserved for trolls.

We want you to be the real deal; we all benefit when bona fide industry insiders come here, and there are many of us here already, but we don't need wannabes walking around attempting to talk the talk when they can't walk the walk.

You've been asked to post up some info on the straight and narrow. You risk much at this point by continuing to post in a manner that suggests you cannot, which will lead the community to conclude it is because you simply have none to post.
 

ncalipari

Senior member
Apr 1, 2009
255
0
0
Irony and sarcasm do not work on the internet.
Again, which systems did you compare and why are they representative?

I compared systems available on the market at that moment, or shortly after. I think they were much more representative than models that weren't available. What do you think?

You've said something fuzzy about power gating precluding legacy support

It's very simple, nothing fuzzy about it: drop legacy support, along with all those stupid instructions, and you get much higher power efficiency.

Am I the only one who remembers the birth of MMX?

and a bunch of general stuff we all know about RISC.

So if you know that, why do you still defend the claim that, on a given production process, x86 can be more efficient than ARM?

This is your opinion, which you have presented many times. Again, you present no conclusive evidence to this effect.

Well, you just need to do a bit of math:

decoder + CPU > CPU

where the ordering criterion is power consumption.

There isn't a number buried in there anywhere.

You need functions, not numbers. P.S.: I guess you didn't read the paper, did you?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Then you know you've wasted your time with that post.

Not hardly. You wouldn't be the first to act like you are an insider by claiming to be hiding behind a supposed NDA.

There were/are plenty of questions posed to you in this thread regarding claims you made which cover material that would not be under NDA.

So far you've talked a lot but really have avoided actually proving out much of anything from your claims.
 

ncalipari

Senior member
Apr 1, 2009
255
0
0
So far you've talked a lot but really have avoided actually proving out much of anything from your claims.

Like what? That there is a theoretical lower limit to the power needed to process information?

And that that limit is a function of the entropy of the information?

You learn that in college. I'm not paid to teach CS to people who haven't graduated.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
Like what? That there is a theoretical lower limit to the power needed to process information?

And that that limit is a function of the entropy of the information?

You learn that in college. I'm not paid to teach CS to people who haven't graduated.

Cheers mate, and merry Xmas! You can put your clothes back on now, and sit down and relax with us :)
 

Soccerman06

Diamond Member
Jul 29, 2004
5,830
5
81
Like what? That there is a theoretical lower limit to the power needed to process information?

And that that limit is a function of the entropy of the information?

You learn that in college. I'm not paid to teach CS to people who haven't graduated.

What does Counter-Strike have to do with teaching graduate students... :hmm:
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Excuse me, did I crap in your wheaties or something?

I have no idea what you are going on about in terms of how it relates to me specifically but member callouts do violate the posting guidelines.

Got yourself a live one on the line. I doubt it's a marlin, more likely a bull shark. It's not like we haven't discussed the x86 decoders before. Yeah, they are power hungry. The real question here is when Intel dumps them. Every member here knows they are power hungry. Soon enough Intel will emulate x86 and the decoders will be gone. According to what I am reading here, LOL, Intel only needs to lose the decoders and they're in the money. Funny how I've been debating that Intel is going to cut the x86 decoders, and this guy thinks it's the reason Intel will fail. LOL.
 

bononos

Diamond Member
Aug 21, 2011
3,938
190
106
Might be sidetracking a bit here, but aren't x86 decoders only a very tiny bit of the chip real estate? Why are they considered power hungry?
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Might be sidetracking a bit here, but aren't x86 decoders only a very tiny bit of the chip real estate? Why are they considered power hungry?

They're not. The AMD Geode x86 chips were sold as sub-1W chips for years and exist in a variety of embedded products.
 

podspi

Golden Member
Jan 11, 2011
1,982
102
106
I have a hard time believing that the perf/watt impact from the decoder would be greater than the impact from emulating x86 in software...
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Have you ever used one of those Geode CPUs?

I guess not, otherwise you wouldn't comment on them.

I have, what's your point? It's an x86 CPU that's existed in the last decade, hitting power levels you claim are impossible. BTW, Intel just released some new dual-core Atoms that consume less than 3W for the CPU...
 

ncalipari

Senior member
Apr 1, 2009
255
0
0
How to win friends

Do you go looking for friends on a public forum? Sounds sketchy to me.

and influence people.

I never tried to influence anyone. I guess everyone is smart enough to have their own opinions. I prefer to speak about facts.

I have, what's your point? It's an x86 CPU that's existed in the last decade,

Which OS have you been using on that Geode machine? Is it still working?

hitting power levels you claim are impossible.

Where exactly did I say that?

BTW, Intel just released some new dual-core Atoms that consume less than 3W for the CPU...

link?
 