"RAM power requirements measured a constant 2W per DIMM"
http://en.wikipedia.org/wiki/DIMM
So yes, dropping from 4GB to 1GB would save power as he said. It looks really bad when you get owned by your own links BTW.
No, I linked to a review and said that they didn't state the power supply. You corrected me and said:
"I simply showed you that a reputable source showed power figures very similar to mine."
That is wrong. It is not the same article and likely not the same power supply.
The article you list shows a constant draw of 2W, regardless of capacity. That's due to the chips themselves using similar power regardless of whether they're 1Gbit or 2Gbit. You don't think that you might save a couple watts if you go from 16x2Gb chips on one 4GB DIMM or worse yet 32x1Gb chips on 2x2GB DIMMS to 4x2GB chips soldered beside the CPU?
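For what it's worth, here is a rough sketch of the arithmetic being argued over. The per-DIMM figure is the one from the Wikipedia quote above; the per-soldered-chip figure is purely my assumption for illustration:

```python
# Back-of-envelope only: the per-soldered-chip figure is a guess, not a measurement.
WATTS_PER_DIMM = 2.0           # roughly constant per module, per the quote above
WATTS_PER_SOLDERED_CHIP = 0.2  # assumed figure for a DRAM chip soldered beside the CPU

configs = {
    "2 x 2GB DIMMs (32 x 1Gbit chips)": 2 * WATTS_PER_DIMM,
    "1 x 4GB DIMM (16 x 2Gbit chips)":  1 * WATTS_PER_DIMM,
    "4 DRAM chips soldered beside the CPU": 4 * WATTS_PER_SOLDERED_CHIP,
}

for name, watts in configs.items():
    print(f"{name}: ~{watts:.1f} W")
# The argument: consolidating memory into fewer modules (or onto the board)
# can plausibly save a watt or two, regardless of total capacity.
```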
And I am not referencing your numbers in my comments, I'm referring to the AT numbers. Without knowing about the system you spec'd, I can't comment on it at all. The AT review I listed showed 32.2W draw at the outlet running Cinebench.
I used an SD card on both systems.
"I don't see any reason why you couldn't interface an ARM CPU to a desktop graphics solution. There are even ARM dev kits with PCI-E slots, so dev time wouldn't even be that bad."
Why? Can you mount an ARM GPU on x86, and vice versa? If not, then the GPU is strictly tied to the CPU platform.
"You're kidding, right? I've integrated ARM uPs and uCs into dozens of designs. They're easily available by themselves, single quantity, next day shipping."
I think Newegg disagrees with you. Moreover, you can't buy an ARM CPU without the board.
"No, that says nothing about whether it's a good idea to integrate the GPU onto the die."
Following this reasoning, we should say that integrating the GPU into the CPU is a bad move.
Another example: if you need wifi, and the CPU gives you wifi for free, then you're going to save power, aren't you?
I believe his point was that setting up a straw-man argument doesn't really help anyone to understand the issue at hand, or even to understand your point of view and how you reached it. Just in case you didn't notice, this was also the point of various other posts on this page.
Sorry, I really don't mean to offend, but... Could you possibly post a different example, one that's less contrived, and use numbers from that instead? For a start, I'd recommend comparing specific (please name them) devices of the same form factor, since screens are rather power-hungry.
Once you come up with new numbers, could you help explain to us why these current performance/power figures necessarily imply inherent flaws in necessary commonalities of x86 CPUs? I can't really follow some of your leaps:
In case you don't realize why these statements seem confusing to the rest of us, here are some questions they raise:
Power gating is a feature that's also used in some x86 CPUs. Is the power gating seen in those cases inherently inferior to what's seen in ARM CPUs?
If so, in what ways is it inferior and why is this the case?
Why can't the implementation seen in ARM CPUs be adapted for use in x86 CPUs?
What is this theoretical lower limit that is so cripplingly large that power-efficient x86 GPUs are impossible to make?
You've said that the theoretical lower limit for decoder power consumption is "VERY HIGH". This implies that you know an actual value for this lower limit; what is it? And how was this number reached?
"Irony and sarcasm do not work on the internet."
I was being ironic, very much like Steve Jobs was with his very short replies.
"Again, which systems did you compare and why are they representative?"
I was comparing two systems available on the market, without any display, both running Linux from the console.
"None of those are reasons why power gating is inherently inferior for x86 CPUs. You've said something fuzzy about power gating precluding legacy support, and a bunch of general stuff we all know about RISC."
To oversimplify the matter, we can see two kinds of architectures: RISC, which only handles simple operations like ADDs, and CISC, which also handles more complex operations like MULTIPLY.
History has shown us that RISC CPUs scale up better within a smaller power envelope.
x86 is CISC. It can't become a RISC platform without dropping all the legacy compatibility that keeps x86 afloat, and by 1994 it became evident that it couldn't scale up further as a pure CISC design. The solution Intel picked was to build a RISC-like core with a CISC-to-RISC decoding engine in front of it. This way it could keep scaling up to today, with a decent power envelope and full legacy compatibility, at the cost of losing efficiency at the low end of the power range. This approach made x86 what it is today.
Simply put, Intel hasn't invested as much in this as ARM has. There has not been as much pressure on Intel to save power as there has been on ARM. Intel was pushed for raw computing power, ARM for power efficiency. Can we call it natural selection?
Another reason is that the simplest way to gain ARM's power gating capability would be to drop legacy support, which is not viable for Intel.
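To illustrate the CISC-to-RISC decoding engine mentioned above with a toy sketch: one memory-operand "CISC" instruction expands into several simple micro-ops, while a register-register instruction decodes one-to-one. The mnemonics and expansion rules here are made up for illustration and are not a model of any real Intel decoder:

```python
# Toy CISC-to-RISC decode table. Real decoders are vastly more complex; the
# point is only that the front end does extra work translating complex
# instructions into simple micro-ops before execution.

def decode(cisc_instruction: str) -> list[str]:
    table = {
        # "add [mem], reg" style instruction -> load / add / store micro-ops
        "ADD [addr], R1": ["LOAD  T0, [addr]",
                           "ADD   T0, T0, R1",
                           "STORE [addr], T0"],
        # a simple register-register add decodes one-to-one
        "ADD R2, R1": ["ADD   R2, R2, R1"],
    }
    return table[cisc_instruction]

for insn in ("ADD [addr], R1", "ADD R2, R1"):
    print(insn, "->", decode(insn))
```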
"This is your opinion, which you have presented many times. Again, you present no conclusive evidence to this effect."
Maybe you meant CPUs?
x86 can be efficient in the 10-100 W power envelope. Below that, and possibly above it, ARM is much more efficient.
Yes, we can estimate two limits.
One, much simpler to compute but grossly underestimating the real limit, is obtained by relating information to entropy. According to the Turing axioms and digital physics, you need to spend a minimal amount of energy to process information.
Decoding is a very expensive step because it is a very information-intensive process with no computational gain (you are digesting rather than processing information).
A limit closer to the real one can be computed by factoring in the theoretical cost of decoding information, and expressing the lower power bound as a function of the amount of information processed per second. The faster the CPU, the greedier the decode step.
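As a concrete (and heavily hedged) illustration of what a bound "as a function of the amount of information processed per second" could look like, here is a Landauer-style calculation. Whether this is the limit being referred to is my assumption, and the instruction rate and bits-per-instruction figures are invented:

```python
import math

# Landauer's principle: irreversibly processing (erasing) one bit costs at
# least k*T*ln(2) joules. Applying it to instruction decoding is an assumption
# about what the claimed lower bound refers to; the workload numbers are made up.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

def landauer_power(bits_per_second: float) -> float:
    """Minimum power (watts) to irreversibly process bits_per_second."""
    return bits_per_second * k_B * T * math.log(2)

instructions_per_second = 1e9   # hypothetical decoder throughput
bits_per_instruction = 32       # hypothetical average instruction size

print(f"Landauer floor: {landauer_power(instructions_per_second * bits_per_instruction):.2e} W")
# ~9e-11 W for these assumed figures.
```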
An even closer limit would account for technological limits, but that's beyond the scope of this post.
If you are interested, I suggest you read this paper:
http://dl.acm.org/citation.cfm?id=592648.592673&coll=DL&dl=GUIDE&CFID=59033536&CFTOKEN=44306525
"anyone in the industry has to talk with a certain level of vagueness when posting publicly"
There isn't a number buried in there anywhere.
Then you know you've wasted your time with that post.
So far you've talked a lot but really have avoided actually proving out much of anything from your claims.
Like what? That there is a theoretical lower limit to the power needed to process information?
And that that limit is a function of the entropy of the information?
You learn that in college. I'm not paid to teach CS to people without a degree.
Excuse me, did I crap in your wheaties or something?
I have no idea what you are going on about in terms of how it relates to me specifically but member callouts do violate the posting guidelines.
Might be sidetracking a bit here, but aren't x86 decoders only a very tiny bit of the chip real estate? Why are they considered power hungry?
tweakboy is an upstanding citizen, very important, very interested in doing only what is very best for these forums and their members; you would do well to pay him respect!
It's not. The AMD Geode x86 chips were sold as sub 1W chips for years and exist in a variety of embedded products.
The real question here is when Intel dumps them.
Have you ever used one of those Geode CPUs?
I guess not, otherwise you wouldn't comment on them.
How to win friends
and influence people.
I have, what's your point? It's an x86 CPU that's existed in the last decade, hitting power levels you claim are impossible.
BTW, Intel just released some new dual core Atoms that consume less than 3W for the cpu...
I prefer to speak about facts.
Nice that you speak about them, because you sure don't post anything about them.
You can't afford to have me do the work for you. Sorry, buddy.