
AMD claims 25x energy efficiency improvement in the next six years

witeken

Diamond Member
AMD claims it can deliver 25x energy efficiency improvement in the next six years

[Image: AMD-Trend.png]


I guess AMD really has some magic tricks. Or do they...

First things first — AMD isn’t trying to reduce power consumption by 25x in a no-holds-barred scenario, and it’s not trying to cut idle power by 25x either. Its goal is to reduce the power consumption of “typical” workloads by a factor of 25 — and that’s somewhat more achievable. According to AMD, GPU acceleration and HSA are just the beginning — long term the company wants to explore using new specialized cores for particular workloads.

So it's sort of like what they did with their performance/TDP with Puma.

AMD’s central argument is that a combination of smart scheduling, intelligent throttling, fine-grained power management, and specialized heterogeneous cores can deliver an efficiency improvement that’s far larger than what we’ll see from smaller process nodes over the same time frame.

We'll see.
 

i was idly wondering the other day if the amount of xtors that have to be thrown at a general purpose processor to increase speed a fraction of a percent would be better spent on specialized hardware for specific, but somewhat common, problems. that's basically the approach taken in the phone SoC world. seems that's where AMD is headed.
 
How did they get from the blue line (10000x) in 2009 to the green line (~500,000x) in ~2012? Are they saying that they already achieved 50x efficiency in 3 years with some invisible line in between?
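The chart's multipliers can be sanity-checked by converting them into equivalent annual rates. A quick back-of-envelope, using only the figures quoted in this thread (the 25x/6-year goal and the suspected ~50x/3-year jump on the chart):

```python
# Back-of-envelope: convert a cumulative efficiency multiplier into the
# equivalent yearly improvement rate it implies (multiplier ** (1/years)).

def annual_rate(multiplier: float, years: int) -> float:
    """Geometric-mean yearly gain implied by a cumulative multiplier."""
    return multiplier ** (1 / years)

# AMD's stated goal: 25x over 6 years.
goal = annual_rate(25, 6)
print(f"25x over 6 years -> {goal:.2f}x per year")  # ~1.71x/yr

# The jump questioned above: ~50x over ~3 years (10,000x -> ~500,000x).
jump = annual_rate(50, 3)
print(f"50x over 3 years -> {jump:.2f}x per year")  # ~3.68x/yr
```

The ~3.68x/yr the 2009-to-2012 jump would imply is far steeper than the ~1.71x/yr of the stated goal, which is why that invisible line looks odd.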
 
By "typical workload," do they mean "the majority of the time, the CPU is sitting around doing nothing, so that's our typical workload"? Then I can see how voltage regulators and voltage/frequency scaling can help a ton. BUT if they mean common CPU-intensive workloads, the gains have to come from workloads that benefit from accelerators/GPUs. I can't imagine how you could achieve anywhere close to 25x efficiency (perf/power) without that.
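The accelerator argument comes down to energy per task being power times time-to-completion. A toy example with invented numbers (not AMD's figures) shows how a hotter-but-faster fixed-function path can still win big on perf/power:

```python
# Toy illustration (all numbers invented): the energy to finish one task
# is power draw multiplied by time-to-completion.

def energy_joules(power_watts: float, seconds: float) -> float:
    return power_watts * seconds

# CPU software path: modest power draw, but slow.
cpu = energy_joules(power_watts=15.0, seconds=10.0)   # 150 J

# Accelerator path: draws more power, but finishes far sooner and lets
# the chip race back to idle.
accel = energy_joules(power_watts=25.0, seconds=0.5)  # 12.5 J

print(f"efficiency gain: {cpu / accel:.0f}x")         # 12x
```

Stack a couple of these offloads with better idle management and a 25x target on "typical" workloads starts to look arithmetically possible, if not easy.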
 
AMD has never had a shortage of "plans". Am looking forward to being inundated with a bevy of "executions" on said plans, including that of the thread's title.
 

Indeed. If they genuinely managed to achieve this, they would be well ahead of their competitors, which I think is why it sounds fantastical rather than practical coming from a company that has trouble delivering anywhere near its claims.
 
My first thought was "a 25x improvement in H.265 / 4K decoding and encoding seems legit".
I'm quite sure those will count as typical workloads in a few years.
 
i was idly wondering the other day if the amount of xtors that have to be thrown at a general purpose processor to increase speed a fraction of a percent would be better spent on specialized hardware for specific, but somewhat common, problems. that's basically the approach taken in the phone SoC world. seems that's where AMD is headed.


Yeah...it's not an unreasonable goal either IMO. Think about how much stuff your CPU does that can be better offloaded to an ASIC. There's no reason we need a general purpose CPU to decode images when almost everything is just a jpg, png or gif. Same with encryption, other types of compression, etc. It sounds like more of a software challenge to get everyone on board with supporting hardware acceleration for just about everything.
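The "software challenge" here is mostly a dispatch problem: apps need one API that transparently uses the hardware block when it exists and falls back to the CPU when it doesn't. A minimal sketch of that pattern, with all names (hw decoder registry, etc.) hypothetical rather than any real API:

```python
# Sketch of transparent hardware-acceleration dispatch: callers use one
# decode() entry point; the runtime picks an ASIC path when one has been
# registered, else falls back to software. All names here are invented.

from typing import Callable, Dict

# Registry of hardware decoders discovered at startup (empty = CPU only).
_hw_decoders: Dict[str, Callable[[bytes], bytes]] = {}

def _sw_decode(data: bytes) -> bytes:
    # Stand-in for a software (CPU) decoder.
    return b"pixels:" + data

def register_hw_decoder(fmt: str, fn: Callable[[bytes], bytes]) -> None:
    _hw_decoders[fmt] = fn

def decode(fmt: str, data: bytes) -> bytes:
    """Use the hardware path when available, else fall back to the CPU."""
    hw = _hw_decoders.get(fmt)
    if hw is not None:
        try:
            return hw(data)
        except RuntimeError:
            pass  # e.g. unsupported profile: fall through to software
    return _sw_decode(data)

# With nothing registered, everything silently takes the CPU path --
# which is why ecosystem buy-in matters as much as the silicon itself.
print(decode("jpeg", b"\xff\xd8"))
```

The hard part isn't the registry; it's getting every OS, browser, and codec library to route through an interface like this instead of their own CPU loops.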
 
i was idly wondering the other day if the amount of xtors that have to be thrown at a general purpose processor to increase speed a fraction of a percent would be better spent on specialized hardware for specific, but somewhat common, problems. that's basically the approach taken in the phone SoC world. seems that's where AMD is headed.
That's been true for a long time. The general problem has been this:
"Look, we made a special string processor that's 50x faster!"
"And now it will take us 100x the time to do anything with it. No thanks."

Very high level languages becoming pervasive has helped already, along with more general accelerators, and will only help more in the future. But, it's not like Intel, Samsung, Qualcomm, etc. aren't already working on most of that AMD is talking about, so...
 
That's been true for a long time. The general problem has been this:

"Look, we made a special string processor that's 50x faster!"

"And now it will take us 100x the time to do anything with it. No thanks."



Very high level languages becoming pervasive has helped already, along with more general accelerators, and will only help more in the future. But, it's not like Intel, Samsung, Qualcomm, etc. aren't already working on most of that AMD is talking about, so...


...it'll probably happen? This will never fly unless everyone is on board.
 
In 2020 I expect mainstream desktop CPUs to have 128GB of nonvolatile RAM on package, with 800GB/s of bandwidth. Even more importantly, I expect caches to be nonvolatile as well. That is the easiest way to deliver 25x power efficiency.

That's why I've been pushing the idea of a split NAND/DRAM bus. Once we ramp up NVRAM, there is no need for two separate types of memory, so they should start sharing the same interface now; then, when NVRAM parts become available, they will fit right into an existing storage paradigm.

Microsoft is way, way behind on this. Imagine how terribly Windows would run on a system with 128GB of on-package nonvolatile RAM: copying 10-20GB of data from one area (the C drive, which would be a partition of the NVRAM) to actual "RAM" (which would be another partition of the same NVRAM). Yes, Microsoft really would be that dumb. It makes my head spin imagining Microsoft copying data to and from the same NVRAM when you launch an application.
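The copy-versus-map distinction above can be illustrated with plain mmap today: with byte-addressable NVRAM, an OS could map an application's stored image in place (execute-in-place style) instead of copying it into a separate "RAM" region first. A small sketch of the two access patterns:

```python
# Copy vs. map: read() duplicates stored bytes into a second buffer,
# while mmap exposes them in place. On byte-addressable NVRAM the mapped
# path is the whole point -- no bulk copy at application launch.

import mmap
import os
import tempfile

payload = b"application image" * 1000  # stand-in for a program's data

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(payload)
    path = f.name

# Copy path: the entire file is duplicated into a new buffer.
with open(path, "rb") as f:
    copied = f.read()

# Map path: the file's bytes are viewed in place; we touch only what we need.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as view:
        first = bytes(view[:17])

os.remove(path)
print(copied[:17] == first == b"application image")  # True
```

On today's hardware the mapped pages still live in DRAM via the page cache; the NVRAM argument is that the same semantics could apply with no second copy of the data at all.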
 

While I don't disagree with this, MS doesn't push technologies much in the industry. I am pretty sure they would come up with an acceptable solution if such hardware were coming down the pike. They are a software company... look to AMD/Intel to drive a change like the one you describe. And they would only do so if the market pushed them to, which does appear to be happening to some degree.
 

How is Microsoft being held to task for technology that's not expected until 2020?
 

I definitely don't believe they should be held to task. I was (maybe not clearly) agreeing that the computing approach could deliver the benefits described, given a unified-memory, homogeneous-bandwidth design.

As I stated, if this sort of situation were to occur, MS would likely solve it with an acceptable solution rather than a brute-force, poor implementation. Just my $0.02.
 