[AMD] K12 will be on 28nm


exar333

Diamond Member
Feb 7, 2004
8,518
8
91
All you've done is quote the maximum possible short-burst Turbo OC voltage, not its normal operating voltage. Either you didn't bother to read the article you're quoting, or you simply aren't honest enough to quote the context of the single images you're posting (as you've been caught doing before):

"Notice however that the maximum Turbo clocks are the same - 2.60 GHz. Intel told us months ago that Broadwell-Y was designed to run at extremely low power and clock speeds when idle but still reach high performance and high clock speeds on demand, when needed, in short bursts. Most importantly, note the significant TDP difference between these two processors. The Core M 5Y70 only requires a thermal solution designed for 4.5 watts. The Core i5-4200U required a 15 watt system design - more than 3x the potential heat to dissipate.

The Yoga 3 Pro with the Core M 5Y70 is able to pull out a result that is 8% longer than the Core i5-4200U. That's great and all, but is made even more impressive when you learn that the Yoga 3 Pro has a 22% smaller battery in it! Lenovo shipped the latest Yoga machine with a 44 Whr battery compared to the 54 Whr battery found in last years Yoga 2 Pro."

http://www.pcper.com/reviews/Proces...ance-Testing-Broadwell-Y/Power-Consumption-an

Nice noticeable increase in battery life for a chip that (according to you) "uses more power", based solely on its max Turbo OC voltage state, which it spends less than 1% of its life in. :thumbsdown: :rolleyes:

Also note that the Yoga 3's Core M is NOT 4.5W but actually 3.5W. Lenovo under-spec'd the CPU to lower the TDP. According to the reviews, this probably affects burst processing and GPU performance the most. 1W may not sound like a lot, but that's a ~22% smaller TDP, and roughly 22% lower performance if you scale perf/watt equally.
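The arithmetic behind that claim, assuming sustained performance scales linearly with the TDP budget (an assumption, not a measurement), can be sketched as:

```python
# Back-of-envelope: relative performance lost when a 4.5 W part is
# configured down to 3.5 W, assuming perf scales linearly with TDP.
spec_tdp = 4.5   # watts: Core M 5Y70 as specified
used_tdp = 3.5   # watts: as reportedly configured in the Yoga 3 Pro
perf_loss = 1 - used_tdp / spec_tdp
print(f"{perf_loss:.0%}")  # ~22% less sustained headroom
```

Linear scaling is the simplest possible model; real chips lose less at the top of the curve, so 22% is an upper bound.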

Sorry - this is getting a little off-topic from the OP...

Not sure what to think about K12. Is it 28nm or not? It seems a little unclear; hopefully clearer details come soon.
 

positivedoppler

Golden Member
Apr 30, 2012
1,149
256
136
Kumar needs to stop giving interviews. He's horrible.
http://seekingalpha.com/article/274...lobal-technology-conference-transcript?page=3

Is technology remain important? Absolutely, and we’ll continue to transition and we have our FinFET designs well underway, but we won’t be the first user, the bleeding edge of any new technology node. You will see us be a very, very fast follower, so we're right on track with our FinFET designs and what you will see next year is a really 28-nanometer and 20- nanometer products from AMD.

As we go forward, we want to leverage our expertise. So we did announce earlier this year that we have an ARM architectural license and so in ARM D8, we're designing and are on track to have that core code name K12 completed and available for shipment in designs come 2016.

For us is a very, very experienced company in developing 64-bit compute engines, we can offer both ARM and x86.

The last part, x86, is very clearly Zen. Their next CPU design will focus on the server market first before hitting the consumer market.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
So Intel is not better in every metric after all. I.e. AMD will likely be better on perf/cost.

Also, the savings from perf/watt must of course be weighed against the cost of the CPU itself. If CPU_A is much more expensive than CPU_B, then the TCO of CPU_A will not be lower than for CPU_B even if CPU_A has better perf/watt, when calculated over its lifetime.

Not often for servers. Electricity, cooling, and hardware costs are only part of the equation. Companies buy products based on potential revenue (performance) too if they are smart (or it is required for the job). Efficiency is extremely important as it allows for less cooling and allows more servers per unit floor space.

As for potential profit.

Say you are running simulations, putting a product through strenuous computational testing, which is what most companies do rather than testing a physical product. System A costs $100,000 and System B costs $200,000; System B is 50% faster than A. The five-man team running the simulations are highly trained professionals making $100k a year each. They all have personal computers and share a mainframe (System A or B) for simulation tasks. Buying System A therefore results in a total departmental cost of roughly $750,000 that year (adding other costs), while System B would cost management $850,000. Since System B is 50% faster, the team comes out ahead in terms of profit if the speedup lets them get more than ~13% more work done that year. For the sake of comparison, say the faster computer lets them get 20% more work done, and that all work is equally weighted in terms of profit: the company is ahead. The next year the same system is used and the costs are $650k either way, except that with System B you still get 20% more work done, so profits increase accordingly. Regardless of perf per purchase dollar, performance offers a premium that enables revenue to be made.

As a simple example it is a wise investment to give a single engineer making $100k a year a $2000 workstation if it enables him to accomplish a project in 11 months vs. 1 year with his old system.
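The break-even arithmetic in the post above can be sketched in a few lines of Python (the dollar figures are the hypothetical ones from the example):

```python
def breakeven_uplift(cost_a, cost_b, shared_costs):
    """Minimum extra output (as a fraction) the pricier system B must
    deliver for output per departmental dollar to match system A."""
    total_a = shared_costs + cost_a
    total_b = shared_costs + cost_b
    return total_b / total_a - 1.0

# $100k vs $200k systems, ~$650k of salaries and other costs either way.
uplift = breakeven_uplift(100_000, 200_000, 650_000)
print(f"{uplift:.1%}")  # ~13.3%: above this, System B wins on profit
```

Note how the large shared salary cost shrinks the required uplift: the system price doubles, but the break-even point is only ~13%, which is the post's core argument.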
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Kumar needs to stop giving interviews. He's horrible.
http://seekingalpha.com/article/274...lobal-technology-conference-transcript?page=3

Is technology remain important? Absolutely, and we’ll continue to transition and we have our FinFET designs well underway, but we won’t be the first user, the bleeding edge of any new technology node. You will see us be a very, very fast follower, so we're right on track with our FinFET designs and what you will see next year is a really 28-nanometer and 20- nanometer products from AMD.

As we go forward, we want to leverage our expertise. So we did announce earlier this year that we have an ARM architectural license and so in ARM D8, we're designing and are on track to have that core code name K12 completed and available for shipment in designs come 2016.

For us is a very, very experienced company in developing 64-bit compute engines, we can offer both ARM and x86.

The last part, x86, is very clearly Zen. Their next CPU design will focus on the server market first before hitting the consumer market.

That's about the best news I have heard in a while. Hopefully they can create something like what they did with Sledgehammer. :)
 

sm625

Diamond Member
May 6, 2011
8,172
137
106
Why would they chase the 2500k? You need to ask yourself why that's relevant before even asking it. In 2016, Sandybridge is 5 years old.

Because it is a very simple goal. If they cannot design a CPU that will scale up to and outperform a 5-year-old 2500K (perf/watt), then they shouldn't even bother bringing that CPU to market, regardless of what GPU is attached to it. And it has to be an across-the-board outperformer, not 7% better at some things and 50% worse at others. It has to outperform a 2500K the way a 4690K outperforms a 2500K.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
So going from 28nm in 2015 to FinFET in 2016.

More evidence that GF 20nm was cancelled outright, in favor of licensed 14nm FinFET from Samsung.

Looking at past execution, perhaps one of the saner or easier decisions.

Mubadala has bailed AMD out many times with cash, so besides the WSA, AMD is for all practical purposes owned by Mubadala. Mubadala is a politically controlled organization with political goals; a few years back, for example, they wanted to place a brand-new fab in the middle of the desert. That's why decisions are very unpredictable, and extremely volatile for an area like this. Of course it weakens the business thinking in both GF and AMD, but that's the name of the game.

As for investing one's pension savings in AMD: one has to understand that one is basically investing in a political organization with non-business goals. It can't be any worse in my book, but hey, it smells like something new is underway.
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
Because it is a very simple goal. If they cannot design a CPU that will scale up to and outperform a 5-year-old 2500K (perf/watt), then they shouldn't even bother bringing that CPU to market, regardless of what GPU is attached to it. And it has to be an across-the-board outperformer, not 7% better at some things and 50% worse at others. It has to outperform a 2500K the way a 4690K outperforms a 2500K.


They did that 4 years ago; it was called Bulldozer.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
The last part, x86, is very clearly Zen. Their next CPU design will focus on the server market first before hitting the consumer market.

How did you reach that conclusion?

AMD is gone in the server segment today. Betting the house on that is suicide.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
That's about the best news I have heard in a while. Hopefully they can create something like what they did with Sledgehammer. :)

Even if we imagine it would be server oriented, it would be "moar cores". How is that the best news? Would you buy, say, a 16-core Kabini for your desktop?
 

tential

Diamond Member
May 13, 2008
7,348
642
121
Even if we imagine it would be server oriented, it would be "moar cores". How is that the best news? Would you buy, say, a 16-core Kabini for your desktop?

Of course some of these users would...

More cores still sell, regardless of performance.

The issue here isn't actual performance, it's marketing. AMD could sell a wide variation of products. I know I could sell them to average consumers.

But you're talking about it being an actually good product... so no.
 

PPB

Golden Member
Jul 5, 2013
1,118
168
106
So in other news, nothing has changed for the neutral viewer. Now let's see how this is spun by the usual suspects so it can align with their "AMD is doomed" mantra.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Even if we imagine it would be server oriented, it would be "moar cores". How is that the best news? Would you buy, say, a 16-core Kabini for your desktop?

Servers need to balance ST and MT performance for a variety of workloads. They also need to be optimized for perf/watt (something BD was NOT).

BD was the OPPOSITE of what a good server CPU should be...

A Kabini-based server option is more of a 'niche' server solution IMHO rather than a standard offering.

Edit: What's the best desktop CPU? An adapted Xeon for the socket 2011-3 platform. Many of the best CPUs of the past 10-12 years have been server descendants: A64, S1366, S2011, S2011-3. The original Conroe was probably the exception, because Intel was then releasing new nodes for consumer products before server. Now the focus is more on ultra-mobile and mobile, then server, then desktop (enthusiast).
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
So Intel is not better in every metric after all. I.e. AMD will likely be better on perf/cost.

Also, the savings from perf/watt must of course be weighed against the cost of the CPU itself. If CPU_A is much more expensive than CPU_B, then the TCO of CPU_A will not be lower than for CPU_B even if CPU_A has better perf/watt, when calculated over its lifetime.

Actually the TCO of AMD's server line is horrible due to software licensing costs. A large portion of enterprise software is licensed by the CPU core. AMD needing more cores to give the same application performance drives TCO through the roof.

The actual cost of the CPU in server environments is nearly a rounding error.
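A toy comparison shows how per-core licensing can dominate TCO; every number below is made up for illustration, and real enterprise license terms vary widely:

```python
def server_tco(cpu_cost, cores, license_per_core_yr, years=3, power_yr=300):
    """Toy TCO in dollars: hardware + per-core software licenses + power.
    All inputs are hypothetical."""
    return cpu_cost + cores * license_per_core_yr * years + power_yr * years

# Chip X: 8 fast cores. Chip Y: 16 slower cores at half the price,
# needing twice the cores for the same application performance.
tco_x = server_tco(cpu_cost=2000, cores=8,  license_per_core_yr=2000)
tco_y = server_tco(cpu_cost=1000, cores=16, license_per_core_yr=2000)
print(tco_x, tco_y)  # 50900 97900: the "cheaper" chip costs ~2x to run
```

With licenses at this scale, the $1,000 saved on the CPU really is a rounding error, as the post says.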
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Stubbornness is not an argument. I guess those numbers hurt your brand preference; it's you who are relying on myths and deflections as a means to pathetically negate numbers.

Still looking for you to provide proof that a chip design can't result in a 10% power consumption reduction. What you posted doesn't show anything of the sort.

Just admit you were wrong before your stubbornness makes your posting look even worse.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
So in other news, nothing has changed for the neutral viewer. Now let's see how is this spun by the usual suspects so it can align to their "AMD is doomed" mantra.

Nobody claims they are doomed. Just that they are irrelevant and are heading the VIA route.
 

EightySix Four

Diamond Member
Jul 17, 2004
5,122
52
91
Not often for servers. Electricity, cooling, and hardware costs are only part of the equation. Companies buy products based on potential revenue (performance) too if they are smart (or it is required for the job). Efficiency is extremely important as it allows for less cooling and allows more servers per unit floor space.

As for potential profit.

Say you are running simulations, putting a product through strenuous computational testing, which is what most companies do rather than testing a physical product. System A costs $100,000 and System B costs $200,000; System B is 50% faster than A. The five-man team running the simulations are highly trained professionals making $100k a year each. They all have personal computers and share a mainframe (System A or B) for simulation tasks. Buying System A therefore results in a total departmental cost of roughly $750,000 that year (adding other costs), while System B would cost management $850,000. Since System B is 50% faster, the team comes out ahead in terms of profit if the speedup lets them get more than ~13% more work done that year. For the sake of comparison, say the faster computer lets them get 20% more work done, and that all work is equally weighted in terms of profit: the company is ahead. The next year the same system is used and the costs are $650k either way, except that with System B you still get 20% more work done, so profits increase accordingly. Regardless of perf per purchase dollar, performance offers a premium that enables revenue to be made.

As a simple example it is a wise investment to give a single engineer making $100k a year a $2000 workstation if it enables him to accomplish a project in 11 months vs. 1 year with his old system.

Not only this, but costs in a datacenter are completely different.

When I get space at a datacenter, I get X number of amps for every Y number of square feet; X and Y are linearly correlated. If I need more amps, I have to get more square feet, even if I'm not using the physical space, and occasionally even if that extra space is on the other side of the datacenter. Considering my hardware costs pale in comparison to my datacenter costs, performance density per watt matters more to me than anything else.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
I wish people had a better understanding of how the costs of space at a datacenter work.

When I get space at a datacenter, I get X number of amps for every Y number of square feet; X and Y are linearly correlated. If I need more amps, I have to get more square feet, even if I'm not using the physical space, and occasionally even if that extra space is on the other side of the datacenter. Considering my hardware costs pale in comparison to my datacenter costs, performance density per watt matters more to me than anything else.

Far from all datacenters work that way.

I haven't had a single center yet where I couldn't order what I needed without having to get something "extra".
 

EightySix Four

Diamond Member
Jul 17, 2004
5,122
52
91
Far from all datacenters work that way.

I haven't had a single center yet where I couldn't order what I needed without having to get something "extra".

Not sure what kind of density you're working with, but for me it depends entirely on the cooling capacity of the datacenter itself. Datacenters can only dissipate so much heat, so they take their max power over total ft^2 and give you a max amps/ft^2. We run most of our stuff on the borderline and are always looking to increase compute without increasing power draw. For us, we'll pay a significant hardware premium for dense, low-power options to minimize footprint and power draw.
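The power-bounded floor-space model described in the last few posts can be sketched as follows; the voltage and amps-per-square-foot figures are hypothetical, not from the thread:

```python
def floor_space_sqft(total_watts, volts=208, amps_per_sqft=10):
    """Square feet of leased space required when the amps/ft^2 cap,
    not physical footprint, is the binding constraint.
    Voltage and density cap are hypothetical example figures."""
    return total_watts / volts / amps_per_sqft

# Same aggregate throughput two ways (made-up numbers):
hot_boxes  = floor_space_sqft(40 * 500)  # 40 servers at 500 W each
cool_boxes = floor_space_sqft(80 * 200)  # 80 low-power servers at 200 W each
print(f"{hot_boxes:.1f} vs {cool_boxes:.1f} sq ft of power budget")
```

Under a model like this, total watts, not rack count, sets the bill, which is why a denser, lower-power fleet can justify a hardware premium.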
 

tential

Diamond Member
May 13, 2008
7,348
642
121
Not sure what kind of density you're working with, but for me it depends entirely on the cooling capacity of the datacenter itself. Datacenters can only dissipate so much heat, so they take their max power over total ft^2 and give you a max amps/ft^2. We run most of our stuff on the borderline and are always looking to increase compute without increasing power draw. For us, we'll pay a significant hardware premium for dense, low-power options to minimize footprint and power draw.

This is what Intel talked about at their investor meeting as well, I believe: they saw many of their server customers moving to more high-end solutions because of this.

This is why I wonder whether AMD can be successful in the server market against Intel while they're still on 28nm versus Intel's Skylake servers on a 14nm process that will be much more mature by then. Even when AMD gets a 14nm competitor out, Intel will be on 10nm with Cannonlake.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
This is what Intel talked about at their investor meeting as well, I believe: they saw many of their server customers moving to more high-end solutions because of this.

This is why I wonder whether AMD can be successful in the server market against Intel while they're still on 28nm versus Intel's Skylake servers on a 14nm process that will be much more mature by then. Even when AMD gets a 14nm competitor out, Intel will be on 10nm with Cannonlake.

Their only chance IMO is SeaMicro-derived many-small-core servers, e.g. for customers who need maximum RAM and network bandwidth per blade at the lowest power possible: memcached front ends, static web server front ends, CouchDB or other scale-out NoSQL nodes, cold storage, that sort of thing, where the quantity of memory/storage at the lowest power is what matters. Seattle and its ilk might succeed here, but it depends on their channel execution and their ability to follow up on Seattle. As long as their demo showing Hadoop running on Seattle wasn't just a one-off stunt, and is actually an indication that they're dedicating real resources to porting popular open-source backend frameworks to ARMv8 and Seattle specifically, they have a chance. AMD has never been particularly focused on software, though, so I'm not super hopeful. Either that, or they somehow begin iterating a lot more quickly on the Cat cores; they had a good thing there for microservers and they are letting it languish.
 

Shehriazad

Senior member
Nov 3, 2014
555
2
46
All I can say is meeeeeeh.

On the CPU side, AMD might actually drop below 3% of the desktop market in 2015. A real recovery seems unlikely unless they suddenly brought out some magic 14nm/DDR4/HBM Zen with 8-16 cores clocked at 4GHz and an IPC at least triple that of Piledriver.

2015 is pretty much all GPUs and consoles for AMD... not even sure why they're wasting their budget on the mobile market. I still think that's wasted money; you simply cannot compete with free.