So Intel is not better in every metric after all, i.e. AMD will likely be better on perf/cost.
For certain scenarios, it's not worth using AMD processors even if someone gives them away for free.
So Intel is not better in every metric after all, i.e. AMD will likely be better on perf/cost.
All you've done is quote the maximum possible short-burst Turbo OC voltage, not its normal operating voltage. Either you didn't bother to read the article you're quoting, or you simply aren't honest enough to quote the context of the single images you're posting (as you've been caught doing before):
"Notice however that the maximum Turbo clocks are the same - 2.60 GHz. Intel told us months ago that Broadwell-Y was designed to run at extremely low power and clock speeds when idle but still reach high performance and high clock speeds on demand, when needed, in short bursts. Most importantly, note the significant TDP difference between these two processors. The Core M 5Y70 only requires a thermal solution designed for 4.5 watts. The Core i5-4200U required a 15 watt system design - more than 3x the potential heat to dissipate.
The Yoga 3 Pro with the Core M 5Y70 is able to pull out a result that is 8% longer than the Core i5-4200U. That's great and all, but is made even more impressive when you learn that the Yoga 3 Pro has a 22% smaller battery in it! Lenovo shipped the latest Yoga machine with a 44 Whr battery compared to the 54 Whr battery found in last year's Yoga 2 Pro."
http://www.pcper.com/reviews/Proces...ance-Testing-Broadwell-Y/Power-Consumption-an
Nice noticeable increase in battery life for a chip that (according to you) "uses more power" based solely on its max Turbo OC voltage state that it spends less than 1% of its life in. :thumbsdown:
So Intel is not better in every metric after all, i.e. AMD will likely be better on perf/cost.
Also, the savings on perf/watt of course must be weighed against the cost of the CPU itself. If CPU_A is much more expensive than CPU_B, then the TCO of CPU_A may still not be lower than that of CPU_B, even if CPU_A has better perf/watt, when calculated over its lifetime.
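Just to make that concrete, here's a minimal sketch of the trade-off; every price, wattage and electricity rate below is made up for illustration and doesn't come from any real part:

```python
# Rough TCO sketch: purchase price plus electricity over a service life.
# All figures here are invented for illustration only.
def tco(purchase_price, avg_watts, years=4, dollars_per_kwh=0.10):
    hours = years * 365 * 24
    energy_cost = (avg_watts / 1000.0) * hours * dollars_per_kwh
    return purchase_price + energy_cost

cpu_a = tco(purchase_price=2500, avg_watts=120)  # pricier but more efficient
cpu_b = tco(purchase_price=800, avg_watts=160)   # cheaper, less efficient
print(cpu_a, cpu_b)  # ~2920 vs ~1361: better perf/watt doesn't guarantee lower TCO
```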
Kumar needs to stop giving interviews. He's horrible.
http://seekingalpha.com/article/274...lobal-technology-conference-transcript?page=3
Does technology remain important? Absolutely, and we'll continue to transition, and we have our FinFET designs well underway, but we won't be the first user at the bleeding edge of any new technology node. You will see us be a very, very fast follower, so we're right on track with our FinFET designs, and what you will see next year is really 28-nanometer and 20-nanometer products from AMD.
As we go forward, we want to leverage our expertise. So we did announce earlier this year that we have an ARM architectural license, and so, on ARMv8, we're designing and are on track to have that core, code-named K12, completed and available for shipment in designs come 2016.
For us, as a very, very experienced company in developing 64-bit compute engines, we can offer both ARM and x86.
The last part, x86, is very clearly Zen. Their next CPU design will focus on the server market first before hitting the consumer market.
Ashraf emailed AMD. Apparently Devinder misspoke, and K12 is on a FinFET process, not 28nm: http://semiaccurate.com/forums/showthread.php?t=8379
(Sorry to steal your thunder Ashraf)
Why would they chase the 2500K? You need to ask yourself why that's relevant before even asking it. In 2016, Sandy Bridge will be 5 years old.
So going from 28nm in 2015 to FinFET in 2016.
More evidence that GF 20nm was cancelled outright, in favor of licensed 14nm FinFET from Samsung.
Because it is a very simple goal. If they cannot design a CPU that will scale up to and outperform a 5-year-old 2500K (perf/watt), then they shouldn't even bother bringing that CPU to market, regardless of what GPU it has attached to it. And it has to be an across-the-board outperformer, not 7% better at some things and 50% worse at others. It has to outperform a 2500K the way a 4690K outperforms a 2500K.
The last part, x86, is very clearly Zen. Their next CPU design will focus on the server market first before hitting the consumer market.
They did that 4 years ago; it was called Bulldozer.
That's about the best news I have heard in a while. Hopefully they can create something like what they did with Sledgehammer.
Even if we imagine it would be server oriented, it would be "moar cores". How is that the best news? Would you buy, say, a 16-core Kabini for your desktop?
So Intel is not better in every metric after all, i.e. AMD will likely be better on perf/cost.
Also, the savings on perf/watt of course must be weighed against the cost of the CPU itself. If CPU_A is much more expensive than CPU_B, then the TCO of CPU_A may still not be lower than that of CPU_B, even if CPU_A has better perf/watt, when calculated over its lifetime.
Stubbornness is not an argument. I guess those numbers hurt your brand preference; it's you who is relying on myths and deflections as a means to pathetically negate the numbers.
So in other news, nothing has changed for the neutral viewer. Now let's see how this is spun by the usual suspects so it can align with their "AMD is doomed" mantra.
Not often for servers. Electricity, cooling, and hardware costs are only part of the equation. Companies buy products based on potential revenue (performance) too if they are smart (or it is required for the job). Efficiency is extremely important as it allows for less cooling and allows more servers per unit floor space.
As for potential profit:

Say you are running simulations and putting a product through strenuous computer testing, which is what most companies do rather than testing a physical product. System A costs $100,000 and System B costs $200,000; System B is 50% faster than A. The five-man team running the simulations makes $100k a year each as highly trained professionals. They all have personal computers and share a mainframe (System A or B) for simulation tasks.

Buying System A therefore results in a total departmental cost of something like $750,000 that year (adding other costs), while System B would cost management $850,000 that year. But System B is 50% faster, meaning that if the speedup allowed the team to get more than about 13% more work done that year, they would come out ahead in terms of profits. For the sake of comparison, say the faster computer allowed them to get 20% more work done and that all work is equally weighted in terms of profits; the company is then ahead. Next year the same system is used and costs drop to $650k for either choice, except that if you opted for B you still get 20% more work done, and your profits increase accordingly. Regardless of perf/physical cost, performance offers a premium that enables revenue to be made.
As a simple example, it is a wise investment to give a single engineer making $100k a year a $2,000 workstation if it enables him to accomplish a project in 11 months vs. 1 year with his old system.
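For what it's worth, the break-even arithmetic from the simulation example works out like this (the system prices, salaries and overhead are just the hypothetical figures from that example):

```python
# Break-even check: a faster system pays off if the extra work it enables
# outweighs its extra cost. All figures are the hypothetical ones from above.
def breakeven_extra_work(system_a_cost, system_b_cost, fixed_costs):
    total_a = system_a_cost + fixed_costs
    total_b = system_b_cost + fixed_costs
    return total_b / total_a - 1.0  # fraction of extra output needed to justify B

extra = breakeven_extra_work(system_a_cost=100_000, system_b_cost=200_000,
                             fixed_costs=650_000)  # 5 salaries plus other costs
print(f"System B pays off if it yields more than {extra:.0%} more work")  # ~13%
```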
I wish people had a better understanding of how the costs of space at a datacenter work.
When I get space at a datacenter, I get X number of amps for every Y number of square feet; X and Y are linearly correlated. If I need more amps, I have to get more square feet, even if I'm not using the physical space, and occasionally even if that extra space is on the other side of the datacenter. Considering my hardware costs pale in comparison to my datacenter costs, performance density per watt matters more to me than anything else.
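As a rough sketch of what that pricing model does to the bill (the amps-per-square-foot ratio and the rate below are invented, not from any real colo contract):

```python
# Floor space is billed in proportion to the amps you draw, so power draw,
# not rack space, ends up driving the bill. All numbers are hypothetical.
AMPS_PER_SQFT = 2.0          # assumed allocation ratio
SQFT_COST_PER_MONTH = 300.0  # assumed price per square foot

def monthly_space_cost(total_amps):
    sqft_billed = total_amps / AMPS_PER_SQFT  # more amps -> more floor space billed
    return sqft_billed * SQFT_COST_PER_MONTH

print(monthly_space_cost(40 * 3.0))  # 40 servers at 3 A each -> $18,000/month
print(monthly_space_cost(40 * 2.0))  # same count at 2 A each -> $12,000/month
```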
Far from all datacenters work that way.
I haven't had a single center yet where I couldn't order what I needed without having to get something "extra".
How did you make that conclusion?
AMD is gone in the server segment today. Betting the house on that is suicide.
Not sure of the type of density you're working with, but for me it entirely depends on the cooling capacity of the datacenter itself. Datacenters can only dissipate so much heat, so they take their total capacity over their ft² and give you a max amps/ft². We run most of our stuff on the borderline and are always looking to increase compute without increasing power draw. For us, we'll pay a significant hardware premium for dense, low-power options to minimize footprint and power draw.
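Here's a toy comparison of how that premium can pay off under a hard power cap; the node wattages, prices and throughput numbers are all made up for the sake of the example:

```python
# Under a fixed power/cooling budget, node count is limited by watts, so a
# pricier low-power node can deliver more total compute. Numbers are invented.
POWER_BUDGET_W = 20_000  # assumed cap the facility will let us dissipate

def capacity(watts_per_node, perf_per_node, price_per_node):
    nodes = POWER_BUDGET_W // watts_per_node
    return nodes * perf_per_node, nodes * price_per_node

dense_perf, dense_cost = capacity(watts_per_node=250, perf_per_node=110,
                                  price_per_node=9_000)
cheap_perf, cheap_cost = capacity(watts_per_node=400, perf_per_node=120,
                                  price_per_node=6_000)
print(dense_perf, dense_cost)  # 8800 perf for $720,000
print(cheap_perf, cheap_cost)  # 6000 perf for $300,000: less compute in the same envelope
```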
This is what Intel talked about at their investor meeting as well, I believe: that they saw many of their server-purchasing clients moving to more high-end solutions because of this.
This is why I wonder whether AMD can be successful in the server market against Intel when they'll still be on 28nm vs. Intel's Skylake servers on a 14nm process that will be much more mature by then. Even when AMD gets a 14nm competitor out, Intel will be on 10nm with Cannonlake by that time.
