How will AMD answer the challenge posed by Haswell?


AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Something I noticed from Intel's slides was that desktop Haswell will only come with GT2 graphics. The GT3 graphics that everyone says will catch up with Trinity won't be available on the desktop. Intel apparently doesn't buy into the AMD idea that a desktop iGPU is worth anything.

If that's true, then AMD's Kaveri will have no competition. Hell, even Haswell's GT2 won't be able to catch Trinity's graphics.
 

Abwx

Lifer
Apr 2, 2011
12,038
5,014
136
How is this an advantage to AMD? Is this something Intel doesn't have access to?

This will benefit the smaller firms most, AMD included, since they lacked the workforce back when time-consuming, semi-manual design was the rule.

For Intel this will surely mean technical layoffs, unless they extend their product portfolio to make room for this future excess capacity, a route they have already started down by entering the SSD and mobile device markets.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
21,134
3,671
126
AMD's response...

Our xDozer has greater IPC!
IPC is what's important, right? Not how fast the CPU is or how efficient it is.
If it doesn't do IPC numbers, it's not a real CPU!

Our Trinity offers superior IPC! Of course you'll never see it in normal activities, and it may feel slower than an Intel, but rest assured, your IPC numbers will ROCK!


I gave up on AMD for performance.
I only look at AMD for efficiency and value now.

AMD's chips, I feel, are a better bang for the buck versus the lower-end Intels.
On the higher end, though... it's like asking Yugo to make me a million-dollar exotic sports car to race against a Bugatti Veyron.

It just won't happen... not again in our lifetime, at least, I predict.

This will benefit the smaller firms most, AMD included, since they lacked the workforce back when time-consuming, semi-manual design was the rule.

I'm sorry, Intel has 3290423849023489230423 fabs for almost everything. They have fabs to make other fabs, even.

In my eyes AMD is not anywhere near Intel's league anymore because of their lack of fabs. Having internal fabs makes streamlining and production much more efficient. Having to wait on external fabs makes revisions and progress much slower.
 

Abwx

Lifer
Apr 2, 2011
12,038
5,014
136
In my eyes AMD is not anywhere near Intel's league anymore because of their lack of fabs. Having internal fabs makes streamlining and production much more efficient. Having to wait on external fabs makes revisions and progress much slower.

Surely they don't have access to vast manufacturing capacity, but as far as the revision process is concerned, it takes about two months to push a CPU through the fab, even in Intel's facilities.

Respinning dies is time- and workforce-consuming, and this is where automated tools help shorten the time-to-market process.
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
This will benefit the smaller firms most, AMD included, since they lacked the workforce back when time-consuming, semi-manual design was the rule.

For Intel this will surely mean technical layoffs, unless they extend their product portfolio to make room for this future excess capacity, a route they have already started down by entering the SSD and mobile device markets.

Heh, I hope not. I've started looking at what automated flows can do and, let me tell you, they kind of suck. If I had to summarize the pros and cons:

Pro for automation:
1) Able to get lots of architectural features in REALLY fast, but the features are limited due to design constraints.

Pro for custom:
2) Able to get only the top architectural features, but they are more robust implementations (able to do more).

I see there's huge value in automation, and I think either the tools have to improve a ton OR the performance lead has to be so great that you can start degrading a bunch of features to make them automation-friendly.
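(To make "limited due to design constraints" concrete: below is a toy Python sketch of the fixed timing budget an automated flow must close. Every number here is made up for illustration; the point is just that the same function might fit one cycle as hand-crafted circuits yet miss timing when mapped to standard cells, forcing the feature to be simplified or pipelined.)

# Toy static-timing budget check (illustrative only; all numbers made up).
# An automated flow maps logic to standard cells and must close timing at
# the target clock; a custom team can hand-craft faster circuits for the
# same function, so more logic fits in one cycle.

CLOCK_PERIOD_PS = 300   # hypothetical target cycle time
SETUP_MARGIN_PS = 20    # hypothetical flop setup + clock-skew margin

# Hypothetical per-level delays (ps) for the same compare-and-select logic.
STD_CELL_LEVELS = [60, 55, 70, 65, 50, 45]   # 6 levels of standard cells
CUSTOM_LEVELS = [90, 95, 80]                 # 3 levels of custom circuits

def meets_timing(level_delays):
    """Return (fits_in_cycle, critical_path_ps) for one pipeline stage."""
    critical_path = sum(level_delays)
    return critical_path + SETUP_MARGIN_PS <= CLOCK_PERIOD_PS, critical_path

for name, levels in (("automated", STD_CELL_LEVELS), ("custom", CUSTOM_LEVELS)):
    fits, path = meets_timing(levels)
    print(f"{name}: critical path {path} ps -> {'meets' if fits else 'MISSES'} timing")

# automated: critical path 345 ps -> MISSES timing
# custom:    critical path 265 ps -> meets timing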
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
If that's true, then AMD's Kaveri will have no competition. Hell, even Haswell's GT2 won't be able to catch Trinity's graphics.

Nothing in this thread has anything to do with graphics performance. This is about how AMD competes with Intel (and the rest of the industry) in low-power architectures.
 

pantsaregood

Senior member
Feb 13, 2011
993
37
91
AMD has surprised me before. In the 1990s, AMD was considered "cheap" and of poor quality. With K7, AMD sent Intel into a panic. With a fraction of Intel's budget, AMD designed a CPU capable of outperforming the Pentium III.

AMD ran with that design and developed K8. Intel was completely unable to compete on performance at that point, and AMD's sales began picking up after years of being the "other guy." At some point in 2005, AMD surpassed Intel's shipments for around a month. That was completely unheard of at the time.

They're down, but I'm hoping they aren't out. The Bulldozer design just isn't working. They can't afford to throw six years away on Bulldozer like Intel did with NetBurst. They just don't have the money to sustain it.
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
They can't afford to throw six years away on Bulldozer like Intel did with NetBurst. They just don't have the money to sustain it.

They've already thrown six years into Bulldozer. 2013 will be year #7.

If Intel does well against ARM, AMD will be alive and kicking for a while longer. Intel can't afford to lose AMD as far as x86 goes. It's difficult to sell a proprietary ISA and its surrounding architecture if there are no viable alternatives; companies would never take that sort of risk.

If AMD decides to dip out of x86 and start concentrating elsewhere... well, that would certainly be interesting ;)
 

Ajay

Lifer
Jan 8, 2001
16,094
8,116
136
Heh, I hope not. I've started looking at what automated flows can do and, let me tell you, they kind of suck. If I had to summarize the pros and cons:

Pro for automation:
1) Able to get lots of architectural features in REALLY fast, but the features are limited due to design constraints.

Pro for custom:
2) Able to get only the top architectural features, but they are more robust implementations (able to do more).

I see there's huge value in automation, and I think either the tools have to improve a ton OR the performance lead has to be so great that you can start degrading a bunch of features to make them automation-friendly.

Hmm, we hear good things about EDA and bad things about EDA. Supposedly EDA sucked on BD but has worked wonders (at least in terms of die area) on SR. Well, Intel's all set; it has enough designers to do everything by hand if it wants to (though I thought they used some EDA?).
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Nothing in this thread has anything to do with graphics performance. This is about how AMD competes with Intel (and the rest of the industry) in low-power architectures.

AMD's APUs might still have the "overall" edge on the iGPU side, but their lead in TDP-limited mobile SKUs might be all but eroded. As it is, against Ivy Bridge their lead in mobile iGPUs is much smaller than on the desktop, all down to TDP. With Haswell, in the 15-watt arena, Intel might even pull ahead on the iGPU side.

I believe it has, both in desktop and mobile, and soon it will start in server.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Heh, I hope not. I've started looking at what automated flows can do and, let me tell you, they kind of suck. If I had to summarize the pros and cons:

Pro for automation:
1) Able to get lots of architectural features in REALLY fast, but the features are limited due to design constraints.

Pro for custom:
2) Able to get only the top architectural features, but they are more robust implementations (able to do more).

I see there's huge value in automation, and I think either the tools have to improve a ton OR the performance lead has to be so great that you can start degrading a bunch of features to make them automation-friendly.

My personal view on automated design is that it is poised for a huge explosion in optimizations and deliverables, much as software compilers were at the end of the '70s, when people were still attempting to write assembly en masse.

Sure, nothing beat code done in assembly, hand-tuned by the brightest minds money could buy. But compilers made it so much easier to manage the growing complexity of code bases.

Windows 7 would not be humanly possible without the benefit of compilers, no matter how much you might argue that it would be all the better had Microsoft employed three billion people working for 50 years to do it in assembly.

And the compiler tools themselves have gotten good, pretty darn good in fact.

And IMO so too is the future of layout and design. For sure no automated tool will rival the raw capabilities of an experienced engineer, just as no compiler can rival the raw capabilities of an experienced assembly programmer.

But at some point the sheer complexity of the project itself will come to represent such a managerial nightmare, not to mention a resource nightmare in terms of thousands of engineers working to do it the hard way, that the benefit of falling back on increasingly sophisticated tools will, in the end, be ever more complex designs coming to market all the sooner, regardless of what could have been had it been done by hand, old-school style.

Imagine where we'd be today if Intel hadn't dived into the compiler market and invested as heavily as it has in creating the tools today's programmers need to avoid writing assembly. Now imagine if it directed a similar campaign at the field of automated design tools.

Nothing in this thread has anything to do with graphics performance. This is about how AMD competes with Intel (and the rest of the industry) in low-power architectures.

It may be that AMD is literally left with what will become a niche market: the desktop segment. They already have 43% market share there. As laptops grow (low power) and as servers grow (high performance), AMD may be squeezed out of both markets, for lack of the R&D to pursue high performance and lack of access to competitive process technology to pursue the mobile space.

So the niche market they find themselves occupying may turn out to be the shell of the one they pursued for so long. That would be ironic.

AMD has surprised me before. In the 1990s, AMD was considered "cheap" and of poor quality. With K7, AMD sent Intel into a panic. With a fraction of Intel's budget, AMD designed a CPU capable of outperforming the Pentium III.

AMD ran with that design and developed K8. Intel was completely unable to compete on performance at that point, and AMD's sales began picking up after years of being the "other guy." At some point in 2005, AMD surpassed Intel's shipments for around a month. That was completely unheard of at the time.

They're down, but I'm hoping they aren't out. The Bulldozer design just isn't working. They can't afford to throw six years away on Bulldozer like Intel did with NetBurst. They just don't have the money to sustain it.

The K7 was made possible because DEC (creator of the famous Alpha microprocessor line) imploded and went bankrupt, sending entire teams of experienced design and layout engineers out into the industry looking for employment. Intel snapped up some, but AMD snapped up even more.

One of them was Dirk Meyer. Yes, THE Dirk Meyer. He was tasked with heading the K7 Athlon design.

The irony, of course, is that he became the internal sensation he did, rising all the way to CEO, on the back of his success with the K7... and yet it was under his watch that the Bulldozer microarchitecture was green-lighted and funded for development.

The problem was one of hubris. The point when AMD surpassed Intel in monthly units shipped was also the point when Hector and Meyer decided to put the brakes on 65nm development and cut funding for both 65nm and Phenom so they could make their quarterly numbers appealing to Wall Street, thinking Intel couldn't touch them. Oh, the folly.
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
Hmm, we hear good things about EDA and bad things about EDA. Supposedly EDA sucked on BD but has worked wonders (at least in terms of die area) on SR. Well, Intel's all set; it has enough designers to do everything by hand if it wants to (though I thought they used some EDA?).

Depends on which project you're talking about. Some projects are 90%+ automated. Others are more like ~30%.

And IMO so too is the future of layout and design. For sure no automated tool will rival the raw capabilities of an experienced engineer, just as no compiler can rival the raw capabilities of an experienced assembly programmer.

But at some point the sheer complexity of the project itself will come to represent such a managerial nightmare, not to mention a resource nightmare in terms of thousands of engineers working to do it the hard way, that the benefit of falling back on increasingly sophisticated tools will, in the end, be ever more complex designs coming to market all the sooner, regardless of what could have been had it been done by hand, old-school style.

It's pretty much a resource nightmare already. Complexity aside, look at all the different SoCs some design groups churn out in such a short period of time. If Intel wants that level of efficiency, more and more is going to get punted to automation. Maybe I'm just an old assembly type of guy, but if I could get automation to do what I want, I'd be so happy. Right now it's mostly me banging my head against the wall over why automation is so dumb.
 

Mopetar

Diamond Member
Jan 31, 2011
8,534
7,799
136
Nothing in this thread has anything to do with graphics performance. This is about how AMD competes with Intel (and the rest of the industry) in low-power architectures.

The only possible way I could see them doing it is to license ARM and make their own custom ARM designs. They could build some really bizarre x86/ARM hybrid where they pair some of their low-power x86 cores with some ARM cores.
 

Haserath

Senior member
Sep 12, 2010
793
1
81
Attention all AMD employees: assume fetal position upon Haswell release.

Thank you.
 

happysmiles

Senior member
May 1, 2012
340
0
0
Attention all AMD employees: assume fetal position upon Haswell release.

Thank you.

[image: kelso-burn.jpg]
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Something I noticed from Intel's slides was that desktop Haswell will only come with GT2 graphics. The GT3 graphics that everyone says will catch up with Trinity won't be available on the desktop. Intel apparently doesn't buy into the AMD idea that a desktop iGPU is worth anything.

For Intel, desktop unit shipments are about one third of its total PC chip shipments; for AMD it's almost 50%. So you can see why the priorities differ for the two. Also, Intel's laptop ASPs are ~10% higher than its desktop chips', meaning the desktop revenue share is less than one third, probably 30% or so.
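(That last bit of arithmetic is easy to sanity-check. A quick Python sketch using the rough round numbers from the post above; these are hypothetical illustrative figures, not reported data.)

# Back-of-envelope check of the desktop revenue share.

desktop_units = 1 / 3               # desktop share of PC chip shipments
laptop_units = 1 - desktop_units    # the rest assumed to be laptop chips
desktop_asp = 1.0                   # normalize desktop ASP to 1
laptop_asp = 1.10                   # laptop ASPs ~10% higher

desktop_rev = desktop_units * desktop_asp
laptop_rev = laptop_units * laptop_asp
share = desktop_rev / (desktop_rev + laptop_rev)
print(f"desktop revenue share: {share:.1%}")   # about 31%, i.e. "30% or so"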
 

WhoBeDaPlaya

Diamond Member
Sep 15, 2000
7,415
404
126
Hey, as long as you don't use Avanti tools and get that infamous awkward error message "Please contact your Cadence representative" :biggrin:
Folks working in the field will know this story. I've heard it from many a (former and present) Cadence AE :)
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
Hey, as long as you don't use Avanti tools and get that infamous awkward error message "Please contact your Cadence representative" :biggrin:
Folks working in the field will know this story. I've heard it from many a (former and present) Cadence AE :)

I prefer the times I'm using an in-house developed tool and it spits out "Error: You should not be having this error"
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
I wonder if this will make a difference in "2nd"-world nations? India and China are HUGE markets with growing middle classes. Then again, at times it has seemed like AMD couldn't sell water in the desert.

Well, the GT2 is 2x faster than the HD 4000, and that's faster than Trinity. What the GT3 will be, we shall see.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Well, the GT2 is 2x faster than the HD 4000, and that's faster than Trinity. What the GT3 will be, we shall see.

Actually, GT2 is probably going to end up 25-50% faster than the Ivy Bridge GT2.

In mobile quads, the GT2 iGPU alone is enough to match or surpass Trinity. In Ultrabook-level SKUs, Ivy Bridge is already competitive with or beating AMD. It's on the desktop that they'll still be behind. There's a lot less incentive for Intel to invest in iGPUs in a market that isn't limited by power use and is way further behind the discrete competition.
 

WhoBeDaPlaya

Diamond Member
Sep 15, 2000
7,415
404
126
I prefer the times I'm using an in-house developed tool and it spits out "Error: You should not be having this error"
True, but it's double the LOLs when a tool from company A that contains stolen source code from company B spits that message out :p
 

nismotigerwvu

Golden Member
May 13, 2004
1,568
33
91
It's all about resources and an ever-rising cost factor for designs. Can you tell me AMD's revenue, AMD's R&D, and Intel's R&D?

Then you've got your answer.

So you'd rather live in a dream that's flowered with whatever you wish? Should I say that, sure, one AMD Ph.D. will perform the same as 5 or 10 Intel Ph.D.s with 5 to 10x more resources at their disposal? Why not cheer for VIA then? Why should AMD be the exception?

Simply outspending the competition does not always equal success, but it is certainly a great benefit. Intel's biggest challenge right now is avoiding diminishing returns on investment in both manpower (groupthink is inevitable in corporations and drastically reduces the benefit of large teams) and tech.

Eventually the commanding lead they have in silicon will matter less and less as we approach the limits of Si-based lithography. That by no means says Intel is not planning for the future, but success in silicon doesn't promise success in graphene or whatever "comes next", nor that Intel is even investing heavily in whatever turns out to be "next".

Additionally, if GPGPU really takes off (not looking likely, but still a possibility), a commanding lead in x86 will matter less, since a CPU that is merely fast enough to keep the GPUs fed would be more than enough. Remember how quickly the wheels fell off 3dfx; no market is ever truly safe. Intel has deep pockets and unrivaled collective brainpower, but stranger things have happened.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Heh, I hope not. I've started looking at what automated flows can do and, let me tell you, they kind of suck. If I had to summarize the pros and cons:

Pro for automation:
1) Able to get lots of architectural features in REALLY fast, but the features are limited due to design constraints.

Pro for custom:
2) Able to get only the top architectural features, but they are more robust implementations (able to do more).

I see there's huge value in automation, and I think either the tools have to improve a ton OR the performance lead has to be so great that you can start degrading a bunch of features to make them automation-friendly.

"Robust"? I think architecturally robust designs (i.e., no performance glass jaws) are easier to do with automation, because the physical design team can iterate more quickly on RTL changes. Robust to process variation? That hasn't been a problem I've seen with automation... do you mean something else by robust?

My impression, having some experience with custom design and some with 100% automated design, is that the benefits you get from customizing the implementation are probably outweighed by the benefits automation brings to the higher-level aspects of the design. If you froze the RTL and gave the team a year to build the design, sure, do it by hand... but that isn't how the projects I've been part of have worked. The assembly/compiler analogy is actually pretty good: I'd rather have a decent merge/heap/quicksort than an optimal bubble/selection/insertion sort.

I'd love to hear concrete examples from your experience, but I'm guessing that's not going to happen ;). In what way are architectural features limited by automation?
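(The sort analogy above is easy to demonstrate with a minimal Python sketch. Timings are machine-dependent and purely illustrative.)

# Algorithm choice beats micro-tuning: even a tight, hand-written O(n^2)
# insertion sort loses badly to a plain O(n log n) library sort as n grows.

import random
import time

def insertion_sort(a):
    """A reasonably tight hand-written insertion sort (in place)."""
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

data = [random.random() for _ in range(20_000)]

t0 = time.perf_counter()
insertion_sort(data.copy())
t1 = time.perf_counter()

t2 = time.perf_counter()
sorted(data)
t3 = time.perf_counter()

print(f"insertion sort: {t1 - t0:.3f} s, built-in sort: {t3 - t2:.3f} s")
# On typical hardware the built-in sort wins by a couple orders of magnitude.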
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
I know this might seem out of place, but I'm really excited to see the upcoming Knights Corner (Phi), not for its performance so much as for its power usage. Intel really is being more laid back than usual.
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
"Robust"? I think architecturally-robust (i.e. no performance glass jaws) are easier to do with automation because the physical design team can iterate more quickly on RTL changes. Robust to process variation? That hasn't been a problem I've seen with automation... do you mean something else by robust?

Not robust process variations but I guess you and I have different experiences. I see more glass jaws caused by automation because the design is unable to build the right topology to make a complex calculation and automation doesn't have the complex arrays that custom design can do. I've seen register files with over twice the read ports than any automated ones that I've seen because the custom design team was more aggressive with custom circuits.

So that's my meaning of robust, less glass jaws because they're able to design structures that can handle more things and more complex calculation/determinations because they can get a tighter timing loop. As for examples, I hope that gives you the general idea. There was something back in the other HSW thread where someone didn't understand how HSW added a 4th arithmetic unit. An automated design MAY cause compromises (like clustering).

But I'll be fair that again automation has its uses towards the end. A late bug or a late discovered miscorrelation near tapeout really sucks in a custom data path. I had to make several late saves and it was rough.
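(A toy illustration of the read-port point: a hypothetical issue model in Python, not any real design. The port counts and operand counts are made up; the point is only that fewer register-file read ports stall issue even when execution units are free.)

# Toy issue model: each instruction needs two source operands, and the
# register file can service only `read_ports` operand reads per cycle.

def cycles_to_read_operands(num_instructions, reads_per_inst=2, read_ports=4):
    """Cycles spent just reading operands for a burst of independent ops."""
    total_reads = num_instructions * reads_per_inst
    return -(-total_reads // read_ports)   # ceiling division

for ports in (4, 8):
    cycles = cycles_to_read_operands(8, read_ports=ports)
    print(f"{ports} read ports: {cycles} cycles to feed 8 two-operand ops")

# 4 read ports -> 4 cycles; 8 read ports (say, a more aggressive custom
# array with twice the ports) -> 2 cycles for the same burst.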
 