AMD manager speaks about Bulldozer, admits failure

Page 3 - AnandTech Forums
Status
Not open for further replies.

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
Kind of surprised that the only parts they have ready for this are Kabini with various parts fused off. They should have had a quad (or even oct) core die with a tinytinytiny GPU (the smallest thing that would still support remote desktop), with all the video decode etc stripped out. I thought that rapidly producing very tailored parts was the whole idea behind Jaguar, and their semicustom part push in general? So why is a harvested tablet chip trying to be a server chip?

Because they weren't planning on it AT ALL; this seems like a decision made 2-3 months ago at most. (Edit: OK, more like 4-5 months, but it's still as if they just stumbled into it.) That's why I've been saying this just lends more credence to the people who think of AMD as a zombie.
 

sushiwarrior

Senior member
Mar 17, 2010
738
0
71
Kind of surprised that the only parts they have ready for this are Kabini with various parts fused off. They should have had a quad (or even oct) core die with a tinytinytiny GPU (the smallest thing that would still support remote desktop), with all the video decode etc stripped out. I thought that rapidly producing very tailored parts was the whole idea behind Jaguar, and their semicustom part push in general? So why is a harvested tablet chip trying to be a server chip?

AMD likes the idea of a bigger GPU to use for compute; whether it works with enough applications to be useful, I don't know.

EDIT: And also, the applications these would be used for may not require any sort of display output. Very common to never "see" any output from the server.
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
http://www.brightsideofnews.com/new...cro-talks-small-x86-microserver-business.aspx

A timeline is coming together: late 2012, after SeaMicro got more integrated with AMD.

Fictional but probably cruelly accurate scene -

SeaMicro guys: "So when's that 64 bit ARM chip going to be ready?"
AMD CPU guys: "First half 2014."
SeaMicro guys: "Hmm, well these Piledriver parts aren't quite the best fit. Any chance we could accelerate and get ARM based stuff out the door by the end of 2013?"
AMD CPU guys: "AMD launch early? HAHAHAAHAHAHA"
SeaMicro guys *nervously laughing to fit in*: "... Let's talk about Jaguar then."
 

SPBHM

Diamond Member
Sep 12, 2012
5,066
418
126
Bulldozer was killed by its marketing more than by its design.
The problem is that AMD advertised BD as a true 8-core experience, an 8-core CPU that performed slower than the competing quad cores. A CPU can have 100 cores, but if it doesn't have the computing power of the competitor's 100 cores, you basically can't sell it as a 100-core part, or you fail.
Just a bit better power optimization, branding as dual cores with added threads, and slightly lower prices would have pwned Intel like nothing. The overclocking capability, turbo, and greatly higher performance over Intel's dual-core offerings would probably have had them selling like hotcakes and greatly increasing the revenue they so badly needed at the time Bulldozer launched. Ideal for home computers and gamers but also enthusiasts, which was their target market for BD.


I don't think so. People who care about the "quad vs dual" stuff, but not enough to actually have a clue what it really means, will go for the quad core; everyone else will just ignore it.

I think selling $200 "8 core CPUs" is positive for AMD, even if they are not as fast as the "4 core" from Intel.

Also, if you call the FX-8350 a quad core and look at its single-threaded vs multi-threaded performance, it's wrong: it doesn't scale like a regular quad core would. It gains a lot going from 4-thread to 8-thread software, and ST is really weak. MT is Bulldozer's strongest point, so I think using "8 core" in their marketing is a good decision.
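That scaling asymmetry can be sketched with a toy throughput model. The 0.8 second-thread yield per module below is an assumed ballpark for illustration, not a measured figure:

```python
# Toy model: a CMT "module" runs its first thread at full speed; a second
# thread on the same module shares the front end and adds only a fraction
# of a full core's throughput. second_thread_yield=0.8 is an assumption.

def cmt_throughput(modules: int, threads: int, second_thread_yield: float = 0.8) -> float:
    """Aggregate throughput in 'single-core units' for a CMT chip."""
    first = min(threads, modules)                     # one full-speed thread per module
    second = max(0, min(threads - modules, modules))  # extra threads share a module
    return first + second * second_thread_yield

# A 4-module FX-style chip: 4 threads behave like 4 cores,
# 8 threads like ~7.2 cores -- better than SMT, worse than 8 real cores.
print(cmt_throughput(4, 4))  # 4.0
print(cmt_throughput(4, 8))  # 7.2
```

Under these assumed numbers the chip gains a lot from 4 to 8 threads, which is exactly why it scales unlike a conventional quad core.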

It reminds me of how Intel used higher clocks to their favor in marketing during the early P4 days: you had a P4 at 2GHz or an Athlon at 1.6GHz with the same performance. AMD even implemented a "performance rating", because otherwise it would have been "Intel 2000 vs AMD 1600".

I've seen a lot of people consider an AMD dual core (E-350) equivalent to a much faster Pentium dual core (like a G620 or something) and buy the slower CPU, simply because it is a dual core.


But I do agree that AMD's marketing was also a huge failure for the BD release: it was all wrong, the secrecy, all the hype, the delays...
And the product was simply not all that compelling for anything; it wasn't better for servers, desktops, or anything else. It was even hard to see the advantages over Phenom II at first...

But I think AMD can improve the architecture/concept significantly, as PD has shown. BD was certainly a failure for many reasons, but that doesn't mean it can't be improved.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Kind of surprised that the only parts they have ready for this are Kabini with various parts fused off. They should have had a quad (or even oct) core die with a tinytinytiny GPU (the smallest thing that would still support remote desktop), with all the video decode etc stripped out. I thought that rapidly producing very tailored parts was the whole idea behind Jaguar, and their semicustom part push in general? So why is a harvested tablet chip trying to be a server chip?

The iGPU in Jaguar is not that large; taking off half the GCN cores (64) would not gain you that much die area. Also, this is the first low-power APU for servers, and the OpenCL performance of those 128 GCN cores is exceptional.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
http://www.brightsideofnews.com/new...cro-talks-small-x86-microserver-business.aspx

A timeline is coming together: late 2012, after SeaMicro got more integrated with AMD.

Fictional but probably cruelly accurate scene -

SeaMicro guys: "So when's that 64 bit ARM chip going to be ready?"
AMD CPU guys: "First half 2014."
SeaMicro guys: "Hmm, well these Piledriver parts aren't quite the best fit. Any chance we could accelerate and get ARM based stuff out the door by the end of 2013?"
AMD CPU guys: "AMD launch early? HAHAHAAHAHAHA"
SeaMicro guys *nervously laughing to fit in*: "... Let's talk about Jaguar then."

I don't know, the timeline seems consistent with the comments made here:

http://www.theinquirer.net/inquirer...-seamicro-head-is-not-interested-in-arm-chips

It sounds like AMD is only interested in licensing ARM cores rather than making their own. Which makes sense - why make their own ARM server CPU instead of making x86 server CPUs with similar design targets, if they're making a new CPU either way? The so-called x86 tax is simply not significant enough for CPU targets relevant to most servers, and AMD has tons of experience with dealing with x86 designs and zero with ARM ones.

To put it another way, if AMD is interested in future ARM cores (most likely Cortex-A57) it's only because they don't have enough confidence that their own CPU design teams can meet the same goals in the same timeline. That also means that they're waiting on ARM, not their internal teams.
 

R0H1T

Platinum Member
Jan 12, 2013
2,583
164
106
To put it another way, if AMD is interested in future ARM cores (most likely Cortex-A57) it's only because they don't have enough confidence that their own CPU design teams can meet the same goals in the same timeline. That also means that they're waiting on ARM, not their internal teams.
Consequently, I take it they aren't all that keen on x86 either; they see ARM giving more perf/watt than x86 as it stands today. But if they can tweak the Cortex-A57 just enough to get past Intel, we might see a stronger resurgence for AMD in the server market!
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
http://www.extremetech.com/computin...rtex-a57-tape-out-chip-launching-no-time-soon

Is the 2014 wait because the target is 20nm TSMC? Why not a 28nm 64-bit ARM, or heck, even 32nm at GF?

Not being willing to spend some money and design time to beat competitors = AMD CPU guys: "AMD launch early? HAHAHAAHAHAHA"

Why not confront the GF WSA head on and churn out mountains of 32nm 64-bit ARM chips, flogging them at prices no one else can hope to match? Nope, instead they'd rather pay to not receive product.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Kind of surprised that the only parts they have ready for this are Kabini with various parts fused off. They should have had a quad (or even oct) core die with a tinytinytiny GPU (the smallest thing that would still support remote desktop), with all the video decode etc stripped out. I thought that rapidly producing very tailored parts was the whole idea behind Jaguar, and their semicustom part push in general? So why is a harvested tablet chip trying to be a server chip?
They should have just included everything in the chip in the first place and allowed the GPU to be powered off entirely, then disabled the server-centric features on the consumer models. Today, as long as the memory has ECC, the caches have ECC, and the registers have parity, you wouldn't need anything else in the CPU that's not in consumer models (as in, let POWER(n+1), IA64, SPARC T(n+1), and Xeon E7 CPUs duke it out for critical big-business work).

If you were to painstakingly follow my posts back, you'd see it's not 20/20 hindsight, either. I was amazed they didn't do it the first time around. If they start in 2014, they won't have had the head start whose potential others could see from miles away; instead they'll be launching when the competition is, too, and not necessarily with parts that are much better, if better at all. It's not going to be an easy road, that's for sure. They are starting down what look like good paths, but not early enough, with Jaguar for servers only now being talked about.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Kind of surprised that the only parts they have ready for this are Kabini with various parts fused off. They should have had a quad (or even oct) core die with a tinytinytiny GPU (the smallest thing that would still support remote desktop), with all the video decode etc stripped out. I thought that rapidly producing very tailored parts was the whole idea behind Jaguar, and their semicustom part push in general? So why is a harvested tablet chip trying to be a server chip?

The only die shot I could find, and it's from SemiAccurate, so take it with a grain of salt.

Kabini_die_shot_labled.jpg
 

Mallibu

Senior member
Jun 20, 2011
243
0
0
So, since official mouths are admitting failure, I guess we can leave the tale of "FX performs worse because of compilers/OS/memory/etc." to rest in peace. :awe:
 

Gikaseixas

Platinum Member
Jul 1, 2004
2,836
218
106
Not really. PD performs much better than BD; it's clearly a step in the right direction.

Amazing how all the Intel shills are lining up to write one-paragraph posts that don't contribute to the conversation at all.

Trolls at their best hehehehehehehe

Anyway, it shows character to come forward and admit defeat in a battle. The war is still going.
 

Ancalagon44

Diamond Member
Feb 17, 2010
3,274
202
106
Not really. PD performs much better than BD; it's clearly a step in the right direction.

Amazing how all the Intel shills are lining up to write one-paragraph posts that don't contribute to the conversation at all.

Trolls at their best hehehehehehehe

Anyway, it shows character to come forward and admit defeat in a battle. The war is still going.

The Pentium 4 performed better than the Pentium III, but I don't see any Netburst architecture derivatives around, do you?
 

NTMBK

Lifer
Nov 14, 2011
10,448
5,831
136
The iGPU in Jaguar is not that large; taking off half the GCN cores (64) would not gain you that much die area. Also, this is the first low-power APU for servers, and the OpenCL performance of those 128 GCN cores is exceptional.

Ah, I'd forgotten that GCN came in a minimum unit of 64 cores. Good point.

I definitely agree that a low-power server APU is a good product to be launching, but they should also launch a low-power server CPU: different products to cover all potential customers. But judging by that die shot, the GPU isn't that large a portion of the APU, so I can see why it wasn't worth producing a new die.
 

podspi

Golden Member
Jan 11, 2011
1,982
102
106
The Pentium 4 performed better than the Pentium 3 - but I dont see any Netburst architecture derivatives around, do you?

I think the difference is that AMD does not have another architecture to fall back on. Jaguar is great, but I am skeptical that it can be scaled up (although maybe that isn't necessary with desktops dying and all :biggrin:). K10 was also decent, but it is a pretty old core, and I imagine it might need more R&D to modernize than just fixing up the construction-equipment cores...

Frankly, I think it's more likely for AMD to drop out of big cores altogether than to start over with a new architecture. And to be clear, I do not think they are going to do that. I think they are going to keep iterating on BD, PD, etc.
 

Ayah

Platinum Member
Jan 1, 2006
2,512
1
81
Those are Willamettes, I believe, which were bleh at best.
He's probably referring to the Northwoods, which I'd consider the best of the series (relative to their time and competition).
 

BUnit1701

Senior member
May 1, 2013
853
1
0
Maybe, but this is equivalent to saying that when those first P4 chips came out, Intel should have scrapped the whole line and started over.
 

inf64

Diamond Member
Mar 11, 2011
3,884
4,692
136
The PD-based 8350 just spanks the poor 1100T in 95% of workloads (often by a very large margin) and OCs much better at the same time. Yet we still see people saying it's crap and even recommending old Phenom IIs over the 8300/6300 (ignorance, or are they just stubborn?).
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
Well, the last retail Phenom II is the 965 BE, and at ~$90 it's certainly a viable choice if you were considering the 4300. Also, I'm really not compelled to swap out my 1090T for an 8320/8350; Steamroller should be convincing enough for me to retire the old workhorse, assuming it comes out in an AM3+ variety. The only reason the 8120/8150 moved at all, imo, is looking good on the spec sheet to the under-informed and to OC types looking to hit 5GHz for bragging rights.

The only way Bulldozer beats the P4 rap is if Steamroller repeats the Bulldozer -> Vishera improvement.
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,668
2,541
136
The iGPU in Jaguar is not that large; taking off half the GCN cores (64) would not gain you that much die area. Also, this is the first low-power APU for servers, and the OpenCL performance of those 128 GCN cores is exceptional.

The performance is exceptional at things the users of this chip won't do. Seriously, there are parts of the server market where OpenCL performance will be very nice to have, but SeaMicro's customers are not among them. What they want is something that runs PHP, Python, and Ruby well at low cost, and hell will freeze over before OpenCL either accelerates any of those tasks or the customers expend the engineering resources to use OpenCL directly. (The decision to base your business on scripting languages is about reducing programming costs; OpenCL costs a lot to program for.)
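The programming-cost gap can be made concrete with a toy comparison: the scripting version of a trivial data-parallel task is one line, while the OpenCL version below shows only the kernel source, before any of the platform/device/context/queue/buffer host code (e.g. via pyopencl) an actual run would also need. The kernel here is a string and is never executed:

```python
# A trivial data-parallel task: double every element of an array.

data = [1.0, 2.0, 3.0, 4.0]

# Scripting-language version: one line, no setup.
doubled = [2.0 * x for x in data]

# OpenCL version of the same task: this is only the kernel source. A real
# run would additionally need host code to pick a platform and device,
# build the program, allocate buffers, and enqueue the kernel.
KERNEL_SRC = """
__kernel void double_it(__global const float *src, __global float *dst) {
    int i = get_global_id(0);
    dst[i] = 2.0f * src[i];
}
"""

print(doubled)  # [2.0, 4.0, 6.0, 8.0]
```

Even ignoring the omitted host boilerplate, the imbalance in code and required expertise illustrates why a shop optimizing for programmer time stays with the scripting version.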
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
Anyway, it shows character to come forward and admit defeat in a battle. The war is still going.

They already conceded the war months ago when they said they would no longer compete in the high performance desktop segment.

There is a new war though, ultra low power and iGPU.
 