Fudzilla: Bulldozer performance figures are in


sandorski

No Lifer
Oct 10, 1999
70,677
6,250
126
They need to capture the genius that was/is Apple's iPhone App store.

Enabling scores of at-home freelance programmers and entrepreneurs easy access to a publisher/subscriber network model.

If they sit around waiting for the likes of Adobe and MS to get off their duff and roll out leading-edge OpenCL apps then they will be screwed like 3DNow! all over again.

They need to smartly manage this ecosystem no less than Apple did theirs IMO.

Weren't they working on something like this? Back when Brazos was released, I seem to recall some mention of just such a thing.
 
Aug 11, 2008
10,451
642
126
I don't know about anyone else, but I'm tired of announcements, Facebook blurbs, tech events, more announcements, marketing slides, and rumors.

Where is Bulldozer? Come on AMD, at least give us a release date, not some vague "Q3" nonsense.

I totally agree with you. Cut out the BS and just get Bulldozer out the door.

And after all the wait for Llano, it was mediocre at best, except maybe for laptops, and AMD hyped the hell out of it.

Really makes me wonder if Bulldozer will be a disappointment. Someone said it reminded them of Fermi, and it does, but it also reminds me of the Phenom I launch, which was delayed because they could not get the clockspeed up to acceptable levels, and when they did, it was still a poor performer.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
626
126
And after all the wait for Llano, it was mediocre at best, except maybe for laptops, and AMD hyped the hell out of it.
I think Llano is brilliant. If you focus in on only certain benchmarks, or certain aspects, it can be made to look so-so. But as an overall package, I think it's amazing. It received a cool reception in the AnandTech review, but reading other reviews, it got the "editor's choice" type award many times.
 

wuliheron

Diamond Member
Feb 8, 2011
3,536
0
0
I think it's pretty obvious Bulldozer was designed from the ground up for graphics instead of competing head-on with Intel. As you start to add things like a GPU onto the same chip, the CPU itself becomes less critical to overall performance, as Llano already demonstrates so well. Squeezing more cores into the same die space just leaves that much more room for stream processors, and Trinity is scheduled for release next year to compete with Ivy Bridge. AMD has to think long term, and if that means sacrificing short-term profits to ensure Trinity succeeds, so be it.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
I totally agree with you. Cut out the BS and just get Bulldozer out the door.

And after all the wait for Llano, it was mediocre at best, except maybe for laptops, and AMD hyped the hell out of it.

Really makes me wonder if Bulldozer will be a disappointment. Someone said it reminded them of Fermi, and it does, but it also reminds me of the Phenom I launch, which was delayed because they could not get the clockspeed up to acceptable levels, and when they did, it was still a poor performer.

AMD experienced similar node-transition challenges with their X2s in going from 90nm to 65nm as well.

Today's Llano is the first silicon on 32nm. Just look at the improvements made to Phenom II X4s and X6s over the course of two years on 45nm. Llano clocks will improve, as will power consumption.

It is true that Llano will simply never meet the hype that was spun for Fusion going back to the time of the ATI purchase; the bandwidth disparity at the CPU socket versus what is available to GPUs on discrete PCBs is just too much to be overcome by the latency reductions from marrying GPU logic to CPU logic on-die.

For Fusion to reach its full potential a paradigm shift in the platform has to occur - the CPU/dGPU needs to be on the discrete GPU PCB with access to a few GBs of GDDR5 (or better), or the mobo itself needs to absorb the component functionality present on the discrete GPU PCB (including the fixed GDDR memory).
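As a rough sketch of the scale of that disparity (the DDR3-1866 and 256-bit GDDR5 figures below are just period-typical assumptions, not anything AMD has published):

Code:
# Peak theoretical memory bandwidth: (bus width in bytes) x (transfers per second).
# All figures are illustrative assumptions, not measured values.

def peak_bandwidth_gbs(bus_width_bits, transfer_rate_mts):
    """Return peak bandwidth in GB/s for a given bus width and MT/s rate."""
    return (bus_width_bits / 8) * transfer_rate_mts * 1e6 / 1e9

# Llano-class socket: dual-channel DDR3-1866 (2 x 64-bit), shared by CPU and GPU
socket_dram = peak_bandwidth_gbs(128, 1866)     # ~29.9 GB/s

# Mid-range discrete card of the era: 256-bit GDDR5 at ~4000 MT/s effective
discrete_gddr5 = peak_bandwidth_gbs(256, 4000)  # ~128 GB/s, dedicated to the GPU

print("Socket DRAM : %5.1f GB/s" % socket_dram)
print("Discrete GPU: %5.1f GB/s (roughly %.0fx more)" % (discrete_gddr5, discrete_gddr5 / socket_dram))

Even before the CPU takes its share, the on-die GPU is working with roughly a quarter of the bandwidth a comparable discrete card has to itself - a gap that latency improvements alone cannot close.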
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
626
126
I think it's pretty obvious Bulldozer was designed from the ground up for graphics instead of competing head-on with Intel.
I've been wondering about this for some time. When was the design of Bulldozer more or less locked down? And at that time, was AMD far enough along in assimilating ATI that they were able to allow GPU tech to be easily incorporated into Bulldozer?

Supposedly Bulldozer is modular enough in nature to allow the above. It's an important point: Bulldozer is late enough as it is, and it would certainly help if AMD could release Trinity in short order after Bulldozer.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
626
126
Today's Llano is the first silicon on 32nm. Just look at the improvements made to Phenom II X4s and X6s over the course of two years on 45nm. Llano clocks will improve, as will power consumption.
No fabrication expert, but I've certainly noticed that GloFo steadily improves their process on a particular node. Their 45nm stuff at the end of it's cycle was much better than when it first hit.
It is true that Llano will simply never meet the hype that was spun for Fusion going back to the time of the ATI purchase; the bandwidth disparity at the CPU socket versus what is available to GPUs on discrete PCBs is just too much to be overcome by the latency reductions from marrying GPU logic to CPU logic on-die.
Sideport?
For Fusion to reach its full potential a paradigm shift in the platform has to occur - the CPU/dGPU needs to be on the discrete GPU PCB with access to a few GBs of GDDR5 (or better), or the mobo itself needs to absorb the component functionality present on the discrete GPU PCB (including the fixed GDDR memory).
But wouldn't that just end up being effectively what we have now? A CPU with a discrete GPU on the motherboard? It all comes down to cost sensitivity; by the time you add some GDDR memory etc. you may as well just have a traditional system. That's where I think some people go wrong when trying to evaluate Fusion. It's not about a quantum leap in performance (although it certainly has little competition when it comes to onboard graphics), it's about cost cutting, integration, and lower power consumption. I'm actually surprised the GPU performance is as good as it is, given the severely limited memory bandwidth.

But I get your concept about Fusion reaching its full potential. Perhaps we will see something like you describe for the professional market.
 

wuliheron

Diamond Member
Feb 8, 2011
3,536
0
0
I've been wondering about this for some time. When was the design of Bulldozer more or less locked down? And at that time, was AMD far enough along in assimilating ATI that they were able to allow GPU tech to be easily incorporated into Bulldozer?

Supposedly Bulldozer is modular enough in nature to allow the above. It's an important point: Bulldozer is late enough as it is, and it would certainly help if AMD could release Trinity in short order after Bulldozer.

Bulldozer is essentially a physical version of Intel's virtual Hyper-Threading, and I'd guess the idea first came up almost ten years ago when Intel introduced Hyper-Threading. It probably took a few more years to realize the potential advantages and disadvantages of the design, and eventually they acquired ATI a few years ago, in time to begin working on the Fusion designs in detail. At any rate, the fact that Llano uses the old Stars core and still responds so well to overclocked system RAM bodes well for Trinity. The fact that they acquired ATI and began emphasizing "Fusion is the future" makes me think we won't begin to understand the ramifications of Bulldozer until Trinity arrives.
 
Last edited:

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
My point about AMD is that Nvidia is bolting their core competency (advanced GPU stuff) onto ARM cores and selling them to a market that clearly wants to buy them.

I don't see how or why AMD could not just as easily do the same, given that they already have the only other competitive GPU architecture on the planet and ARM is a commodity you can license and just bolt your GPU to (if you want to do no more; of course you can do more, as TI does with OMAP and so on).

Intel is in a more precarious situation as they don't have a competitive GPU architecture to bolt onto an ARM to compete for tablet sockets against Nvidia (or possibly AMD).

I wasn't trying to make any deeper a connection than that.

You're selling Nvidia short there. There's more to Tegra than a good GPU. Nvidia have been on this path for quite a few years now. They have either bought or developed all the other bits required for a good mobile SoC, and they have a strong software team to back it up.

All these projects take a while to get going and build up momentum. Right now Nvidia is flying, and AMD is standing still. It would take AMD a long time to catch up, especially if they don't have the deep pockets required to quickly buy all the other parts of the SoC puzzle. AMD's momentum is in its x86 Fusion parts. Their best bet is to keep shrinking them and hope at some point they become competitive with ARM. This is a bit of a long shot, however, as ARM is cheaper, more efficient, and more flexible (x86 only offers finished chips/motherboards; ARM offers you everything from that down to the processor design itself for you to do whatever you want with). x86 basically has little going for it other than the Intel gorilla pushing it.
 

JimKiler

Diamond Member
Oct 10, 2002
3,561
206
106
I don't know much about Nikon, but the K-5 is at best even with the Canon 60D yet costs $500 more. That's like AMD charging $800 for the top-end BD even though it is only, at best, even with the 2600K.

The Canon 60D scored only 79% on dpreview.com whereas the K-5 scored 83%. Since Nikon and Pentax released their newest APS-C sensors with high-ISO capabilities, Canon is no longer in the top spot for APS-C sensors. But with DSLR cameras the difference between the top dogs really is minute.

But camera makers are neck and neck, often giving up the lead to someone else who in turn plays catch-up and overtakes them. We would all be in a good situation if CPUs were in the same boat, with constant catch-up and new performance leaders every six months.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
You're selling Nvidia short there. There's more to Tegra than a good GPU. Nvidia have been on this path for quite a few years now. They have either bought or developed all the other bits required for a good mobile SoC, and they have a strong software team to back it up.

All these projects take a while to get going and build up momentum. Right now Nvidia is flying, and AMD is standing still. It would take AMD a long time to catch up, especially if they don't have the deep pockets required to quickly buy all the other parts of the SoC puzzle. AMD's momentum is in its x86 Fusion parts. Their best bet is to keep shrinking them and hope at some point they become competitive with ARM. This is a bit of a long shot, however, as ARM is cheaper, more efficient, and more flexible (x86 only offers finished chips/motherboards; ARM offers you everything from that down to the processor design itself for you to do whatever you want with). x86 basically has little going for it other than the Intel gorilla pushing it.

ARM does less with less. ARM cannot magically build a quad-core CPU that is on par with SB and still have it use 2W of power. They simply offer poor-performing processors that use very little energy.

Intel and AMD can and will offer these types of solutions down the road, and they have superior manufacturing techniques compared to what the ARM camp generally has. I am very curious to see what they can do in the next few years to break into this market.
 

wuliheron

Diamond Member
Feb 8, 2011
3,536
0
0
ARM does less with less. ARM cannot magically build a quad-core CPU that is on par with SB and still have it use 2W of power. They simply offer poor-performing processors that use very little energy.

Intel and AMD can and will offer these types of solutions down the road, and they have superior manufacturing techniques compared to what the ARM camp generally has. I am very curious to see what they can do in the next few years to break into this market.

Here's an interesting short editorial about Intel's recent purchases:

http://pcper.com/news/Editorial/Intels-recent-purchasing-habits-could-crossdressing-be-their-future

It seems Intel is intent on expanding their arsenal in ways no one else can.
 

Edrick

Golden Member
Feb 18, 2010
1,939
230
106
ARM does less with less. ARM cannot magically build a quad-core CPU that is on par with SB and still have it use 2W of power. They simply offer poor-performing processors that use very little energy.

The ARM instruction set is more efficient than the x86 set, so an ARM CPU can perform better than a comparable x86 CPU. Sure, they will not create an SB-level CPU using only 2W of power, but they will blow the doors off the Atom at the same power levels.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
According to the Q2 conference call transcript, Interlagos will "deliver up to 35% performance improvements compared to our current offerings."

I could have sworn that JFAMD claimed 50% higher server performance for the longest time... Isn't this an indication that Bulldozer is having performance problems, probably related to clockspeed?

JF may be joining his old boss in my sig.
 
Mar 10, 2006
11,715
2,012
126
The ARM instruction set is more efficient than the x86 set, so an ARM CPU can perform better than a comparable x86 CPU. Sure, they will not create an SB-level CPU using only 2W of power, but they will blow the doors off the Atom at the same power levels.

What do you base this on?
 

Dresdenboy

Golden Member
Jul 28, 2003
1,730
554
136
citavia.blog.de
According to the Q2 conference call transcript, Interlagos will "deliver up to 35% performance improvements compared to our current offerings."
...
http://seekingalpha.com/article/281...sses-q2-2011-results-earnings-call-transcript

It reads
The Interlagos platform is our first server offering, optimized for today's cloud datacenters and the architecture excels at compute-intensive and HPC workloads, where it will deliver up to 35% performance improvements compared to our current offerings.
So does "where" refer to HPC and compute-intensive or to cloud datacenters? And how could the "and" be interpreted?
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
Yes, they are saying that in tasks where you can fully load up all the Interlagos cores, it is roughly 35% faster than Magny-Cours. Is this statement comparing at the same clock, though? If it's 33% more cores and only 35% faster in heavily threaded tasks, then it does seem there may be problems hitting the frequencies they expected with the BD design.
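As a quick sanity check on that reading, using only the round numbers in this thread (12 Magny-Cours cores, 16 Interlagos cores, and the quoted 35%):

Code:
# Back-of-envelope: how much of the claimed 35% is just the extra cores?
magny_cours_cores = 12      # Opteron 6100 "Magny-Cours"
interlagos_cores  = 16      # Opteron 6200 "Interlagos" (8 Bulldozer modules)

total_speedup   = 1.35                                   # AMD's "up to 35%" claim
core_count_gain = interlagos_cores / magny_cours_cores   # ~1.33x

per_core_gain = total_speedup / core_count_gain
print("Implied per-core throughput gain: %.1f%%" % ((per_core_gain - 1) * 100))
# Only about 1% per core -- "more cores, not faster cores", unless the
# two parts are being compared at different clocks or bins.

If the comparison really is like-for-like, almost all of the 35% comes from the extra cores, which is exactly why the statement raises questions about clocks.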
 

Davidh373

Platinum Member
Jun 20, 2009
2,428
0
71
Can we get back on topic please? AMD Bulldozer and how the numbers put it in competition with Intel in the market? With all due respect, start your own thread about ARM and GPUs. Thanks!
 

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
Opteron FX?

Through this highly ambitious plan - and that might be an understatement - from 1 July 2011 through 30 June 2012 the company will cease selling any of its familiar desktop brand names. While no information is available on the server segment naming schemes, the grapevine has been chirping "Opteron FX".

And, as claimed earlier this year and widely ridiculed, it does in fact appear that AMD will be 100% transitioned to 32nm in 1 year.

http://www.theinquirer.net/inquirer/news/2096203/amd-shift-fresh-naming-scheme
 

Davidh373

Platinum Member
Jun 20, 2009
2,428
0
71
You have real BD numbers?

No, but the title of this thread is "Fudzilla: Bulldozer performance figures are in." Let's stay on topic. ARM and GPUs have nothing to do with Bulldozer other than AMD making them. I don't see why people have to bring the company's other products into a debate when Bulldozer is clearly a desktop processor. It isn't a cell phone, and it's not a graphics card. I'm fine talking about the company that makes it, and competing companies which have products that directly compete with it. Intel's SB and SB-E as well as Haswell will all compete and be compared with Bulldozer and its future revisions. I don't see why people need to turn this into an orgy of fanboyism and snipe at one another.
I'm not a fanboy for either side, but for good reason I find myself agreeing with the Intel side. The fact that AMD people are screaming over these numbers like they're the most amazing and unbelievable thing ever is clearly an over-exaggeration. Those people make themselves look bad. Like you said, there are no official numbers yet, only rumored ones. Given they are rumored numbers, I would think they're more likely to be twisted in a positive direction for AMD.
The fact is both Intel and AMD have their ups and downs. So let's take the (rumored) numbers we have with a grain of salt, and (hopefully) come to the consensus that fanboyism is ignorant and foolish, and that getting your hopes up for Bulldozer or SB-E to kick the other's ass is on the same level. They both make good processors, but normally for different markets. Let's be grateful for our budget Phenom IIs and our performance Intel products and stop expecting them to flip-flop just so you can FINALLY get the numbers from the company you desire.

Buying the best product is supposed to drive the market in the direction you want it to go. I have to wonder whether, if all the fanboys in the world just bought what was best instead of biting their lip and going for either a worse product or a much more expensive one, both Intel and AMD would be closer in the race, and probably making even larger gains in it. If you AMD people want a performance processor more often, buy an Intel every once in a while to show them you want them to put more effort into R&D. If you Intel fanboys are fed up with how much Intel charges for only slight gains, buy an AMD.

I have both in my systems because they both excel in different areas. Buying a $350 Bulldozer over a $200 Intel 2500K for a small margin of performance gain just to give AMD your money is flat out stupid if you aren't even going to use it, just like buying a 990X from Intel to get an extra two cores for an extra $750 is stupid. Just chill on the fanboyism already!
 
Last edited:

podspi

Golden Member
Jan 11, 2011
1,982
102
106
JF may be joining his old boss in my sig.

JF's comments were made before the current top-bin MCs were released. He's said a few times that the 50% figure should only be compared to the top-bin MC available at the time he said it.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
The ARM instruction set is more efficient than the x86 set, so an ARM CPU can perform better than a comparable x86 CPU. Sure, they will not create an SB-level CPU using only 2W of power, but they will blow the doors off the Atom at the same power levels.
The usual numbers thrown around on comp.arch are about 5% for the decoding of x86 (actually that was before SB, so now with the uop cache it should be a good bit lower). So your "blow the doors off" actually means "maybe 5% less energy, sadly offset by an inferior node".

Sounds a good bit less impressive with numbers in there, if I may say so.
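To put rough numbers on that (the 5% decode figure is the comp.arch number cited above; the ~30% saving for a full process node is my own rule-of-thumb assumption, not a measurement):

Code:
# Relative core power, normalized to an x86 core on the older node = 1.0.
decode_overhead_saving = 0.05   # energy an ARM core might save by skipping x86 decode
node_shrink_saving     = 0.30   # assumed saving from being one full process node ahead

arm_core_older_node = 1.0 - decode_overhead_saving   # 0.95
x86_core_newer_node = 1.0 - node_shrink_saving       # 0.70

print("ARM core (decode savings, older node): %.2f" % arm_core_older_node)
print("x86 core (one process node ahead)    : %.2f" % x86_core_newer_node)
# The ISA advantage is small next to a process advantage -- which is the
# "sadly offset by an inferior node" point.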
 
Last edited: