
AMD FX-8120P benchmark from Coolaler

Can you guys please not get so depressed? You are depressing me as well, lol.

This is AMD playing us. Haven't you learned anything from past years?

The benchmarks may very well be real. This is not final hardware. JF warned us about all this. At the very least, wait for the launch day to get disappointed! 😉

No, AMD is not gonna get a magical 20% performance improvement just with a better BIOS microcode and some slight revisions. Just no. If they need so many revisions and so many excuses, it's because it just plain sucks.

That's what everything points to.
 
No, AMD is not gonna get a magical 20% performance improvement just with a better BIOS microcode and some slight revisions. Just no. If they need so many revisions and so many excuses, it's because it just plain sucks.
New revisions (a silicon re-spin) would incur delays. It's probably why no release date was forthcoming months earlier. :hmm:
 
New revisions (a silicon re-spin) would incur delays. It's probably why no release date was forthcoming months earlier. :hmm:

First AMD said it'd be out in Q2, then at the end of the quarter they said they'd be available in 60-90 days. Then, around 90 days later, they're saying in Q4. That should tell you something: it doesn't look pretty.

Module concept and very high clock speeds/deep pipeline was a risk that seems to have resulted in failure.
 
Honestly, SB is not much of an improvement over Nehalem in most cases as it relates to performance?

Digit-life found it to be 14-15% faster on average per clock.

Computerbase.de found a 2500k @ 2.8ghz to be 23% faster in applications vs. a Core i5 750 @ 2.8ghz.

That's a lot after just 2-3 years in the CPU space.

Add to this that most i5/i7 (1st generation) chips maxed out at 4.0-4.2ghz while SB goes to 4.5-4.7ghz. Taking midpoints of these ranges (4.1ghz and 4.6ghz) and applying a 15% IPC advantage of SB over Nehalem, we get:

4.6ghz SB + 15% IPC = 5.29 ghz Nehalem.

That's about 29% faster on avg! That's pretty good if you ask me.
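For what it's worth, that 29% works out from simple scaling arithmetic. A back-of-the-envelope sketch (it assumes performance scales linearly with clock and with the flat 15% IPC figure, which real workloads won't do perfectly):

```python
# Back-of-the-envelope: Nehalem-equivalent clock of an overclocked
# Sandy Bridge, assuming a flat 15% per-clock (IPC) advantage.
nehalem_max_oc = 4.1   # midpoint of the 4.0-4.2 GHz range
sb_max_oc = 4.6        # midpoint of the 4.5-4.7 GHz range
ipc_advantage = 1.15   # assumed SB-over-Nehalem IPC ratio

nehalem_equivalent_clock = sb_max_oc * ipc_advantage
speedup = nehalem_equivalent_clock / nehalem_max_oc

print(f"{nehalem_equivalent_clock:.2f} GHz")       # 5.29 GHz
print(f"{(speedup - 1) * 100:.0f}% faster on avg")  # 29% faster on avg
```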

Also, don't forget that 2500k 3.3ghz replaced Core i5 760 2.8ghz at the same price level, while 2600k 3.4ghz replaced Core i7 870 2.93ghz at the same price. So Intel instantly added 30-35% more performance for "free" when it retired the 1156 platform.
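The same-price "free" upgrade claim can be sanity-checked the same way. A rough sketch only: it multiplies the clock ratios by the same assumed 15% IPC uplift and ignores Turbo and threading differences, landing roughly in line with the 30-35% estimate:

```python
# Rough check of the same-price generational uplift, assuming the
# 15% IPC figure and ignoring Turbo/HT differences between the chips.
ipc_uplift = 1.15

# 2500K (3.3 GHz) replacing Core i5 760 (2.8 GHz) at the same price
i5_gain = (3.3 / 2.8) * ipc_uplift
# 2600K (3.4 GHz) replacing Core i7 870 (2.93 GHz) at the same price
i7_gain = (3.4 / 2.93) * ipc_uplift

print(f"i5 tier: +{(i5_gain - 1) * 100:.0f}%")  # +36%
print(f"i7 tier: +{(i7_gain - 1) * 100:.0f}%")  # +33%
```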
 
Ughh, it's Phenom 1 all over again....

Alright AMD, the joke is over. Where is the REAL Bulldozer chip?

They can't be serious if they developed an 8 core CPU that barely matches a 4 core SB and loses to their previous generation 6 core CPU in one of the better multi-threaded benchmarks 😕

[Image: intel_2425k_cine11.jpg]

Source


The worst thing is that Bulldozer is supposed to be an architecture to be used as the foundation for some years. If the foundation is so weak, like the Pentium 4, the only saving grace is extremely high clock speeds. Even in that, the Pentium 4 was much better. While AMD only achieved 2GHz clock speeds with K8, you could reach 4GHz on the Pentium 4. With BD the clock speeds only seem to be around 500MHz higher.

IF the performance is as bad as these benchmarks suggest, this really has me puzzled. Their 8 core = 4C SB without HT.

- Preliminary overclocking doesn't show any significant advantage (such as 5.5-6.0ghz on air); so it likely won't help since SB overclocks too
- Adding more cores isn't an option on 32nm; so they are stuck at 8 for a while
- Increasing clock speeds is only a temporary solution since Ivy Bridge will negate that too
- It doesn't seem that 4.0-4.2ghz Turbo will work on all 8 cores either

I really don't know how they are supposed to compete with this thing? It's fall of 2011. They only have 2 years to prepare for Haswell...which is likely going to bring another 15% IPC increase over SB.

If things get this bad, I hope ATI is somehow able to walk out of this mess alive.

Module concept and very high clock speeds/deep pipeline was a risk that seems to have resulted in failure.

I still don't understand how in a company of so many smart people, the majority voted to have a slow IPC 6-8 core CPU in 2011 while fully aware that very few programs actually scale(d) to 6-8 threads. If these benches are real, I am shocked that such a critical design decision was made without so much as an effort to improve IPC: adding 2 more cores while having worse performance than X6 is just startling....
 
incredibly frustrating. Hard to refute when 2 different groups give you the same thing.

Seriously, it has 8 cores on a new architecture, more cache, etc. and can't beat a 6 core Thuban with only a few hundred more MHz? Just scrap that crap and go back to Stars. This is a downgrade.

You may well remember, 4 yrs ago when Phenom was finally released after much "native true quadcore" hype, the collective response was "ugh, why, oh why, AMD? why didn't you just MCM a couple Athlon X2's if this was all you had planned to roll out after 4yrs of R&D".
 
I wonder when AMD made the decision to go with Bulldozer. Maybe it was during the early days of K10 where they got disenchanted with it and the paper advantages of BD looked compelling.

However, continuous improvements to K10 have made it quite successful today. Perhaps AMD badly overestimated the performance of BD, underestimated the eventual performance of K10 and, while also developing both Bobcat and Llano at the same time, did not have the resources left to develop an upgraded Thuban once BD's performance became known?
 
I still don't understand how in a company of so many smart people, the majority voted to have a slow IPC 6-8 core CPU in 2011 while fully aware that very few programs actually scale(d) to 6-8 threads

Your first mistake is thinking a company is a democracy and people got to vote on this.

Second, BD would have been designed under the rule of Hector Ruiz, along with the current version of AMD video cards. At this time AMD was cutting costs, therefore transistor budgets.

Just as with the current generation of AMD video cards, it was all about die size at the time (and still is).
 
You may well remember, 4 yrs ago when Phenom was finally released after much "native true quadcore" hype, the collective response was "ugh, why, oh why, AMD? why didn't you just MCM a couple Athlon X2's if this was all you had planned to roll out after 4yrs of R&D".

Just because Phenom(Agena) was a FLOP of a CPU, doesn't mean Bulldozer will be one, or that AMD will repeat the same mistake.




i hope 😉
 
I wonder when AMD made the decision to go with Bulldozer. Maybe it was during the early days of K10 where they got disenchanted with it and the paper advantages of BD looked compelling.

However, continuous improvements to K10 have made it quite successful today. Perhaps AMD badly overestimated the performance of BD, underestimated the eventual performance of K10 and, while also developing both Bobcat and Llano at the same time, did not have the resources left to develop an upgraded Thuban once BD's performance became known?

They improved Llano's IPC about 5-6% in a short period of time. Even if they just added 10% IPC to Phenom II X6 and shifted it to 32nm @ 4.2ghz clock speeds, it would have been way faster (or so it appears based on these leaks).
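That hypothetical can be put in numbers. A sketch only: the 3.3ghz base clock is my assumed reference point for the top X6 part (it isn't stated in the post), and linear scaling with clock and IPC is an optimistic simplification:

```python
# Hypothetical: Phenom II X6 with +10% IPC, re-clocked to 4.2 GHz on 32nm.
# base_clock = 3.3 GHz is an assumed reference for the fastest X6 SKU;
# linear scaling with clock and IPC is an optimistic simplification.
base_clock = 3.3
target_clock = 4.2
ipc_gain = 1.10

uplift = (target_clock / base_clock) * ipc_gain
print(f"~{(uplift - 1) * 100:.0f}% over the existing X6")  # ~40%
```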

When you have PhDs from Stanford, MIT, Waterloo, Berkeley, etc. collectively working on a new high-level architectural design, how do you end up with a situation where an 8 core CPU is slower than your 2.5 year old 6 core CPU? 😵.
 
Your first mistake is thinking a company is a democracy and people got to vote on this.

I didn't literally mean "voted" in a ballot sense. The majority of high level management/engineers chose to go with this direction. Which means a certain consensus was reached across a group of these people. It's likely that some people strongly disagreed with this direction, but the number of those was fewer.

Second, BD would have been designed under the rule of Hector Ruiz, along with the current version of AMD video cards. At this time AMD was cutting costs, therefore transistor budgets.

But cutting costs is not a sound strategy if it also results in your revenue collapsing and profit margins falling if you can't sell your CPUs at reasonably high average selling prices (since they are uncompetitive). Based on rumored prices, BD should be faster than X6 CPUs. Perhaps the poor benchmarks are limited to Cinebench.
 
This does not look good for AMD.

Remember when AMD marketing released that picture of a Bulldozer running over a sand-bridge... it must have been Valentine's Day?

This is just too much if true....
 
First AMD said it'd be out in Q2, then at the end of the quarter they said they'd be available in 60-90 days. Then, around 90 days later, they're saying in Q4. That should tell you something: it doesn't look pretty.
Back then, the B0 stepping leaks came out (there's one in particular that IMHO is legit; the others I dismissed). Then, when news of the delays broke, it became obvious. Just hope AMD still keeps that Q4 date (or the earlier the better, this month perhaps). :thumbsup:

So is this FX-8130P or FX-8120P?
It's an FX-8120 ES; the CPU-Z readout is messed up a bit (possibly due to CPUIDs not being finalized yet). :hmm:

I just hope the cpu engineers stay the hell away from the graphic card engineers...
AMD has different design teams for each project. 🙂

You may well remember, 4 yrs ago when Phenom was finally released after much "native true quadcore" hype, the collective response was "ugh, why, oh why, AMD? why didn't you just MCM a couple Athlon X2's if this was all you had planned to roll out after 4yrs of R&D".
MCM'ed Athlon X2s may not have worked, as it could have ended up like the Quad FX situation. The MCM'ed package may even have had a very high TDP, sacrificed one memory channel per chip, suffered higher RAM latencies over HT links, needed an optimal NUMA configuration found, etc. It's not an easy decision. And your "prediction" of an x86 version of Niagara may be realized. 😉
 
I didn't literally mean "voted" in a ballot sense. The majority of high level management/engineers chose to go with this direction. Which means a certain consensus was reached across a group of these people. It's likely that some people strongly disagreed with this direction, but the number of those was fewer.

How to lose your job: say no to the boss. If it's what the CEO wants, it's what you do.


But cutting costs is not a sound strategy if it also results in your revenue collapsing and profit margins falling if you can't sell your CPUs at reasonably high average selling prices (since they are uncompetitive). Based on rumored prices, BD should be faster than X6 CPUs. Perhaps the poor benchmarks are limited to Cinebench.

I agree, I never said cutting costs was a smart idea, it's just the conditions AMD was operating under at the time BD was conceived.

When I'm pushed for cost cutting measures that will affect the bottom line in the future, I always use the oil change example. I can save money immediately by not changing the oil in my car. After a while that strategy actually turns out to be very expensive in the long run.
 
They improved Llano's IPC about 5-6% in a short period of time. Even if they just added 10% IPC to Phenom II X6 and shifted it to 32nm @ 4.2ghz clock speeds, it would have been way faster (or so it appears based on these leaks).

When you have PhDs from Stanford, MIT, Waterloo, Berkeley, etc. collectively working on a new high-level architectural design, how do you end up with a situation where an 8 core CPU is slower than your 2.5 year old 6 core CPU? 😵.

Some dumbass engineer convinced AMD that sticking with the same number of FPUs while piling on integer resources would do them any good. JFAMD stated way back that AMD believed 90% of desktop user workloads were integer based.

Bulldozer is not an 8 core chip in any sense of the word. It's a quad core, which is why it performs like one. It's looking like the Pentium 4 all over again: sell them on cores and/or frequency and screw them on performance.
 
Just because Phenom(Agena) was a FLOP of a CPU, doesn't mean Bulldozer will be one, or that AMD will repeat the same mistake.




i hope 😉

Hey! I run a Phenom (Agena) 9850 clocked at 3.1 GHz to this day and can't complain one bit. 😎
 
AMD needs to go back to its old ways of solid engineering of balanced general-purpose processors, with a little help from Alpha magic and IBM process technology. Apparently they thought that they could best Intel with some FPU sharing tricks across "modules" and die size savings and the result is, by all accounts, a disaster.
 
The BD is not just a CPU. Isn't it a given that BD would be slower than SB since AMD dedicated a lot more die area for graphics, unlike SB?
 
When you have PhDs from Stanford, MIT, Waterloo, Berkeley, etc. collectively working on a new high-level architectural design, how do you end up with a situation where an 8 core CPU is slower than your 2.5 year old 6 core CPU? 😵.

You can have all the collective brilliance of Nobel laureates in physics, chemistry and mathematics at your command, but if you suck at leadership and vision then they will surely flounder under your misguided directives all the same.

What made MS different from IBM? Or Intel different from AMD in the late 1970's and early 1980's? What made Nvidia different from Matrox?

Leadership.

And in Jan of this year, for some very good reason that's never been made all that clear to its shareholders, AMD's Board of Directors elected to dismiss the leadership that gave rise to Bulldozer.
 
The BD is not just a CPU. Isn't it a given that BD would be slower than SB since AMD dedicated a lot more die area for graphics, unlike SB?

BD doesn't have on die graphics. That's Trinity and Komodo.
 