
Donanimhaber FX8150 Video Review/Benchmarks!

Status
Not open for further replies.
aigomorla. Wow. Just wow. Asserting that you are correct, then basing those assertions on "why, therefore" defenses.

I think your words are wrong. You may end up being right in the long run, but coming here blasting AMD for a product a lot of people will enjoy and praise on merits other than gaming and how fast you can steal backup a movie, and doing it by asking us to make assumptions based on no official word, is weak.
 
Based on these observations, I'd say Bulldozer supports AVX-256 just for compatibility's sake, but it is probably better (TBC) not to enable AVX-256 for Bulldozer targets. It gives a refreshing new perspective on the issue of the Intel compiler enabling SSEx optimizations only on Intel CPUs, since in this case it may well be a *legitimate optimization to disable AVX-256 for Bulldozer*, i.e. to not only rely on the feature flags but also look at the manufacturer string ("GenuineIntel", "AuthenticAMD").

Not sure, but it seems to me that AMD does recommend using AVX-128 in preference to AVX-256 for their Bulldozer CPUs, and possibly Intel even recommended optimizing with 128-bit rather than 256-bit AVX for their SB line.
 
The only way I see Bulldozer succeeding (if the benchmarks are true) with this revision is with a multi-processor unlocked version for < $200. I'd actually buy/build a dual-socket rig if a M/B manufacturer would make a cheap board (figure $550 for a dual-socket M/B + 2 procs), à la the ABIT BP6 + dual PIII/Celeron situation. It would then compete with Xeon and hexacore desktop chips. But since it would hurt Opteron, I'm not hopeful.
 
It's definitely not that bad!

There's about a 300 MHz performance deficit.

For example, the leaked 3.6 GHz FX-8150 scored about the same as a 3.3 GHz 1100T on Cinebench 11.5.

And in our testing of the 3.1 GHz FX-8120, it scored about the same as a 2.8 GHz 1055T.

Still, that IS bad if you consider these things are supposed to be 8 cores, not 6. They're comparable to existing 6-cores.
 
It's worse if you compare the same number of cores. There were Fritz benchmarks for the FX-4170 (4.2GHz base, 4.3GHz turbo) for example that showed it getting 15.28x. For comparison, my X4 955 at 3.2GHz gets 15.03x. Just one benchmark, but the others seemed to be showing the same thing, per core and per clock Bulldozer was significantly behind Phenom II.
 
You said BD is slower than an X6 even with 50% higher clocks. Are you still saying this? Wait, you actually said twice the clock speed.

Twice the clock speed is 100% higher clocked....

With such claims, I wouldn't dare to post anymore for many years, if not forever, yet the guy still insists..

Amazing, really...
 
Where? Do you have a link to provide?

On Fri, Feb 11, 2011 at 1:46 AM, Richard Guenther <rguenther@suse.de> wrote:
>> Attached is the patch to force gcc to generate 128-bit avx instructions for bdver1. We found that for
>> the current Bulldozer processors, AVX128 performs better than AVX256. For example, AVX128 is 3%
>> faster than AVX256 on CFP2006, and 2~3% faster than AVX256 on polyhedron.
>>
>> As a result, we prefer gcc 4.6 to generate 128-bit avx instructions only (for bdver1)


http://patchwork.ozlabs.org/patch/82705/
 
The thing is, it doesn't have 8 full "cores"; it's got 8 integer units and 4 floating-point units, so it's effectively bottlenecked by the 4 FPUs.

It seems like a dumb design to me. I don't know what AMD was thinking.

I'm not upgrading my system until I can get 8 true cores running at 5 GHz or higher.

I'm not surprised to see 6 true cores essentially matching Bulldozer at this point. It's going to require specialized software to fully exploit this new CPU, and seeing as AMD is making it, the chances of that actually happening are slim to none.
 
WTB benchmarks, post release, from a reputable site plz thx. Amazing how people get so bent outta shape over hype, and speculation.
 

http://www.3dcenter.org/news/2011-09-22

AMD is smoking potent crack if they're selling the FX-6100 at $175, since the 2500K is guaranteed to be much faster for just $40 more. The woeful FX-4170 is only worth $100 tops.

Funny how AMD wanted to increase their ASPs with BD, only to be stuck at bargain-basement pricing again.
 
Hat Monster at Ars Technica got to play with a review sample.

Got to play with a review sample. Not mine. I'm not NDAed.

The figures above posted by w00key do seem legit, but they're worst cases. In most games, BD trades places with a 2500K; in my own Fallout New Vegas testing, it beat my Phenom II X4 (3.7 GHz) by around 20%. The 2500K beats my Phenom II by the same 20%.

It's hilariously overclockable. Jury-rigging the biggest heatsink I could find, a ~7-year-old Coolermaster Hyper6+ (which doesn't fit AM2/3), hacked onto a pretty cheap Gigabyte GA990XA-UD3, I got 4.85 GHz out of it. At that kind of clock, the 2500K was looking for its parents with tears in its eyes, losing out on single-threaded benchmarks by 10%, and multithreaded.... well, it wasn't really funny anymore.

Looks reasonably good to me 😎; then again, I'll care mostly about its Skyrim performance, and I kinda expect this to be reflected in the Fallout New Vegas output.
 
LAB501 is a great review site, definitely not a site that does rubbish. There's no point clinging to false hope. BD is pure fail.

Considering there have been pretty much no chips handled by anyone outside of engineering samples... I stand by my previous post. I didn't say the site wasn't reputable. I STILL want post-release consumer chips reviewed by legit sites.
 
IF the results we are seeing are truly indicative, then AMD really chose the wrong marketing approach--for informed users anyway. Obviously the idea of a module is new to most of us, so perhaps they didn't know how to market it that way.

However, their choice to market it by the number of cores is ill advised if the 8-core CPU can just barely match their previous 6-core CPU. That's a hard sell to enthusiasts. Of course it does have more headroom through overclocking, but not everyone overclocks.

Then again, the majority of people who don't overclock are average users--the majority of consumers actually. Those same people are the ones who will see "8 cores unlocked" and think they have a beast. So perhaps they are marketing it correctly, for the mass market anyway. When I consider it that way, I can see AMD selling a lot of them, especially to the average Joe since those people buy way more hardware than all of the enthusiasts combined. We might buy more hardware per year, but there are simply more of those kinds of buyers than us.

But informed users who decide they just have to stick with AMD, will probably just go for the cheaper X6's. Of course I suspect AMD will quickly phase out the X6. They will have to if they plan to sell any of the BD's to informed enthusiasts. Seriously, how well will the 6 core BD fare against the X6 if the 8 core is right with it in the leaked benches so far?

Then again, this is all dependent on whether or not the leaked performance is what we will end up with at launch. Perhaps AMD does still have a trick up their sleeve. The next few days should be interesting.

The more I think about it, BD should be a success. As long as OEMs are willing to put it in their machines, and do so in mass numbers, it will do fine. The OEMs know their customers, and I suspect most of them are of the mindset that more is better. With that in mind, AMD's choice to focus their marketing on the number of cores makes sense. Even with the 6-core BD, the average person will see that it has more cores than a quad core.
 

I need to see a comparison between a 2600K and the 81** both OC'd to around 4.5 GHz, running renders in apps like C4D, 3DMax, Blender, Modo, Softimage, etc. (any of the above will do). If BD is faster in these multi-threaded loads, it's what I need. Gaming is nice to see, as many of these apps' editors aren't that much different from game engines, although they are typically GPU-bound there.
 

Agree. I am in the same boat, as I am about ready to update my rendering farm again. I have waited to see how BD performs before making a decision. If BD doesn't keep up, then the decision I had already made, to go with 2600Ks, will be what I do.

Although, to be honest, I generally don't overclock my rendering machines; sort of like server workloads, I prefer lower or stock clocks and stability. The main workstation, yes, I will overclock, because doing test renders is often quicker on one machine than sending them over the network and waiting to inspect the image; it depends on how heavy the scene is.

But keeping stable overclocks on several machines at once is not something I like to do. Stability is key for me, as I sometimes sell rendering time to another small local studio if they are behind. I hate having to chase down corrupt frames, especially when the work isn't mine.

The machines render faster when overclocked, for sure, but I have had times in the past when, as they went to send their frames back across the network, something went wrong. I don't know if the overclock caused the NICs to have problems or what, but I have never noticed it when running everything at stock.

To be honest, that was a long time ago when I first encountered it, but since then I have always gone stock with the rendering machines in the farm. It may have been something inherent in those first machines many years ago and the way I had things set up, but ever since then I have just gone stock when upgrading.
 

What does this actually mean, if accurate?

x264 encoding
1080p
BD 4.85 12:33
2500K 3.8 18:56
PII X4 3.7 21:30

720p
BD 4.85 8:40
2500K 3.8 11:01
PII X4 3.7 15:15

Would have been nice to have BD results from 3.7 to 3.8 also
 

Where did you get these numbers? My 2500K gets 7.02 fps in x264 encoding with a 1080p test video. Certainly nowhere close to 3.8.
 
If that's the relative recoding performance, I'll think seriously about getting one.


Where did you get these numbers? My 2500K gets 7.02 fps in x264 encoding with a 1080p test video. Certainly nowhere close to 3.8.

I think those are GHz.

(Not to mention that different recode tests may result in more or fewer fps depending on various factors.)
 

Well, we have just one more day, because this is pretty confusing. We have people like MM and Hat who say it performs pretty decently, but then you see all these other benchmarks which cast it in such a dark light. I don't know who is being accurate.
 