
AMD announces two new FX processors

More nonsense from you? When you quoted me I was talking about an overclocked Bulldozer CPU compared to an overclocked Sandy Bridge CPU. If you actually look at the link, the difference in power consumption is massive. 270 additional watts for worse overall performance than the 2500K or 2600K is beyond silly. Your ignorant comment about my GTX 570 is irrelevant. My card performs close to the 6970 while costing me 100 bucks less, and only uses about 20 watts more than a 6970. Next time make a better effort to troll. 🙄

Huh, you were comparing the power usage of an overclocked Bulldozer, but when we switch to your video card you only want to talk about stock speeds.

That's funny, I wonder why?

http://www.legitreviews.com/article/1763/13/

394 watts. I guess you might even call that power consumption massive.
 
Yes. I have a slide-out keyboard drawer that blocks the heat from rising. So it gets really toasty.

How about you open your eyes instead of rolling them. You might learn something.

Pool- a small and rather deep body of usually fresh water

Poor Phynaz. I'd hate it if it got so hot that a small and rather deep body of usually fresh water of heat formed at my desk. Luckily my FX-8120 runs a lot cooler than his PC, despite the reviews, because while it does get a little warm under load it's never formed a small and rather deep body of usually fresh water of heat.

If you're going to be a jerk, this is the wrong place; try P&N.
-ViRGE
 
Huh, you were comparing the power usage of an overclocked Bulldozer, but when we switch to your video card you only want to talk about stock speeds.

That's funny, I wonder why?

http://www.legitreviews.com/article/1763/13/

394 watts. I guess you might even call that power consumption massive.
Wow, you are even more ignorant than I originally thought. That GTX 570 system using 394 watts in FurMark is overclocked, while the 6970 is not. Of course it will use a ton of watts like that. The 6970 they used throttled itself so as not to pull a ton of power in FurMark. A GTX 570 uses 21 more watts than the 6970 in Crysis. 🙄 http://www.anandtech.com/show/4061/amds-radeon-hd-6970-radeon-hd-6950/24

Again, I was replying to that guy about TDP and he had his Phenom OCed, so I mentioned why many people do care. Since you are really thick, let me break it down for you again: many people on here care about power consumption if it goes through the roof when overclocking, like it does with Bulldozer. At stock speeds it's not really an issue.
 
At stock speeds it's not really a huge issue for Bulldozer. The guy I was replying to was talking about TDP and his Phenom being overclocked, so I mentioned the difference between an overclocked Bulldozer and Sandy Bridge. But hey, if you think 270 more watts for worse overall performance is no big deal, then go for it.
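For anyone weighing that 270 W delta, it's easy to put a rough dollar figure on it. A minimal sketch, assuming (my numbers, not from the thread) four hours a day at full load and $0.12 per kWh:

```python
# Rough annual cost of an extra 270 W under load.
# Assumptions (not from the thread): 4 hours/day at full load, $0.12 per kWh.
extra_watts = 270
hours_per_day = 4
price_per_kwh = 0.12

kwh_per_year = extra_watts / 1000 * hours_per_day * 365
cost_per_year = kwh_per_year * price_per_kwh
print(f"{kwh_per_year:.0f} kWh/year, about ${cost_per_year:.2f}/year")
```

With those assumed numbers it works out to roughly 394 kWh, or about $47 a year; scale the hours and rate to your own situation.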

I'm not saying that and I'd appreciate it if you wouldn't imply that I am.
 
Toyota, Chiropteran, Vic Vega; you are all hereby banned from this thread.
-ViRGE
 
Wow, you are even more ignorant than I originally thought. So they disabled the power-saving features and ran FurMark on it? Of course it will use a ton of watts like that. A GTX 570 uses 21 more watts than the 6970 in Crysis. 🙄

Again, I was replying to that guy about TDP and he had his Phenom OCed, so I mentioned why many people do care. Since you are really thick, let me break it down for you again: many people on here care about power consumption if it goes through the roof when overclocking, like it does with Bulldozer. At stock speeds it's not really an issue.

Regarding the content of your post...

Do you understand the concept of an apples to apples comparison? Furmark is an unrealistic situation, yes. Guess what else is an unrealistic situation? Prime95 stressing all cores.

If you want to argue that your GTX 570 actually uses a lot less power in a game like Crysis, it would help your case to show how much power the Bulldozer CPU uses in Crysis, as opposed to an artificial benchmark that loads all 8 cores.

edit: bleh, I see I am "banned" from this thread, however I was in the middle of typing and hit submit before seeing the message. Oops, too late.
 
Personally I'm glad they're releasing higher-clocked parts, even if it is 125 W. OCing results in a big increase in power consumption for BD, though, which sucks. I hope they get it more under control when they release Piledriver.
 
If I'm not mistaken, PD will only double the ITLB from 32 entries to 64 entries.
Both the L1 Icache (64 KB) and L1 Dcache (16 KB) will remain the same as in BD.
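For context on what doubling the ITLB buys: the "reach" of an instruction TLB is just entries × page size, so assuming the common 4 KB pages (my assumption, not stated in the post), going from 32 to 64 entries doubles the miss-free code footprint from 128 KB to 256 KB:

```python
# Instruction-TLB "reach": the code footprint covered without an ITLB miss.
# Page size assumed to be the common 4 KB; entry counts from the post above.
PAGE_SIZE_KB = 4

for name, entries in [("Bulldozer L1 ITLB", 32), ("Piledriver L1 ITLB", 64)]:
    print(f"{name}: {entries} entries -> {entries * PAGE_SIZE_KB} KB of reach")
```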
 
I care because I don't like sitting sweating in a pool of heat. And I also don't like paying extra for all the additional power.

I suppose that the Willamette/Northwood Xeons' heat was safer and cooler to your liking than the FX BD one; you may even add that it was useful in winter, hence the extra bill was no problem...

Phynaz, although you have some clues about the market's needs, and even though power usage is an important feature, you are not credible on this one..
 
If I'm not mistaken, PD will only double the ITLB from 32 entries to 64 entries.
Both the L1 Icache (64 KB) and L1 Dcache (16 KB) will remain the same as in BD.
Any references?

BTW, I find calling Piledriver "PD" a little bit strange.. I mean, I was thinking about Pentium D when I first read your post.
 
If I'm not mistaken, PD will only double the ITLB from 32 entries to 64 entries.
Both the L1 Icache (64 KB) and L1 Dcache (16 KB) will remain the same as in BD.

more than that...

a 10% deeper FPU load queue
the front-end's branch predictor has 5 SRAMs (1 more than BD)
new turbo
clock mesh
 
If AMD is able to raise clocks within the same TDP then it is a win.. mind you, this is not a new revision, nor has any newer technology such as resonant clocking been used in this..

All in all, a price cut and better clocks.. AMD is regaining the value-for-money (VFM) tag again
 
For those of us with AM3+ boards who don't wish to dump everything to go Intel, hopefully PD will be worthy. For my day to day stuff a lowly Llano chip would suffice. What I'm looking for is an upgrade for games. BD sucks badly there. I really hope PD improves on that.

Are any games set to use AVX? That's touted as a strong point of BD, but is it actually usable?

As an aside, when reading CPU reviews I disregarded tests like Monte Carlo as something I would never deal with. However, just recently my wife, who is getting a PhD, started using Monte Carlo for some of her models. I guess the Monte Carlo benchmark is a valid test of CPU usage, at least in my house.
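For anyone wondering what a Monte Carlo workload actually looks like, the classic textbook example is estimating pi by random sampling: it's pure arithmetic in a tight loop, which is exactly why reviewers use this class of workload as a CPU test. A minimal sketch (just the idea, not the benchmark any review site runs):

```python
import random

def estimate_pi(samples: int, seed: int = 42) -> float:
    """Estimate pi by sampling random points in the unit square and
    counting how many land inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

print(estimate_pi(200_000))  # approaches pi as the sample count grows
```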
 
IF they can improve L1 Write Latency (hello Write-Back cache, we've missed you), eliminate the WCC altogether
Unlikely to happen soon, IMO. They chose this design for a reason: the combination of the write-through L1 with the WCC simplifies the design of the overall CPU, as two cores appear as a single entity with regard to cache coherence. For cache writes, the WCC acts like a single core towards the rest of the CPU. A change of the cache structure to a "normal" design would complicate cache snooping and uncore design. So it's not even clear if this change is desirable, if the additional complexity would result in worse multi-threaded scaling. AMD's CPUs do not scale as well as Intel's, correct?
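To illustrate what a coalescing buffer buys a write-through L1, here's a deliberately simplified toy model (my own sketch, not a model of Bulldozer's actual cache) that counts how many stores would reach the next cache level with and without a small write-coalescing cache in front of it:

```python
# Toy comparison: stores reaching the next cache level under two L1 policies.
# Purely illustrative -- not a model of Bulldozer's actual hardware.

def write_through(addresses):
    """Pure write-through: every store is forwarded downstream immediately."""
    return len(addresses)

def write_through_with_wcc(addresses, wcc_lines=64, line=64):
    """A small coalescing buffer absorbs repeated stores to the same line."""
    forwarded, buffer = 0, set()
    for addr in addresses:
        tag = addr // line
        if tag not in buffer:
            if len(buffer) >= wcc_lines:   # evict: one write goes downstream
                buffer.pop()
                forwarded += 1
            buffer.add(tag)
    return forwarded + len(buffer)         # flush remaining lines at the end

# A loop hammering one hot cache line coalesces almost entirely:
hot_loop = [(i % 8) * 8 for i in range(10_000)]  # 10k stores, one 64 B line
print(write_through(hot_loop), write_through_with_wcc(hot_loop))
```

In this toy case the plain write-through policy sends all 10,000 stores downstream while the coalescing version sends one, which is the kind of traffic reduction that makes the WT+WCC combination tolerable.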

I am going to hope they pull a Phenom II out of FX2. That is a brighter future to consider, IMHO, than the one where they just give up.
I think they did? "It's not gonna be AMD vs Intel anymore". Sounds pretty clear to me. Which is actually smart; I think AMD will be able to thrive on cheap mass-market products, but has no chance to go against Intel in the performance race anymore.

Also, PD is supposed to have up to 15% higher IPC performance.
Source? The only AMD statements with "15%" I know of say that
a.) perf/watt will annually increase by 15% for the next three or so years
and
b.) Trinity will be 15% faster than Llano on x86 code.

Both statements are completely believable. They do not indicate any significant IPC increase, if only for the fact that Llano tops out at 2.9 GHz and is generally competing with the i3-2100. Being 15% better than that should be achievable, even for AMD.
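Worth noting that a 15% annual perf/watt gain compounds; a quick check (plain arithmetic, no claims about actual products):

```python
# Compounding a 15% annual perf/watt improvement over three years.
annual_gain = 1.15
for years in range(1, 4):
    print(f"after {years} year(s): {annual_gain ** years:.2f}x perf/watt")
```

Three years of 15% gains multiply out to roughly 1.52x, not 1.45x, so the cumulative promise is a bit bigger than it first sounds.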
 
[Image: SiSoft Sandra multimedia AVX benchmark chart]

Games are generally FP heavy, so you should look more closely at the FPU performance, which isn't as great as the integer side. Four 256-bit FPUs compared to eight separate integer units makes for =( Hoping they've wised up and offer more than 4 FPUs come Piledriver
 
I suppose that the Willamette/Northwood Xeons' heat was safer and cooler to your liking than the FX BD one; you may even add that it was useful in winter, hence the extra bill was no problem...

Phynaz, although you have some clues about the market's needs, and even though power usage is an important feature, you are not credible on this one..

What in the world are you talking about?

Quit being a fanboy for a minute and review my posts. You'll see I run AMD video cards because they consume less power.
 
Unlikely to happen soon, IMO. They chose this design for a reason: the combination of the write-through L1 with the WCC simplifies the design of the overall CPU, as two cores appear as a single entity with regard to cache coherence. For cache writes, the WCC acts like a single core towards the rest of the CPU. A change of the cache structure to a "normal" design would complicate cache snooping and uncore design. So it's not even clear if this change is desirable, if the additional complexity would result in worse multi-threaded scaling. AMD's CPUs do not scale as well as Intel's, correct?

Yeah, cache coherency is going to be the issue there, but you'd think that would be better than making the L1 cache essentially read-only. Simple is great until simple is stupid slow and hamstrings all sorts of scenarios, including starving your FPU.

From what I have read, and AtenRa did a nice write-up on this, CMT scales very, very well. Nice and linear. Better than Intel with HT; not sure about pure core scaling.
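How "linear" any multi-core scaling can be is bounded by the serial fraction of the workload, which Amdahl's law makes concrete. A quick sketch for an 8-core part, with assumed (illustrative) parallel fractions:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Upper bound on speedup when only part of the work parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Illustrative parallel fractions -- not measured CMT numbers.
for p in (0.90, 0.95, 0.99):
    print(f"p={p:.2f}: 8 cores -> {amdahl_speedup(p, 8):.2f}x")
```

Even with 99% of the work parallel, 8 cores top out around 7.5x, so "nice and linear" really means the benchmarks in question have very little serial work.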

Ultimately, "fixing" any one aspect will undoubtedly reveal other issues. The L1 cache performance is just glaring, IMHO, and has to be a priority from an engineering and transistor-budget standpoint.

Not a managing engineer at AMD though, so what do I know? 😛 Not much, admittedly. My BA in CS only touched on this during one class for one semester...
 
[Image: SiSoft Sandra multimedia AVX benchmark chart]

Games are generally FP heavy, so you should look more closely at the FPU performance, which isn't as great as the integer side. Four 256-bit FPUs compared to eight separate integer units makes for =( Hoping they've wised up and offer more than 4 FPUs come Piledriver


You consider this graph representative of these CPUs' FP performance???..

Here's the link; people can judge for themselves whether BD's FPU is good enough.....

http://www.tomshardware.co.uk/fx-8150-zambezi-bulldozer-990fx,review-32295-14.html
 