
AMD QUAD 9850BE VS INTEL QUAD Q9650 FASTER? FLOPS!!


covert24

Golden Member
Feb 24, 2006
1,809
1
76
low post count doesn't directly correlate with insufficient knowledge, usually.

And when my FPS in COD4 becomes based on FLOPS, I will then, and only then, switch over to AMD...
 

magreen

Golden Member
Dec 27, 2006
1,309
1
81
Originally posted by: piesquared
Really, I don't think there is any greater testimony to this having real merit in the real world than the quick response from the viral marketers and the "Intel retail edge" crowd, attempting to make these numbers insignificant. The numbers speak for themselves, nothing more, nothing less. It is the Intel camp turning this into something it isn't. To those I say relax, go take your SuperPi for a few laps around the track to make yourselves feel better. Those that are interested in FLOPS, which by the way DOES have real world implications, may find this interesting.

On a side note, I wonder where companies like SiSoft earn revenue? From all the benchmarkers on the web that run these programs who pony up barrels full of cash? Or maybe from the ten or so websites that use them in their reviews? Maybe it's part of the professional's criteria in determining which processor best suits their needs? I don't think it's open source? Obviously the insignificant amount of revenue generated from such benchmarks as Everest, SiSandra, and the like is not enough to warrant development of complex benchmarking programs. It doesn't take a genius to figure that out. Hint: The PCMark controversy a month or so back should be a pretty good clue.
So, the OP's compiler statement holds a fair bit of merit. When programs that do the same task vary so wildly in performance between Intel and AMD, it is irresponsible not to take notice.

I agree. It was multiple shooters that killed JFK, aided by a UFO and the Israeli Mossad after Kennedy tried to expose their nuclear research center.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
21,073
3,576
126
Originally posted by: covert24
low post count doesn't directly correlate with insufficient knowledge, usually.

Fact. However, look at how they treated me over at EOCF when I visited there at a friend's request regarding H2O cooling.

LOL

It doesn't mean you don't have knowledge, but it's stereotyped as hell to represent it.

:T
 

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
Instead of berating the poster, the question we should be asking is why common code can't take advantage of this. I have always thought, and always will, that game companies force coders to make a quick attempt at something and call it a day.

I bet that if AMD paid a few developers to optimize for AMD, while still using the same structure we use now for Intel, AMD would be in the lead. The same can be said for Intel, but I bet they already do this.
 

myocardia

Diamond Member
Jun 21, 2003
9,291
30
91
Originally posted by: piesquared
So, the OP's compiler statement holds a fair bit of merit. When programs that do the same task vary so wildly in performance between Intel and AMD, it is irresponsible not to take notice.

I would venture that when two processors run @ roughly the same speed, one of which clearly has a superior design on paper (that would be the X2 & Phenom, BTW), yet one of them performs somewhat poorly in almost every app, discounting all benchmarks, it would be irresponsible of us enthusiasts not to take notice. And yeah, FLOPS count for shit in the real world, where only performance matters.
 

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
I'll guarantee you FLOPS mean a hell of a lot more than the decade-old code used in SuperPi.
 

myocardia

Diamond Member
Jun 21, 2003
9,291
30
91
Originally posted by: piesquared
I'll guarantee you FLOPS mean a hell of a lot more than the decade-old code used in SuperPi.

Sure, but FLOPS still mean absolutely nothing. Performance in apps is all that matters, I can assure you.
 

magreen

Golden Member
Dec 27, 2006
1,309
1
81
Originally posted by: Idontcare
Originally posted by: magreen
LOL. Great thread, OP. Thanks for the post! Good for quite a few laughs.

Reminds me of these cheap little Chinese toys I got my 3-year-old son. They were plastic lookalikes of food -- little carrots, vegetables, the like.

After I bought it, I read the box. One of the slogans was "Great food. Pleasure every orifice." :roll:

Which begs the question...did you follow the directions on the box?

And what was your consumer review - satisfied or dissatisfied with the product's performance?

I'll have to ask my son. They were his veggie toys ;)

I may still have them. Shall I put them up on FS/FT?
 

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
Originally posted by: myocardia

Sure, but FLOPS still mean absolutely nothing. Performance in apps is all that matters, I can assure you.

I don't agree. The masses were pumped up about benchmarks like SuperPi, and many used it as a metric to compare AMD and Intel performance. In fact, it is still used as part of the performance testing suite of many reviews. As for real world performance, as I said, it depends on the software. There are suites that do the same task and are wildly variable. Hence it's not the apps that matter, but the coding associated with them.

[edit]

I forgot to mention earlier the controversy surrounding Intel's Cinebench R10 benchmarks on Nehalem, in which they apparently switched to the R11 render engine. My god, how much will Intel be allowed to get away with? It surprises even me, the lengths Intel will go to deceive and manipulate their user base.



 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Originally posted by: piesquared
On a side note, I wonder where companies like SiSoft earn revenue?
From stupid people who think synthetic benchmark scores matter? I've said before that I think it's irresponsible of review sites to post those results.

Originally posted by: Zstream
Instead of berating the poster, the question we should be asking is why common code can't take advantage of this. I have always thought, and always will, that game companies force coders to make a quick attempt at something and call it a day.

I bet that if AMD paid a few developers to optimize for AMD, while still using the same structure we use now for Intel, AMD would be in the lead. The same can be said for Intel, but I bet they already do this.

It's just not that simple. Real programs do a lot more than straight arithmetic - for example, traversing a data structure (e.g. searching a linked list or a tree). Even when they're doing arithmetic, they may be using integers rather than floating point numbers, and FLOPS is only a measure of floating point performance. Slow JavaScript web pages are nearly 100% integer code. You could probably run gmail about as fast on a CPU with no floating point unit as you could on one with a high-end FPU.

Things in real programs that keep them from hitting theoretical FP throughput:
1) integer operations
2) dependent chains of instructions (no available parallelism)
3) memory access
4) branch prediction (every time a branch is reached in a program, a processor has to guess whether to follow the branch or not, and it's hard to guess right more than ~95% of the time; about 1 in 5 instructions is a branch, so even with a good predictor you still make a lot of mistakes and have to fix them up)
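Point 2 in particular is easy to see in code. The two loops below compute the same sum, but the first forms one long dependency chain while the second exposes parallelism a superscalar CPU can exploit (a sketch in Python purely for readability — the actual speedup only shows up in compiled code on real hardware):

```python
def dependent_sum(xs):
    # One long dependency chain: every add needs the previous result,
    # so the CPU cannot overlap the floating-point additions.
    total = 0.0
    for x in xs:
        total += x
    return total

def independent_sums(xs):
    # Four independent accumulators: the adds within one iteration do
    # not depend on each other, so an out-of-order CPU can keep several
    # FP additions in flight at once. Assumes len(xs) is a multiple of
    # 4, purely to keep the sketch short.
    a = b = c = d = 0.0
    for i in range(0, len(xs), 4):
        a += xs[i]
        b += xs[i + 1]
        c += xs[i + 2]
        d += xs[i + 3]
    return a + b + c + d
```

Both return the same value; the only difference is how much instruction-level parallelism they expose.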

The SPEC benchmarks have had people (grad students, PhDs) optimizing them for years and still they don't reach the theoretical MFLOPS numbers. It's not just lazy programmers.

Discussing this further is a waste of time unless you understand programming. Do you know how to write a binary tree? Do you know how to sort numbers? (I'm not asking about using a library that does these things for you - I'm asking if you could do them from scratch).
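For anyone wanting to try that exercise, here is roughly what "from scratch" means — a binary search tree and a sort with no library help (a minimal Python sketch; the names and structure are illustrative, not from any post in this thread). Note that the work is almost entirely comparisons and pointer chasing, i.e. integer operations, which is exactly why FLOPS tells you nothing about it:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    # Walk down the tree and hang the new key off the right spot.
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def contains(root, key):
    # Pure pointer chasing and comparisons -- no floating point at all.
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

def quicksort(xs):
    # Simple (non in-place) quicksort: partition around a pivot, recurse.
    if len(xs) <= 1:
        return xs
    pivot = xs[len(xs) // 2]
    return (quicksort([x for x in xs if x < pivot])
            + [x for x in xs if x == pivot]
            + quicksort([x for x in xs if x > pivot]))
```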

Originally posted by: myocardia
Originally posted by: piesquared
So, the OP's compiler statement holds a fair bit of merit. When programs that do the same task vary so wildly in performance between Intel and AMD, it is irresponsible not to take notice.

I would venture that when two processors run @ roughly the same speed, one of which clearly has a superior design on paper (that would be the X2 & Phenom, BTW), yet one of them performs somewhat poorly in almost every app, discounting all benchmarks, it would be irresponsible of us enthusiasts not to take notice. And yeah, FLOPS count for shit in the real world, where only performance matters.

(I assume you're comparing C2D vs Phenom) Which part of Phenom is better on paper? The load-store unit (Intel reorders loads past stores speculatively; as far as I know, Phenom will only let a load bypass a store if the store's address is ready)? The branch predictor? The L2 cache size? The narrower peak decoder throughput? ;)

Try comparing Via's Nano to Phenom on paper.

Originally posted by: piesquared
I'll guarantee you FLOPS mean a hell of a lot more than the decade old code used in SuperPi.

As far as I can tell, nobody in this thread was stupid enough to suggest SuperPI actually matters.
 

Dadofamunky

Platinum Member
Jan 4, 2005
2,184
0
0
Originally posted by: BLaber
Run from here lol, I see all the Intel zombies trying to eat gogothing7 alive

Well, given that he can't communicate effectively (at least not in English; maybe he can in Korean), ummm, YEAH, I guess that's true. But since buying Intel doesn't make any of us a zombie, what's your excuse?

This thread needs to be locked and its OP banned.
 

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
You know damn well that the majority of people cannot code a decent binary tree structure. I do know we have alternatives/algorithms to deal with large binary trees, but this is moot if the tree can fit in memory, which can happen but rarely does. I do know how to solve database and server problems, as that is my specialty.

B/M/X trees and the many other tree structures help. The problem is that we do not deploy these algorithms properly, such as using an M-tree for non-data storage, which is a big no-no.

Oh and I guarantee you that at least 90% of software is not optimized for any given CPU.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Zstream
Oh and I guarantee you that at least 90% of software is not optimized for any given CPU.

It doesn't help us suspicious and paranoid dolts when that other 10% of the software which is CPU optimized turns out to be synthetic benchmark software...;)

But being faster at integer rather than floating point has always (to my knowledge) been the necessary facet of a desktop x86 processor.

Remember those Cyrix chips? The PR number was based on performance in benchmarks heavily biased towards integer. The same was true for the WinChip. They all sucked ass when it came to their FPUs. The K6 had this same dichotomy IIRC: strong integer capability to garner good desktop performance against equivalently clocked Pentiums, but a lackluster FPU.