AMD back in gear, Centurion FX


galego

Golden Member
Apr 10, 2013
1,091
0
0
You are forgetting that ultimately the people who buy the chips for productivity don't give a **** about that. All they care about is what gets the job done faster.

I'm not going to buy a chip that 'should' get the job done faster. I'm going to buy the chip that gets the job done faster and makes me more money.

Benchmarks must be relevant to what is being done with the chip. I don't care if chip A is 40% faster in some open-source bench if it's 30% slower in CS6 and that's how I make my money.

In the first place, you have ignored that I was replying to a question about benchmarks asked by frozentundra123456.

In the second place, you continue to ignore that I am already considering cases where the chip gets the job done faster.

In the third place, you ignore that most reviews use benchmarks to tell people which chip they should buy. If a review uses a biased benchmark, then those people are being cheated.
 

R0H1T

Platinum Member
Jan 12, 2013
2,583
164
106
And yet you continue to read and post here. Says a lot, doesn't it?
If I were to get offended by every little thing, especially on the netz, it'd be very hard for me to do anything meaningful. Not to mention that I look at the bigger picture, like how I said that the x86 market was shrinking and that the trend isn't reversible, "Intel or not" :\
 
Last edited:

Sleepingforest

Platinum Member
Nov 18, 2012
2,375
0
76
Then we should compare the 6-core Core i7 to the FX-8350 and the HD 7970 to the HD 4000. :rolleyes:

We would not compare a 7970 to an HD 4000. We would compare the top-end Trinity. The flagship LGA1155 chip is the i7-3770K. If we really wanted to compare top end, we would do Opteron to Xeon.

Regardless, I'm saying that we need consistency. If we want to argue on similarly priced parts, then we should always do so. If we want to argue TDP, we should always do so. And if we want to compare intended market, we should always do so. We shouldn't flip-flop.

AMD is reasonably powerful and certainly enough for mom and pop, but there is a reason why it costs less: it's typically worse.
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
Intel's compiler in Windows is the fastest for AMD chips.

If there was some magic and cheating, AMD should make a better compiler of its own. Yet they didn't, so AMD themselves don't seem to agree with you either. It's not Intel's job to do AMD's own job.

[Image: compiler.png]

Except that Intel acknowledges on its own website that its compiler produces sub-optimal code for non-Intel processors. They were obligated to introduce the disclaimer after the FTC settlement included a disclosure provision requiring Intel to:

publish clearly that its compiler discriminates against non-Intel processors (such as AMD's designs), not fully utilizing their features and producing inferior code.

I already mentioned two compilers which generate optimal code for both AMD and Intel chips.

As a final note, your figure comparing with PGI C++ made me laugh.
 
Last edited:

Abwx

Lifer
Apr 2, 2011
11,889
4,874
136
It is Intel's best vs AMD's best. That simple. Not too difficult to understand, huh? Seems like you're pissed off because the gap between IB and Trinity is smaller than what we previously saw between the fastest SB and the fastest Llano (mobile).

Yet, despite better CPU performance, both SB and IB laptops with IGPs will be outdated long before Llano and Trinity respectively. So much for the great Intel tech, particularly with the HD 4000, which gains fps by using interpolation; that is, only part of the frame is actually rendered at the claimed fps...
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
We would not compare a 7970 to an HD 4000. We would compare the top-end Trinity. The flagship LGA1155 chip is the i7-3770K. If we really wanted to compare top end, we would do Opteron to Xeon.

Regardless, I'm saying that we need consistency. If we want to argue on similarly priced parts, then we should always do so. If we want to argue TDP, we should always do so. And if we want to compare intended market, we should always do so. We shouldn't flip-flop.

AMD is reasonably powerful and certainly enough for mom and pop, but there is a reason why it costs less: it's typically worse.

Why do we compare the FX-8350 to the Core i5? Because of similar price.

Why do we compare the A10-5800K to the Core i3? Because of similar price.

Why do we compare the HD 7970 to the GTX 680? Because of similar price.

Why do we compare the HD 7790 to the GTX 650 Boost? Because of similar price.

Why do we compare the A10-4600M to a Core i7 with HD 4000? Because we compare flagships :rolleyes:
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
I already mentioned two compilers which generate optimal code for both AMD and Intel chips.

As a final note your figure comparing with pgi c++ did me laugh.

Are you saying Intel's compilers don't make the best code for AMD chips on the Windows platform?

And why doesn't AMD supply a good enough compiler for its chips? Don't they care? Seems like either you are wrong with your bogus myths or AMD is a retarded company.
 
Last edited:
Aug 11, 2008
10,451
642
126
Suddenly, because it suits their agenda, the Intel aficionados see no relevance in power consumption, an argument they relentlessly used to death to trash Bulldozer...

I never said that power does not matter; that's just a plain lie that you're putting in my mouth and then acting as if I could be countered with it...

I never said power consumption did not matter. It is you who are putting words in my mouth. All I said was that it is top of the line vs top of the line. As others have said, there is nothing preventing AMD from making a 45 W Trinity. Personally, it is a moot point to me because either is sufficient for normal use, and if I were interested in anything but very casual gaming, I would want a discrete card on the level of a GT 650M anyway.

Besides, why did you bring up this topic in the first place in a thread about a possible Centurion chip? It is not really relevant. All it does is show the paranoia of AMD fans who think everyone is against them.
 

cytg111

Lifer
Mar 17, 2008
26,256
15,666
136
That may have been the case in the past, but the latest gcc has reached parity on Intel and AMD x86-64 processors (at least it has on the programs I've tested).

"reached parity" - what does that mean? I dont get it. That certain code executes in the same time on intel hardware x vs amd hardware y? Does not make a whole lot of sense.

But gcc/clang is problary the best metric of comparing the systems objectively in an academic matter.
But really, if Intel can produce a compiler that yields a 10x speedup on their specific hardware .. Hurray for them!
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
"reached parity" - what does that mean? I dont get it. That certain code executes in the same time on intel hardware x vs amd hardware y? Does not make a whole lot of sense.

It means a program compiled by gcc and by icc with, for example, Ivy Bridge optimizations will have about the same performance on an Ivy Bridge processor.

For AMD processors, icc doesn't have specific optimizations, so choose the closest equivalent compiler optimization and compile. That binary will be about as fast as the gcc-compiled binary with specific processor optimizations.
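
To make that concrete, here is a small, hedged illustration (the file name and flags are my own examples, not from the post; exact flag spellings vary by compiler version, and the icc lines use the Linux syntax). The point is the per-target gcc -march choices versus icc's -x (Intel-only, vendor-checked) and -m (generic) options:

Code:
/*
 * Hypothetical build lines for circa-2013 compilers:
 *
 *   gcc -O3 -march=core-avx-i saxpy.c   # tune/vectorize for Ivy Bridge
 *   gcc -O3 -march=bdver2     saxpy.c   # tune/vectorize for Piledriver
 *
 *   icc -O3 -xCORE-AVX-I      saxpy.c   # Intel-only code path
 *   icc -O3 -mavx             saxpy.c   # generic AVX, runs on AMD too
 */
#include <stdio.h>

#define N 1000000

static float x[N], y[N];

/* A trivially vectorizable loop: the kind of code where the instruction
 * set the compiler targets (SSE2 vs AVX) actually changes performance. */
static void saxpy(float a)
{
    int i;
    for (i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    int i;
    for (i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }
    saxpy(3.0f);
    printf("y[0] = %f\n", y[0]);  /* expect 5.0 */
    return 0;
}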
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
"reached parity" - what does that mean? I dont get it. That certain code executes in the same time on intel hardware x vs amd hardware y? Does not make a whole lot of sense.

It means the quality of the code is roughly similar.

By my limited observation (mainly from looking at Phoronix scores...), this seems to be roughly the case when the code doesn't vectorize well. ICC probably still has a big advantage in vectorization, which is also the only place where the processor-based code dispatch is likely to make any big difference.

From what I understand, you can set ICC's compilation targets manually instead of using the runtime dispatch, like you do with GCC, since AFAIK there's no dispatch there. I wouldn't trust ICC's dispatch heuristics either, but if you get the option of not using them, it's no huge loss. It's not that hard to have an installer or similar dispatch between multiple binaries if you want that; see the sketch below.
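
For what that manual, feature-based dispatch can look like in practice, here is a minimal sketch, assuming a reasonably recent gcc (4.8 or newer); the function names are mine. Unlike a vendor-string check, it keys purely on what the CPU reports it can do:

Code:
#include <stdio.h>

/* Two builds of the same routine in one translation unit; the target
 * attribute lets the AVX version use AVX codegen. */
__attribute__((target("avx")))
static void kernel_avx(void)  { puts("running AVX path"); }

static void kernel_sse2(void) { puts("running SSE2 path"); }

int main(void)
{
    /* Dispatch on supported features, not on the vendor ID string. */
    __builtin_cpu_init();
    if (__builtin_cpu_supports("avx"))
        kernel_avx();
    else
        kernel_sse2();
    return 0;
}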

The whole criticism of ICC biasing Intel vs AMD CPU performance comparisons always seemed really overblown to me, because I'm not aware of ICC being that commonly used, in industry or otherwise. It's more the thing you might try if you're really desperate to make the code faster and have run out of easy ideas.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Intel's 45 W TDP HD 4000 is losing to AMD's 35 W TDP Trinity, and you are saying that an Intel 35 W TDP HD 4000 will leave AMD in the dust? Are you joking or what? :rolleyes:

Edit:
Performance and Scaling Overview of Intel HD Graphics 4000


[Image: deusex_07.png]


It's only one game, but you get the picture: Intel doesn't want you to know that the HD 4000 in a Core i3 doesn't have the same performance as in a Core i7. The majority of users believe that by getting an HD 4000 with a Core i3 they're getting the same performance they read about in the Core i7 reviews. :whistle:

I meant CPU-wise. The A10-4600M competes with i3 SV or i5 ULV. It's not even close to i7 quad performance. Not GPU-wise.

Most mobile CPUs (excluding the super-expensive i7s, 3720+ and up, that can't be found in almost any prebuilt) have a GPU clock rate between 1100 and 1250 MHz (non-ULV). That's around a 13% variation. The AnandTech review uses a 3720QM with a 1250 MHz GPU clock. The i3 SV clock rate is 1100 MHz; the i5 is between 1100 and 1200 MHz. There really is not that much variation if you exclude the ultra high end (which almost no consumer uses). ULV is a different story.
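
(A quick check of that variation figure, taking the spread between the highest and lowest GPU clocks in the list below, relative to the low end:)

$$\frac{1250 - 1100}{1100} \approx 0.136 \approx 13.6\%$$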

Popular i7 quad models on the market:
3610QM - 1100 MHz -- very popular, but refreshed to the 3630QM
3612QM - 1100 MHz -- 35 W
3615QM - 1200 MHz
3630QM - 1150 MHz -- probably the most popular
3632QM - 1150 MHz -- 35 W
3635QM - 1250 MHz

You will be hard pressed to find any higher models without custom ordering.

Now let's look at the i5 and i3 SV:

i5-3360M - 1200 MHz
i5-3320M - 1200 MHz
i5-3210M - 1100 MHz
i3-2120M - 1100 MHz
i3-2110M - 1100 MHz

Really not much difference. The 35 W SV parts don't have the throttling problems of ULV. The only major change is 3 MB of cache on the i5/i3 versus 6 MB on the i7. So yes, they are probably getting very similar performance.

In the third place, you ignore that most reviews use benchmarks to tell people which chip they should buy. If a review uses a biased benchmark, then those people are being cheated.

If I'm using Adobe or Photoshop then I don't care about benchmarks of anything other than those. It's hardly biased to provide reviews with relevant tests. In the end it comes down to what you are going to get out of the chip, not what the chip can theoretically do.

My point is that few care about open-source benchmarks because they are not going to be running that software.

I highly doubt that there are many of these 'cheating' benchmarks.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Intel's compiler in Windows is the fastest for AMD chips.

If there was some magic and cheating, AMD should make a better compiler of its own. Yet they didn't, so AMD themselves don't seem to agree with you either. It's not Intel's job to do AMD's own job.

[Image: compiler.png]

I had no idea Portland Group had fallen so far behind in performance-optimized code for AMD microarchitectures. It was my fav compiler for K7's and K8's.
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,143
136
Yet, despite better CPU performance, both SB and IB laptops with IGPs will be outdated long before Llano and Trinity respectively. So much for the great Intel tech, particularly with the HD 4000, which gains fps by using interpolation; that is, only part of the frame is actually rendered at the claimed fps...

So you like equal-TDP comparisons, huh?
At 35 W the A10-4600M gets beaten by an i7 IB ULV (17 W) and slaughtered by a 4C/8T i7 IB (35 W) CPU-wise. I'd rather take this huge CPU advantage than a smaller GPU advantage that still puts AMD's IGPs well below NVIDIA's midrange dGPUs (~GT 650M) any day (price differences aside).
 
Last edited:

galego

Golden Member
Apr 10, 2013
1,091
0
0
But really, if Intel can produce a compiler that yields a 10x speedup on their specific hardware... Hurray for them!

And the point is missed once again... Nobody is worried if Intel can produce a 10x speedup on their own hardware, but about whether they cheat regarding others' hardware:

Unfortunately, software compiled with the Intel compiler or the Intel function libraries has inferior performance on AMD and VIA processors. The reason is that the compiler or library can make multiple versions of a piece of code, each optimized for a certain processor and instruction set, for example SSE2, SSE3, etc. The system includes a function that detects which type of CPU it is running on and chooses the optimal code path for that CPU. This is called a CPU dispatcher. However, the Intel CPU dispatcher does not only check which instruction set is supported by the CPU, it also checks the vendor ID string. If the vendor string says "GenuineIntel" then it uses the optimal code path. If the CPU is not from Intel then, in most cases, it will run the slowest possible version of the code, even if the CPU is fully compatible with a better version.
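
(Editorial aside: to make the "vendor ID string" mechanism concrete, here is a minimal sketch of how a program reads it; x86 only, using gcc/clang's <cpuid.h> helper. It illustrates the check being described, and is not code from Intel's dispatcher.)

Code:
#include <stdio.h>
#include <string.h>
#include <cpuid.h>   /* gcc/clang wrapper for the CPUID instruction */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    char vendor[13];

    /* CPUID leaf 0 returns the 12-byte vendor ID split across
     * EBX, EDX, ECX (in that order). */
    __get_cpuid(0, &eax, &ebx, &ecx, &edx);
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);
    vendor[12] = '\0';

    /* "GenuineIntel" vs "AuthenticAMD": the string a vendor-keyed
     * dispatcher tests, instead of the CPU's actual feature flags. */
    printf("vendor: %s\n", vendor);
    return 0;
}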

And before some fanboy replies to this with some "nobody cares": well, people who are not biased towards Intel, or who are not paid by them, do care:

I have complained about this behaviour for years, and so have many others, but Intel have refused to change their CPU dispatcher. If Intel had advertised their compiler as compatible with Intel processors only, then there would probably be no complaints. The problem is that they are trying to hide what they are doing. Many software developers think that the compiler is compatible with AMD processors, and in fact it is, but unbeknownst to the programmer it puts in a biased CPU dispatcher that chooses an inferior code path whenever it is running on a non-Intel processor. If programmers knew this fact they would probably use another compiler. Who wants to sell a piece of software that doesn't work well on AMD processors?

Antitrust bodies care as well. From the Federal Trade Commission:

Requiring that, with respect to those Intel customers that purchased from Intel a software compiler that had or has the design or effect of impairing the actual or apparent performance of microprocessors not manufactured by Intel ("Defective Compiler"), as described in the Complaint:

  1. Intel provide them, at no additional charge, a substitute compiler that is not a Defective Compiler;
  2. Intel compensate them for the cost of recompiling the software they had compiled on the Defective Compiler and of substituting, and distributing to their own customers, the recompiled software for software compiled on a Defective Compiler; and
  3. Intel give public notice and warning, in a manner likely to be communicated to persons that have purchased software compiled on Defective Compilers purchased from Intel, of the possible need to replace that software.

When one uses a compiler that does not cheat, the FX-8350 is able to compete with the i7-3770K on average, being slower than the Intel chip in some benchmarks but faster in others. Under fair competition, the FX-8350 beats the Intel chip in some benchmarks by a wide margin, up to 70% faster!

The Centurion chip could stretch that margin to as much as 87% over the i7-3770K, which would put the Centurion in the league of the Intel Extreme chips, but at a fraction of the cost of one of them.

Hexus claims to have confirmation on "good authority" that the Centurion chip will be released soon.

Tom's says it would not be surprising if the Centurion shipped with the Never Settle Bundle promotion sometime this summer.

Xbitlabs remarks that the introduction of the new Centurion chip could be similar to that of the TWKR chips, "which were available in quantity of less than 100 units worldwide".
 
Last edited:

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
So you like equal-TDP comparisons, huh?
At 35 W the A10-4600M gets beaten by an i7 IB ULV (17 W) and slaughtered by a 4C/8T i7 IB (35 W) CPU-wise. I'd rather take this huge CPU advantage than a smaller GPU advantage that still puts AMD's IGPs well below NVIDIA's midrange dGPUs (~GT 650M) any day (price differences aside).

I really don't get why comparing comparable TDP-rated products has now become such a big deal when we just spent the past many weeks as a community arguing over the fact that TDP is not computed or specified in the same way by both companies :confused:

So we aren't allowed to hold AMD's feet to the fire in scrutinizing the relevance of their 125W TDP spec for Piledriver in comparison to Intel's 77W TDP spec for Ivy Bridge, but we are expected to scrutinize Anand's reviews on the basis that he didn't coordinate the review of two products with synchronized (but tirelessly argued to not be comparable) TDP spec values...

Am I the only one seeing this pattern and fallacy?
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Look, you can't take one benchmark and say it applies to everything.

You do realize that the 3930K is fully overclockable and costs $570.

If they are making only 100 chips then the chip, in the grand scheme of things, becomes rather irrelevant.
 

planetJ

Junior Member
Apr 19, 2013
1
0
0
I meant CPU-wise. The A10-4600M competes with i3 SV or i5 ULV. It's not even close to i7 quad performance. Not GPU-wise.

Of course not; it's not designed to... The A10-5750M, however, is designed to compete in that segment... Don't compare products disproportionately aligned simply because you can. I like AMD, but I don't compare the FX-8350 to the 3930K... it's not in the same arena. The Centurion, on the other hand, is in that arena... remarkably, it's priced well below the i7-3960X.

Most mobile CPUs (excluding the super-expensive i7s, 3720+ and up, that can't be found in almost any prebuilt) have a GPU clock rate between 1100 and 1250 MHz (non-ULV). That's around a 13% variation. The AnandTech review uses a 3720QM with a 1250 MHz GPU clock. The i3 SV clock rate is 1100 MHz; the i5 is between 1100 and 1200 MHz. There really is not that much variation if you exclude the ultra high end (which almost no consumer uses). ULV is a different story.

Popular i7 quad models on the market:
3610QM - 1100 MHz -- very popular, but refreshed to the 3630QM
3612QM - 1100 MHz -- 35 W
3615QM - 1200 MHz
3630QM - 1150 MHz -- probably the most popular
3632QM - 1150 MHz -- 35 W
3635QM - 1250 MHz

You will be hard pressed to find any higher models without custom ordering.

Now let's look at the i5 and i3 SV:

i5-3360M - 1200 MHz
i5-3320M - 1200 MHz
i5-3210M - 1100 MHz
i3-2120M - 1100 MHz
i3-2110M - 1100 MHz

Really not much difference. The 35 W SV parts don't have the throttling problems of ULV. The only major change is 3 MB of cache on the i5/i3 versus 6 MB on the i7. So yes, they are probably getting very similar performance.

Who told you that the HD 4000, or even the HD 4600, is better than an AMD APU? Whoever that was, they lied to you. Compare frame-rate benchmarks from popular games. You cannot run Crysis 3 at a playable frame rate on Intel HD 4000 with an i7-3770K... on an AMD A10-5800K or A10-5750M you can. A10-5800K integrated graphics can get 30-40 FPS in Crysis 3 at medium settings... show me one Intel chip that will do that with its onboard GPU.



If I'm using Adobe or Photoshop then I don't care about benchmarks of anything other than those. It's hardly biased to provide reviews with relevant tests. In the end it comes down to what you are going to get out of the chip, not what the chip can theoretically do.

My point is that few care about open-source benchmarks because they are not going to be running that software.

I highly doubt that there are many of these 'cheating' benchmarks.

Open source benchmarks are extremely useful...

Want to see some ICC-compiled benchmarks?

Here are a few of the REALLY popular examples:
iTunes
Cinebench R10
Cinebench R11.5

That's just a few... ICC isn't widespread in the Windows world outside of synthetic benchmarks. That's why synthetic benchmarks mean nothing to those in the gaming industry. The attitude among developers is essentially: "Oh, look, another Intel... they didn't give us what we wanted on the chip again... *sigh* Look at what AMD gave us, though!"

Do you think it's a coincidence that the Xbox 720 and PS4 are on AMD hardware? I can tell you it's not... it's because AMD listens when people tell them what they want. Intel says, "Oh, sure, we can do that... by the way, this is what this one has... we couldn't get around to the coding features you wanted for better efficiency... the architecture we developed is 7% better at Cinebench, though!"

Nobody seems to realize that Intel is the synthetic benchmark king, but it's smoke and mirrors... real-world performance is drastically closer.

Also, for whoever said it...

AMD does support its own compiler: Open64 (open source). They issue AMD-optimized updates for it with each new round of architecture. It's just not a high-penetration compiler in the Windows world... it doesn't cripple Intel either, by the way, but it does generate the most optimal code path available to the CPU, period, which honestly levels the playing field. Ever wonder why Ubuntu Linux benchmarks of the two systems show speed increases for both CPUs and tend to be drastically closer? AMD gets a much bigger speed increase, though, if you notice... because Linux is far more streamlined as an OS, and because it's not specifically optimized for one architecture. Also, the Open64 compiler with AMD optimizations is as fast for Intel as ICC, but it's just as fast for AMD; how do you like those apples?
 

CHADBOGA

Platinum Member
Mar 31, 2009
2,135
833
136
I must have some kind of virus that redirected my browser to AMD zone. WOW.
You gotta admit, they are always good for a laugh and do make things at times quite entertaining.

Obviously they have some mental issues, but you can't have everything. :D
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
Here are a few of the REALLY popular examples:
iTunes
Cinebench R10
Cinebench R11.5

That's just a few... ICC isn't widespread in the Windows world outside of synthetic benchmarks.

Those aren't synthetic benchmarks.
 

insertcarehere

Senior member
Jan 17, 2013
712
701
136
You cannot run Crysis 3 at a playable frame rate on Intel HD 4000 with an i7-3770K... on an AMD A10-5800K or A10-5750M you can. A10-5800K integrated graphics can get 30-40 FPS in Crysis 3 at medium settings... show me one Intel chip that will do that with its onboard GPU.

AMD can get 30-40 fps in Crysis 3 at medium settings at 1080p? Proof, please.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Of course not; it's not designed to... The A10-5750M, however, is designed to compete in that segment... Don't compare products disproportionately aligned simply because you can. I like AMD, but I don't compare the FX-8350 to the 3930K... it's not in the same arena. The Centurion, on the other hand, is in that arena... remarkably, it's priced well below the i7-3960X.

Who told you that the HD 4000, or even the HD 4600, is better than an AMD APU? Whoever that was, they lied to you. Compare frame-rate benchmarks from popular games. You cannot run Crysis 3 at a playable frame rate on Intel HD 4000 with an i7-3770K... on an AMD A10-5800K or A10-5750M you can. A10-5800K integrated graphics can get 30-40 FPS in Crysis 3 at medium settings... show me one Intel chip that will do that with its onboard GPU.

Open source benchmarks are extremely useful...

Want to see some ICC-compiled benchmarks?

Here are a few of the REALLY popular examples:
iTunes
Cinebench R10
Cinebench R11.5

That's just a few... ICC isn't widespread in the Windows world outside of synthetic benchmarks. That's why synthetic benchmarks mean nothing to those in the gaming industry. The attitude among developers is essentially: "Oh, look, another Intel... they didn't give us what we wanted on the chip again... *sigh* Look at what AMD gave us, though!"

Do you think it's a coincidence that the Xbox 720 and PS4 are on AMD hardware? I can tell you it's not... it's because AMD listens when people tell them what they want. Intel says, "Oh, sure, we can do that... by the way, this is what this one has... we couldn't get around to the coding features you wanted for better efficiency... the architecture we developed is 7% better at Cinebench, though!"

Nobody seems to realize that Intel is the synthetic benchmark king, but it's smoke and mirrors... real-world performance is drastically closer.

Also, for whoever said it...

AMD does support its own compiler: Open64 (open source). They issue AMD-optimized updates for it with each new round of architecture. It's just not a high-penetration compiler in the Windows world... it doesn't cripple Intel either, by the way, but it does generate the most optimal code path available to the CPU, period, which honestly levels the playing field. Ever wonder why Ubuntu Linux benchmarks of the two systems show speed increases for both CPUs and tend to be drastically closer? AMD gets a much bigger speed increase, though, if you notice... because Linux is far more streamlined as an OS, and because it's not specifically optimized for one architecture. Also, the Open64 compiler with AMD optimizations is as fast for Intel as ICC, but it's just as fast for AMD; how do you like those apples?

The A10 loses to an i5 ULV. It's on par with an i3.

The Centurion at its price is perfectly comparable to the 3930K. That was my comparison. You do realize that the 3930K is about 5% slower than the 3960X and costs about half the price. A 3930K with a 4.4 GHz overclock will pretty much kill the Centurion.

I structured my post wrong (because I added something in and didn't read it over).

It should have been:

I meant CPU-wise, not GPU-wise. The A10-4600M competes with the i3 SV or i5 ULV in CPU performance. It's not even close to i7 quad performance.

The A10-5750M hasn't been released yet.

And no offense, but iTunes is hardly a benchmark. It's a real-world test used by many, many people.

The A10 is not going to get 30-40 fps in Crysis 3 unless it somehow manages to double performance over the A10-4600M (and even AMD only claims a 20% increase for the flagship mobile chip -- and that's AMD, not independent reviewers).

http://www.amd.com/us/press-releases/Pages/amd_unveils_new_apus.aspx

[Image: bf48b21558.jpg]



"Oh, sure we can do that...by the way...this is what this one has...we couldn't get around to the coding features you wanted for better efficiency...the architecture we developed is 7% better at cinebench though!"

Have you seen the relationship between amd mobile apu gaming performance and 3d mark scores? How about hybrid crossfire scores? Its hardly accurate.