
Why does AMD suck?

Status
Not open for further replies.

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
This thread should be deleted.

It should, but as we've seen time and time again, trolling AMD is allowed to thrive and even encouraged on this forum. Just look, there's even a moderator participating in the thread.

It's hard to tell this forum, and this apart.

Even funnier is that moderators hand out bans for spelling Intel as "inetl", yet allow this kind of flamebaiting, or even participate in it.
 
Aug 11, 2008
10,451
642
126
It should, but as we've seen time and time again, trolling AMD is allowed to thrive and even encouraged on this forum. Just look, there's even a moderator participating in the thread.

It's hard to tell this forum, and this apart.

Even funnier is that moderators hand out bans for spelling Intel as "inetl", yet allow this kind of flamebaiting, or even participate in it.

I agree this thread should be deleted. In response to what you said, though: there is a lot of pro-AMD sentiment, including some very iffy recommendations, allowed on these forums, so I definitely don't think there is any anti-AMD bias from the mods.
 

BenchPress

Senior member
Nov 8, 2011
392
0
0
Why is the overall performance of BD, Vishera, and all of AMD's processors so low? Is it because of IPC? The whole modules-vs-cores thing shouldn't matter, because a lot of benchmarks in use tend to favor integer operations over floating-point ones... Anandtech made it a point to test both integer and floating-point performance.
It's a bit more complicated than that. There are integer scalar operations, floating-point scalar operations, integer vector operations, and floating-point vector operations. Only the integer scalar operations get their own execution core in the Bulldozer architecture. Everything else has to share one set of execution units (called "Flex FP" by AMD) per module.

So only one out of four classes of arithmetic instructions, an important one though, potentially benefits from the Bulldozer architecture. Unfortunately, they've cut back the number of execution units per integer scalar core too, and they share a decoder set. The latter issue will be fixed with Steamroller, along with various other substantial improvements. Intel's Haswell will greatly outclass it though, by adding even more integer scalar execution units, and doubling the vector processing throughput.
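The module/core split above can be put into a toy model. This is only a sketch with simplified unit counts (not exact Bulldozer figures), showing why only integer scalar code gets to spread over all eight "cores" while everything else contends for the four shared FlexFP units:

```python
# Rough back-of-envelope model of Bulldozer resource sharing on a
# 4-module / "8-core" FX chip. Unit counts are illustrative
# simplifications, not exact hardware figures.

MODULES = 4
INT_CORES_PER_MODULE = 2   # each module has two integer scalar cores
FLEX_FP_PER_MODULE = 1     # one shared FlexFP unit serves both cores

def effective_units(workload):
    """How many hardware units 8 threads can actually spread across."""
    if workload == "integer-scalar":
        return MODULES * INT_CORES_PER_MODULE   # 8 - the BD sweet spot
    # FP scalar, integer vector, and FP vector all share FlexFP
    return MODULES * FLEX_FP_PER_MODULE         # 4 - threads contend

for w in ("integer-scalar", "fp-vector"):
    print(w, "->", effective_units(w), "units available to 8 threads")
```

Under this simplified model, an all-integer-scalar workload sees eight independent units, while any FP or vector workload effectively sees four.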
Does AMD need better engineers? How can they still excel at GPUs, but suck at CPUs?
They neither particularly excel at GPUs, nor do they really suck at CPUs. The engineers built exactly what they were told to build: more cores for the CPU, and general-purpose computing capabilities for the GPU. Someone in higher management misjudged the adoption of multi-threaded software development, and someone misjudged the ability to make the GPU suitable for general-purpose computing without compromises. Multi-threading is very hard for developers, so they'll use as few threads as possible. AMD's decision to have more cores but make each a little weaker did not result in a greater sum in practice. Haswell, on the other hand, adds TSX technology to greatly facilitate multi-threaded development, and it's optimized to run two threads per core. AMD's idea of using the GPU for non-graphical tasks is making things worse for them too: they're not advancing the CPU's vector processing, which is much easier to tap into for most applications, and the GPU becomes less streamlined for graphics.

The engineers could fix all this. But unfortunately AMD's leadership still wants to stick with derivatives of Bulldozer, and wants to continue crippling graphics performance.
 

dma0991

Platinum Member
Mar 17, 2011
2,723
1
0
[Image: Anchorman-well-that-escalated-quickly.jpg]


I can't foresee a good ending to this thread.
 
Feb 19, 2009
10,457
10
76
[Image: 20120905pcICinsightsChipR&D519.jpg — IC Insights chart of 2011 semiconductor R&D spending by company]


Look at the 3rd column from the right - 2011 R&D ($M)

Notice what Intel's R&D budget is like. $8.4B

Now look way down the list at #9 and see what AMD's R&D budget is like. $1.5B

(and to get some idea as to how much of Intel's R&D budget is going into process node development, just look at TSMC at position 10 on the list)

At least we can see why TSMC sucks, investing so few % of sales in R&D..

But damn, Samsung.. what are they doing??
 

Gigantopithecus

Diamond Member
Dec 14, 2004
7,664
0
71
It should, but as we've seen time and time again, trolling AMD is allowed to thrive and even encouraged on this forum. Just look, there's even a moderator participating in the thread.

It's hard to tell this forum, and this apart.

Even funnier is that moderators hand out bans for spelling Intel as "inetl", yet allow this kind of flamebaiting, or even participate in it.

Finding useful information in CPUs and Overclocking is like panning for nuggets of gold in a sewage runoff stream.
 

alexruiz

Platinum Member
Sep 21, 2001
2,836
556
126
Why is the overall performance of BD, Vishera, and all of AMD's processors so low? Is it because of IPC? The whole modules-vs-cores thing shouldn't matter, because a lot of benchmarks in use tend to favor integer operations over floating-point ones... Anandtech made it a point to test both integer and floating-point performance.

What's the flaw here? How can AMD be even more inefficient with their processors? The IPC of Bulldozer can at times be worse than Phenom II's.

Does AMD need better engineers? How can they still excel at GPUs, but suck at CPUs? Honest question. No bashing, baiting, or flame-throwing. I simply don't understand why, once leading, they are now lagging, especially when they have such talent on the GPU side.

Assuming you are not trolling, which might be a big stretch, here is my answer:

No, it doesn't suck. They just mis-timed the architecture. If you had a clue about product development cycles, you'd know that it takes years from concept to production. Bulldozer was probably frozen as a concept well before Conroe surfaced. If you understand the rationale behind it, it is a beautiful concept: increase integer horsepower, and decrease floating-point horsepower, since those loads can be offloaded to the GPU, especially after the ATI acquisition (the Fusion concept). Furthermore, more cores can finish the workload faster. But it needs the software to be aware of the capabilities.

Unfortunately, while the engineering department delivered, the developer relations team didn't. By the time BD arrived, Fusion wasn't cohesive enough, as OpenCL was just gaining traction and many high-profile applications were still poorly threaded. The software to make BD shine wasn't there. Couple that with a very strong CPU from the competition, and the weak spots of the new concept were exacerbated. But the software is getting there. More and more applications are becoming better threaded or, better yet, using OpenCL.

Your mentality, like that of many of the blind fans from the blue team, is wrong: "It sucks, IPC per core is lower." Yes, it is, but how about the throughput of the whole CPU? Is that lower? Not so clear. If you want to be a luddite and live in the past, that's your choice; keep using your poorly threaded apps because your CPU has very high IPC per core. Some of us will use whatever uses the hardware at its full potential. Let me give you a very simple example:

If you have read the reviews, the blue CPUs are faster at photo editing using the software from the big "A" corporation. You would hence think they are better for photo editing. Assume you just got a DSLR and will be working with RAW files; based on those reviews, you'd think the blue CPUs are faster. But if you analyze the numbers, you realize the "A" software is poorly optimized. Their RAW apps, ACR and Lightroom, are similar. Why use them if you have better alternatives? How about Corel AfterShot Pro? OpenCL support and proper multicore support sound much better. Does it deliver? Oh yes, it does. Which CPU is faster there? A simple Google search will tell you who wins, and better yet, an FX in AfterShot Pro will do the job much faster than an i5 in the "A" apps. Even if you are a blue-CPU user, AfterShot Pro will be faster than the "A" apps simply because it harnesses the potential of the CPU better.

"but, but, all the reviewers use the "A" app for benchmark!" Just because they do doesn't mean you have to do it also. Most of the review websites are followers, not leaders. If one uses an app, the rest follow suit even if the perspective is incomplete. Go back in time, 2001 timeframe to be precise. An influential website used a program called tmpgenc to measure video encoding (AVI to MPEG2) and then, everyone else started using it also. This program happened to be very well optimized for SSE2. The P4 creamed the K7. Everyone claimed the P4 was better for video encoding. However, the real users of MPEG2 knew quite well that the quality of tmpgenc was mediocre, and that even with its SSE2 optimizations, it was much slower than CCE (Cinema craft Encoder) which happened to run faster on a K7. REAL user kept using CCE, while all the wannabes went with tmpgenc and kept spreading the false premise that the P4 was faster for MPEG2 encoding.

Fast forward to today, and the situation has stayed the same: if one influential site uses a program, the sheep follow. If you have a multicore CPU, regardless of brand, why handicap it? Get the software that leverages its power best. I am not going to reward the incompetence of software companies that fail to use my hardware at its fullest, even if they are the "standard". I have stated before that benchmark numbers are like car loan applications: you'd better read the fine print and make sense of the numbers, understanding why they are what they are. If you are OK bragging about your "monster IPC", fine; I will just borrow a quote from a fellow ATer (who happens to use Intel, btw): "Those of us who care about productivity use the right tool..." And it happens that the right tool can mean different software.

ps. Hopefully some reviewers get the hint and start diversifying the test suite. The Corel portfolio really deserves to be included, as they have been pushing proper multithreading and OpenCL support. As a user, even if you have an i5 and the FX happens to run those apps faster, your i5 will still be faster in them than in the other program. Ignore the bragging rights, enjoy the added productivity.

pps. I wonder if we would be having these discussions had AMD's developer relations team done a better job helping optimize software.
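The "IPC per core" vs "whole CPU" argument above can be sketched with a toy throughput model. The per-core numbers below are made up for illustration (not measurements of any real chip); the point is only that which design wins depends on how many threads the software actually uses:

```python
# Toy model: aggregate throughput of "more weak cores" vs "fewer strong
# cores" as a function of how many threads the application runs.
# Per-core performance figures are hypothetical, chosen for illustration.

def chip_throughput(cores, per_core_perf, threads):
    """Aggregate throughput when the app runs `threads` busy threads."""
    return min(threads, cores) * per_core_perf

weak = {"cores": 8, "per_core_perf": 1.0}    # hypothetical FX-like chip
strong = {"cores": 4, "per_core_perf": 1.6}  # hypothetical i5-like chip

for threads in (1, 4, 8):
    w = chip_throughput(weak["cores"], weak["per_core_perf"], threads)
    s = chip_throughput(strong["cores"], strong["per_core_perf"], threads)
    print(f"{threads} threads: more-weak-cores={w:.1f} fewer-strong-cores={s:.1f}")
```

With these assumed numbers the strong-core chip wins at 1 and 4 threads, and the weak-core chip only pulls ahead once all 8 threads are busy, which is exactly the "use software that leverages the hardware" argument.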
 
Feb 19, 2009
10,457
10
76
Nice post Alexruiz, very well said. For productivity, I can see where the FX series is worthwhile.

But most of us here are gamers, and this is undeniable: FX sucks for gaming since most games are lightly threaded.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Nice post Alexruiz, very well said. For productivity, I can see where the FX series is worthwhile.

But most of us here are gamers, and this is undeniable: FX sucks for gaming since most games are lightly threaded.

Interestingly enough, it's most attractive when you want inefficient productivity work done. If you want games, IPC-dependent, or efficient workloads, Intel is the answer. If you want inefficient productivity, AMD comes to mind. :)
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Interestingly enough, it's most attractive when you want inefficient productivity work done. If you want games, IPC-dependent, or efficient workloads, Intel is the answer. If you want inefficient productivity, AMD comes to mind. :)

Well there ya go, that sums it up very well. Lock topic.
 

teh_pwnerer

Member
Oct 24, 2012
151
0
0
AMD 64 sucked? Nope.

AMD X2 sucked? Nope.

AMD Phenom II X4 and X6 sucked? Nope.

All those CPUs destroyed what Intel had to offer at the time.

AMD FTW. :thumbsup:
 

alexruiz

Platinum Member
Sep 21, 2001
2,836
556
126
Interestingly enough, it's most attractive when you want inefficient, productivity work done. If you want games, IPC-dependent, or efficient workloads, Intel is the answer. If you want inefficient productivity, AMD comes to mind. :)

You might have preferences, but don't let them cloud your judgment and make you say false things. You also don't have to justify all the money you blew on your setup. But now that you are on the hook: what do you define as "efficient"? How about you list "inefficient" apps? Inefficient apps are those that cannot leverage the hardware at its fullest. With your 12-thread CPU, you should be one of the users leading the charge to ask for proper hardware usage, not the other way around. But as I said, if you like wasting your hardware's potential, great for you.

Efficient games:
BF3 (Frostbite 2, scales with cores)
MoHWF (same as above)
Metro 2033
AvP

Inefficient games:
SC2 (2 cores, seriously)
Skyrim (an HD 7870 embarrassing the GTX 680, a card twice its price? Plainly wrong, and this is coming from someone who owns two HD 7870s)

Show me "efficient" workloads, I am curious.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
You might have preferences, but don't let them cloud your judgment and make you say false things. You also don't have to justify all the money you blew on your setup. But now that you are on the hook: what do you define as "efficient"? How about you list "inefficient" apps? Inefficient apps are those that cannot leverage the hardware at its fullest. With your 12-thread CPU, you should be one of the users leading the charge to ask for proper hardware usage, not the other way around. But as I said, if you like wasting your hardware's potential, great for you.

Efficient games:
BF3 (Frostbite 2, scales with cores)
MoHWF (same as above)
Metro 2033
AvP

Inefficient games:
SC2 (2 cores, seriously)
Skyrim (an HD 7870 embarrassing the GTX 680, a card twice its price? Plainly wrong, and this is coming from someone who owns two HD 7870s)

Show me "efficient" workloads, I am curious.

BD may complete an MT task 5-10% faster than a 3770K, but it does so at MUCH higher power, and is thus inefficient. Some care, some don't. That's why it will not be a big player in the server market, where power and cooling are huge costs.
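The power argument can be put in terms of joules per task: energy = power × time, so finishing ~10% sooner at much higher power can still cost more energy overall. The TDP and runtime figures below are hypothetical round numbers for illustration, not measurements:

```python
# Energy per task = power x time. A chip that finishes a bit sooner can
# still burn more joules if it draws much more power while doing so.
# Wattage and runtime values are hypothetical, not benchmark data.

def energy_joules(watts, seconds):
    return watts * seconds

fx_time, fx_watts = 90.0, 125.0   # finishes ~10% faster, higher power
i7_time, i7_watts = 100.0, 77.0   # slower finish, lower power

fx_energy = energy_joules(fx_watts, fx_time)   # 125 * 90 = 11250 J
i7_energy = energy_joules(i7_watts, i7_time)   # 77 * 100 = 7700 J
print(f"FX-like: {fx_energy:.0f} J per task, i7-like: {i7_energy:.0f} J per task")
```

Under these assumptions the faster chip uses roughly 46% more energy per task, which is the distinction between "faster" and "efficient" being argued here.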
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
AMD 64 sucked? Nope.

AMD X2 sucked? Nope.

AMD Phenom II X4 and X6 sucked? Nope.

All those CPUs destroyed what Intel had to offer at the time.

AMD FTW. :thumbsup:

Uh no. Where have you been?

The first two were right (they were leaders at the time). Nothing called "Phenom..." ever destroyed the competition. The X6 was good at MT but fell flat in many areas as well.
 

alexruiz

Platinum Member
Sep 21, 2001
2,836
556
126
BD may complete an MT task 5-10% faster than a 3770K, but it does so at MUCH higher power, and is thus inefficient. Some care, some don't. That's why it will not be a big player in the server market, where power and cooling are huge costs.

So you are backtracking? You are already admitting that BD/PD is faster in properly optimized programs that can use all the cores. Isn't that what I defined as "efficient": leveraging the hardware at its fullest?

I didn't ask for "power efficient", meaning lower energy consumption per task. Yes, PD/BD trails there, but you are mixing two concepts.
 

videogames101

Diamond Member
Aug 24, 2005
6,783
27
91
Well, AMD's new 8350 does not suck.

It is, however, only really suited for a small part of the market with heavy multi-threaded tasks, price-per-performance wise.

Intel has i7s which multi-thread better and dominate in gaming, but cost more.

If a person is only encoding, an 8350 at $229 is a great buy.

AMD has lost a lot of money and does not have the R&D budget Intel has to compete at the high end.

This is not good, for a few reasons:

Intel can up their prices.

Intel can get lazy with their fabs and release products slowly.

If AMD cannot compete with Intel, it gives Intel no real reason to make advances, which is a loss to everybody.

Yeah, their release schedule has really... slowed down? :hmm:

oh wait, it hasn't

AMD 64 sucked? Nope.

AMD X2 sucked? Nope.

AMD Phenom II X4 and X6 sucked? Nope.

All those CPUs destroyed what Intel had to offer at the time.

AMD FTW. :thumbsup:

The Athlon 64 did beat Intel's offerings, but the Conroe, Nehalem, and Sandy Bridge lines pretty much crushed all the Phenom/Phenom II releases, IMO.

...and I have no reason to think Haswell vs. Steamroller will be any different. :(
 

teh_pwnerer

Member
Oct 24, 2012
151
0
0
Uh no. Where have you been?

The first two were right (they were leaders at the time). Nothing called "Phenom..." ever destroyed the competition. The X6 was good at MT but fell flat in many areas as well.
The AMD 64 and AMD X2 destroyed anything Intel had during that period. Intel was sucking big time for a while. :D

The Phenom II X4 and X6 may not be as strong in single-threaded applications or games, but for the price they were worth it! The Phenom II 965 beats the C2Q Q9650 at pretty much everything. Only after that did Intel get ahead!
 

Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
Obviously, AMD doesn't have the R&D muscle to push their CPU architecture, and a large part of what they have goes to GPU development in an already capital-constrained company. The fact that AMD is still even involved with CPUs after the ATI acquisition is a bit of a miracle, but to say AMD flat-out sucks isn't fair at all. We can still go back, analyze, and ponder the what-could-have-been scenarios, but the AMD we have now is what matters.

Luckily, with Fusion, AMD has a very competitive piece of hardware versus Intel for certain tasks and in terms of value to the consumer. However, the graphics side of things isn't as important as it could be in everyday computing. Intel's superior x86 tech and decent-enough graphics are good enough for the average consumer. Don't get me wrong, Fusion is too, but it's more likely the average Joe will need the extra CPU muscle than the graphics muscle, so AMD really needs to focus on making the graphics array and GPGPU a more important part of the ecosystem driving the mass-consumption media experience.

I really do hope Fusion made it into one or even both of the next-gen consoles, because not only would it be a major financial win for AMD, we could also see positive repercussions in the PC space and vice versa, thanks to easily cross-platformed software that benefits multiple markets.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
You are already admitting that BD / PD is faster in properly optimized programs that can use all the cores.

Are you a software developer? Because the idea that any workload that isn't perfectly balanced between 8 cores is poorly optimized is at least mildly insulting to anyone with broad experience in software development. It's not that the software world didn't get the memo AMD sent out. It's that a lot of problems don't lend themselves to super high thread-level parallelization.

So the real story isn't that AMD does well in highly optimized programs, but it does well in programs with highly regular embarrassingly parallel workloads that have been sufficiently parallelized. People seem to think the problem is teaching programmers how to use more than one thread - that's not the case anymore, and today decent threading is why 4C/8T is a good value proposition. The problem is breaking everything down into either 8+ well balanced threads or far greater than 8 burst threads to saturate 8 cores.
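This is essentially Amdahl's law: for a parallel fraction p of the work on n cores, speedup is 1 / ((1 − p) + p/n). A quick sketch (the per-core speed ratios are assumed for illustration, not measured) shows why more-but-weaker cores only win when p is very high:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the parallel
# fraction of the work and n the number of cores. The 1.0 and 1.3
# per-core speed factors below are illustrative assumptions.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.9, 0.99):
    eight_weak = 1.0 * amdahl_speedup(p, 8)   # 8 cores, relative speed 1.0
    four_strong = 1.3 * amdahl_speedup(p, 4)  # 4 cores, each 30% faster
    print(f"p={p}: 8 weak cores -> {eight_weak:.2f}, 4 strong cores -> {four_strong:.2f}")
```

With these assumed numbers the 4 faster cores win at p = 0.5 and p = 0.9, and the 8 slower cores only pull ahead near p = 0.99, i.e. for nearly embarrassingly parallel workloads, which is the point being made above.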

AMD has offered more cores/$ than Intel for a long time now. It didn't start with Bulldozer; AMD would sell X4 processors against 2C/4T Intel processors, X4 at lower TDPs on laptops, and X6 processors for much less than Intel's enthusiast processors. It also isn't just AMD doing this. VIA does it with quad core Nano in TDPs where no one else is selling quad cores. Sun did it with Niagara years ago. Quad core Cortex-A9s are doing it now.

Generalize things a bit and you'll see a similar history with products like Itanium, and to some degree Pentium 4 - CPUs that promise very high speeds for software done "the right way", with the naive (and wrong) expectation that software doesn't need to do the things it's bad at doing. Only with AMD I don't actually agree that they misread the future of software development, I think they knew they were going after the market that Intel was specifically less interested in targeting. That, and things scale a lot more easily in servers since they deal with a ton of largely independent tasks - Intel obviously realizes this too, hence why you get the most cores in Xeon sockets.

AMD does add some novelty with CMT, which balances the scaling to give you more cores per area, but that isn't giving them their market in and of itself. The main advantage is that Intel is simply not interested in increasing core counts as aggressively as AMD is. Intel could have released a 6 or even 8 core Ivy Bridge first thing, and for the same prices as their current high end i7s. Of course, their margins would go down, but probably not nearly to the level AMD makes on Vishera.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Phenom II 965 beats C2Q Q9650 on pretty much everything. Only after that did Intel get ahead!

The C2Q Q9650 was released in August 2008 with a 95W TDP rating.

Phenom II 965 wasn't released till a full year later, August 2009, and even then it came with a 140W TDP rating.

So, what was beating the C2Q 9650 for that full year? (regardless of power consumption)
 

teh_pwnerer

Member
Oct 24, 2012
151
0
0
The C2Q Q9650 was released in August 2008 with a 95W TDP rating.

Phenom II 965 wasn't released till a full year later, August 2009, and even then it came with a 140W TDP rating.

So, what was beating the C2Q 9650 for that full year? (regardless of power consumption)
He said Conroe crushed Phenom.

Conroe, Kentsfield, Yorkfield, etc. are all the Core 2 / Core 2 Quad family.
 

inf64

Diamond Member
Mar 11, 2011
3,884
4,692
136
On the topic of sucky AMD chips: one poster at XS noticed that the 8350 is already sold out at Newegg :). I didn't believe it, so I checked, and sure enough, it's already gone :D. For a sucky CPU it just sold around 10K units, which is supposedly (per the guy at XS) the number of units Newegg got.

Or maybe they just got a couple of hundred units and sold them easily...
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
AMD 64 sucked? Nope.

AMD X2 sucked? Nope.

AMD Phenom II X4 and X6 sucked? Nope.

All those CPUs destroyed what Intel had to offer at the time.

AMD FTW. :thumbsup:

At release it sure was a different story, so I'm not sure how you consider this "destroying Intel".

[Benchmark charts: 17974.png, 17984.png, 17985.png, 17967.png]

And it's been that way ever since July 2006.
 

cytg111

Lifer
Mar 17, 2008
26,228
15,637
136
R&D budget.

R&D budget and then some ..
Intel is clearly ahead on the process. While being ahead on the process, I am sure they're holding patents on a number of technologies that get us "down" there, to 22nm and beyond. What that means would be an interesting discussion.
 