[allthingsd.com] AMD getting ready for another round of Layoffs


BenchPress

Senior member
Nov 8, 2011
392
0
0
Rory Read said:
The strategy we laid out at the beginning of the year is sound – building our differentiated IP through ambidextrous architectures, Heterogeneous Systems Architecture (HSA)...
So the "strategy" is to continue to ignore Intel's massive advances in developer-friendly homogeneous high-throughput computing, and instead keep doing something that isn't generating any cash?

AMD needs to build a strong CPU architecture that doesn't need clumsy help from GPGPU, and they need to build a GPU that focuses on graphics only. Anything else requires compromises they simply cannot afford to make. Intel would have been all over GPGPU by now if it had any merits, and even NVIDIA has crippled the compute capabilities of consumer chips in favor of graphics. Rory should take a few lessons from that.

This isn't even about making the right business decisions. This is quantifiable science. GPUs have lots of compute cores running lots of threads, which is fine for graphics, but each individual thread executes too slowly for anything less parallel (which is practically everything else). This is why executives need to understand technology.
 
Last edited:

Abwx

Lifer
Apr 2, 2011
11,855
4,831
136
Intel would have been all over GPGPU by now if it had any merits,

Merits according to Intel's requirements, that is, a proprietary thing over which they would have total control.

Anything else, even if it improves the user experience, will be discarded, if not actively fought.
 

NTMBK

Lifer
Nov 14, 2011
10,423
5,728
136
Intel would have been all over GPGPU by now if it had any merits

Seriously? Seriously?

I write embarrassingly parallel code for a living. On the one hand we have our CPU code running on dual-socket, 8-core Sandy Bridge Xeons, and on the other we have our GPU code running on a single GTX 480. Guess which one performs better, by a very, very large margin. It's not the Xeons.

And guess which one of those has a codebase full of hand-coded intrinsics and segments of inline assembly, completely wrecking readability. It's not the 480.
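
To give a flavor of what I mean (a toy sketch I just made up, not our actual code): the same trivial loop, first in plain C++, then hand-vectorized with SSE intrinsics. Now picture the second style across thousands of lines.

Code:
#include <xmmintrin.h>  // SSE intrinsics

// Plain scalar version: the intent is obvious at a glance.
void scale_add(float* out, const float* a, const float* b, float k, int n) {
    for (int i = 0; i < n; ++i)
        out[i] = k * a[i] + b[i];
}

// Hand-vectorized version: several times the code, intent buried in mnemonics.
void scale_add_sse(float* out, const float* a, const float* b, float k, int n) {
    __m128 vk = _mm_set1_ps(k);           // broadcast k into all 4 lanes
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);  // load 4 floats from a
        __m128 vb = _mm_loadu_ps(b + i);  // load 4 floats from b
        _mm_storeu_ps(out + i, _mm_add_ps(_mm_mul_ps(vk, va), vb));
    }
    for (; i < n; ++i)                    // scalar tail for the leftovers
        out[i] = k * a[i] + b[i];
}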
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
Intel would have been all over GPGPU by now if it had any merits, and even NVIDIA has crippled the compute capabilities of consumer chips in favor of graphics

What was Larrabee? What's MIC, then? Not a GPU but a co-processor? Correct me if I'm wrong, but isn't that the same damn thing?

As for GPGPU not being worthy, I'd suggest you fire up Sony Vegas, Maya or Blender and then tell me what you think.

As for AMD, if they're able to streamline their product lines they could potentially see their way through this. Currently they've got their server/desktop CPUs, their laptop/HTPC APUs, and their sub-17W Bobcat line. That's three entirely different CPU designs, and then you get to their GPUs. Something has to go.
 
Last edited:

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Intel would have been all over GPGPU by now if it had any merits

Seriously? Seriously?

I write embarrassingly parallel code for a living. On the one hand we have our CPU code running on dual-socket, 8-core Sandy Bridge Xeons, and on the other we have our GPU code running on a single GTX 480. Guess which one performs better, by a very, very large margin. It's not the Xeons.

And guess which one of those has a codebase full of hand-coded intrinsics and segments of inline assembly, completely wrecking readability. It's not the 480.

The take-home message is to look at the gross margins on those products.

Sure, Intel may recognize that better performance would come from GPGPU, but can they sell GPGPU products that command 60% gross margins like their SB Xeons?

IMO BenchPress is making the error of conflating market viability and relevance to the end user with gross-margin viability and sustainability for the seller (Intel in this case).

No manufacturer willingly dives into a commoditized market segment with sub-60% gross margins. Intel will avoid going there until they no longer have the option of safely ignoring it (just as they did with ARM and mobile handsets until recently).

However, the price/performance equation is without question in favor of the consumer when it comes to GPGPU and embarrassingly parallel applications. And so long as there is competition, there will be some business out there looking to maximize the customer's opportunity to explore that portion of the performance envelope.

[Image: grainvsIPC.png]

[Image: CUDAJIT.jpg]


Just don't expect Intel to lead the pack unless Intel's executive management can convince Intel's BoD that there is gold in them thar hills and that it can be mined to the tune of 60% gross margins in a sustainable fashion.
 
Last edited:

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
Sure, Intel may recognize that better performance would come from GPGPU, but can they sell GPGPU products that command 60% gross margins like their SB Xeons?

Isn't that exactly what MIC is? Essentially a way to sell the same product, wrap it up in x86 and command the same profit margin.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Isn't that exactly what MIC is? Essentially a way to sell the same product, wrap it up in x86 and command the same profit margin.

MIC is a backup plan in case Nvidia actually makes more than superficial inroads into the HPC market.

If Nvidia starts to accumulate the kind of market share that Intel sees as jeopardizing its ability to command 60% gross margins with traditional fat-core Xeons, then you will see Xeon Phi become more aggressively positioned.

What makes all this work for Intel is their access to process node technology that Nvidia lacks. That will keep Intel one step ahead if and when it needs to shift gears and make Xeon Phi their dominant product in HPC, versus keeping it alive but in the background as plan B.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
MIC is a backup plan in case Nvidia actually makes more than superficial inroads into the HPC market.

If Nvidia starts to accumulate the kind of market share that Intel sees as jeopardizing its ability to command 60% gross margins with traditional fat-core Xeons, then you will see Xeon Phi become more aggressively positioned.

What makes all this work for Intel is their access to process node technology that Nvidia lacks. That will keep Intel one step ahead if and when it needs to shift gears and make Xeon Phi their dominant product in HPC, versus keeping it alive but in the background as plan B.

I agree with the above analysis.
 

BenchPress

Senior member
Nov 8, 2011
392
0
0
Seriously? Seriously?

I write embarrassingly parallel code for a living. On the one hand we have our CPU code running on dual-socket, 8-core Sandy Bridge Xeons, and on the other we have our GPU code running on a single GTX 480. Guess which one performs better, by a very, very large margin. It's not the Xeons.
Yes, seriously. Note that I was talking about the consumer market. GPGPU makes plenty of sense in the HPC market, where general-purpose workloads are just as embarrassingly parallel as graphics is in a game. But after all these years I have yet to see a single serious non-graphics GPGPU application for consumers. Any need for higher throughput in consumer applications has been covered by multi-core and wider vector units within the CPU, not the GPU. In particular, AVX2 will kill any remaining incentive an application developer might have to try the GPU, by adding a lot of the same technology to the CPU. So the ROI for using AVX2 is just far greater.

The problem is that AMD tries to sell HSA to everyone and his dog. But they end up crippling the CPU, crippling the GPU, and they don't have a decent HPC product.
And guess which one of those has a codebase full of hand-coded intrinsics and segments of inline assembly, completely wrecking readability. It's not the 480.
The reason is that every SIMD extension before AVX2 sorely lacked parallel versions of some major scalar instructions. With AVX2's gather support and vector-vector shifts, the compiler can much more easily auto-vectorize loops. Any code targeted at the GPU will run very efficiently on a CPU with AVX2 support, without requiring inline assembly.
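
As a minimal sketch of the kind of loop I mean (a toy example of my own): before AVX2, the indexed load and the per-element shift below had no SIMD counterparts, so compilers had to leave loops like this scalar. With gather (vpgatherdd) and variable shifts (vpsllvd) available, an AVX2-enabled compiler (e.g. gcc -O3 -mavx2) should be able to vectorize it straight from plain C.

Code:
// Independent iterations, but with an indexed load (needs a gather) and a
// per-element shift count (needs a vector-vector shift). AVX2 adds both.
void scale_lookup(int* out, const int* table, const int* idx, const int* sh, int n) {
    for (int i = 0; i < n; ++i)
        out[i] = table[idx[i]] << sh[i];
}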
 

BenchPress

Senior member
Nov 8, 2011
392
0
0
What was Larrabee? What's MIC, then? Not a GPU but a co-processor? Correct me if I'm wrong, but isn't that the same damn thing?
That's for the HPC market, not the consumer market. Larrabee failed as a GPU for the consumer market, and was revived successfully as an HPC product.
As for GPGPU not being worthy, I'd suggest you fire up Sony Vegas, Maya or Blender and then tell me what you think.
There's nothing GPGPU about those. They're graphics, graphics, and more graphics. So no need for a GPU that is optimized for General Purpose workloads.

In other words, these applications haven't made AMD's products any more attractive. Hence the effort they're putting into HSA isn't generating any revenue. It's a waste. Any other need for higher computing power for slightly less parallel workloads can be delivered by AVX2 and multi-core (supported by TSX technology).
As for AMD, if they're able to streamline their product lines they could potentially see their way through this. Currently they've got their server/desktop CPUs, their laptop/HTPC APUs, and their sub-17W Bobcat line. That's three entirely different CPU designs, and then you get to their GPUs. Something has to go.
If you let a fat guy lose weight by starving him, he's not going to be fit for a marathon once he's at the right weight.

What I'm trying to say is that "streamlining" by itself doesn't create good products. It might make a company healthier, but less relevant as a whole. They won't "see their way through this" as a respectable CPU/GPU company unless they start making the right design choices. GPGPU is a dead end for the consumer market.
 

Abwx

Lifer
Apr 2, 2011
11,855
4,831
136
The problem is that AMD tries to sell HSA to everyone and his dog.

Still, that's an open standard...

With AVX2's gather support and vector-vector shifts, the compiler can much more easily auto-vectorize loops. Any code targeted at the GPU will run very efficiently on a CPU with AVX2 support, without requiring inline assembly.

...while you're trying to convince everybody and their dog to buy Intel's proprietary standard using their in-house optimized compiler... :rolleyes:
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
MIC is a backup plan in case Nvidia actually makes more than superficial inroads into the HPC market.

If Nvidia starts to accumulate the kind of market share that Intel sees as jeopardizing its ability to command 60% gross margins with traditional fat-core Xeons, then you will see Xeon Phi become more aggressively positioned.

What makes all this work for Intel is their access to process node technology that Nvidia lacks. That will keep Intel one step ahead if and when it needs to shift gears and make Xeon Phi their dominant product in HPC, versus keeping it alive but in the background as plan B.

Co-processing is the only way to get better and/or cheaper supercomputers.

And Intel's foundry will not help them. It's the design of the processor that determines power consumption.
 

BenchPress

Senior member
Nov 8, 2011
392
0
0
Still, that's an open standard...
That's irrelevant when the ones calling it a standard are the ones defining and implementing it. HSA is a new specification created only to satisfy AMD's needs. Meanwhile, hsafoundation.com/standards is still barren.

AVX2 on the other hand can execute auto-vectorized code from any programming language. That's lots and lots of existing open standards, defined by independent committees and implemented by developers, for developers.
...while you're trying to convince everybody and their dog to buy Intel's proprietary standard using their in-house optimized compiler... :rolleyes:
I'm not trying to convince anyone. It will sell itself. Every major compiler has already added AVX2 support. Auto-vectorization is being worked on as we speak. And many frameworks with AVX2 support will be available free of charge on the day of Haswell's launch.

My only hope is that AMD gets a wake-up call and ditches HSA in favor of homogeneous throughput computing technology. I don't particularly care if that's AVX2 or something superior. AMD obtained a portion of the VEX encoding space for XOP, and they could add many other interesting instructions to bring GPU technology into the CPU.
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
There's nothing GPGPU about those. They're graphics, graphics, and more graphics. So no need for a GPU that is optimized for General Purpose workloads.

In other words, these applications haven't made AMD's products any more attractive. Hence the effort they're putting into HSA isn't generating any revenue. It's a waste. Any other need for higher computing power for slightly less parallel workloads can be delivered by AVX2 and multi-core (supported by TSX technology).

I suppose you've taken dancing classes? You're quite good at dodging the point.

Compare GPGPU's benefits in those workstation programs with the potential benefits of AVX2. Will AVX2 replace them in the near future? No... not a chance.

Secondly, I actually give HSA and GPGPU more chance of surviving than I do Intel and AMD both. People aren't buying Intel products in droves and sales figures are actually slipping. AVX2 has benefits for certain crowds, but the whole of the market doesn't need more CPU throughput. There's a reason why PC sales are slipping, and it's not that PCs aren't fast enough. Do you really think the same people buying tablets care about AVX2? A proprietary ISA attached to an overpriced processor tied to high power consumption and large form factors? Get real.

As far as streamlining their product line goes, I think that's the only way AMD stands a chance of survival. Frankly, I don't think they're going to survive 2013 if the current rumors are true -- a delayed Kaveri and 30% of their engineers on the chopping block.
 
Last edited:

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Intel's proprietary standard using their in-house optimized compiler... :rolleyes:

You really need to get off this Intel compiler thing. That's a really old, old argument that doesn't hold water anymore.

Did you know that Intel's current compiler generates code better optimized for AMD processors than both GCC and Microsoft?
 

Abwx

Lifer
Apr 2, 2011
11,855
4,831
136
You really need to get off this Intel compiler thing. That's a really old, old argument that doesn't hold water anymore.

Specialists do not agree with you...

It's not getting better. The latest version of Intel's SVML (small vector math library) has some functions that can only be called from processors with AVX because the input parameter is an AVX vector (YMM register). There is no logical reason why these functions should have a CPU dispatcher, yet they have two different code paths for the same instruction set: An optimized version for Intel processors with AVX and an inferior version for other brands of CPU with AVX.
http://www.agner.org/optimize/blog/read.php?i=49#214

Did you know that Intel's current compiler generates code better optimized for AMD processors than both GCC and Microsoft?

That doesn't remove the fact that they did implement CPU throttling for anything that is not Intel...
 
Last edited:

BenchPress

Senior member
Nov 8, 2011
392
0
0
I suppose you've taken dancing classes? You're quite good at dodging the point.
Leave my dancing classes out of this. I don't know what you think "the point" is, but to me it's AMD's declining profits and how HSA isn't helping them.
Compare GPGPU's benefits in those workstation programs with the potential benefits of AVX2. Will AVX2 replace them in the near future? No... not a chance.
Again, those non-GPGPU workstation applications don't affect AMD's bottom line in any noticeable way.

Meanwhile AVX2 will more than double the throughput computing power for the majority of the market. Furthermore, it will take minimal developer effort to tap into that, unlike GPGPU. And four cores with AVX2 are actually more powerful than a GT2, so don't overestimate the benefits of GPGPU by looking at huge GPUs. AVX2 will undoubtedly have a far greater effect on consumer choice than GPGPU. When Haswell launches, we'll probably see lots of multimedia benchmarks where Intel is over two times faster than AMD. Developers won't adopt HSA just to do AMD a favor. It's a lot of effort with practically no return.

Note also that GPUs unpack everything to 32-bit types, while AVX2 has many instructions that can operate on packed 16-bit and 8-bit types. This is particularly interesting for photo and video editing. Hence, even if the CPU and GPU have the same GFLOPS rating, which typically refers to 32-bit floating-point numbers, the CPU can outperform the GPU.
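
To make the packed-types point concrete (a minimal sketch of my own, not from any real application): brightening 8-bit pixels with AVX2 touches 32 pixels per instruction, with saturation, and never unpacks to 32-bit.

Code:
#include <immintrin.h>  // AVX2 intrinsics
#include <stdint.h>

// Saturating brighten of 8-bit pixels, 32 at a time (vpaddusb).
// Assumes n is a multiple of 32 to keep the sketch short.
void brighten(uint8_t* px, int n, uint8_t amount) {
    __m256i v = _mm256_set1_epi8((char)amount);  // broadcast the offset
    for (int i = 0; i < n; i += 32) {
        __m256i p = _mm256_loadu_si256((const __m256i*)(px + i));
        p = _mm256_adds_epu8(p, v);              // unsigned saturating add
        _mm256_storeu_si256((__m256i*)(px + i), p);
    }
}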

So I'm afraid you're vastly overrating the benefit of GPGPU. It's not magically faster than what CPUs can do. And AVX has future potential to be extended up to 1024-bit!
Secondly, I actually give HSA and GPGPU more chance of surviving than I do Intel and AMD both. People aren't buying Intel products in droves and sales figures are actually slipping. AVX2 has benefits for certain crowds, but the whole of the market doesn't need more CPU throughput.
That's ridiculous. GPGPU is about trying to increase throughput, but when the CPU's throughput is increased, that's suddenly not needed? It seems like you're applying a double standard.
There's a reason why PC sales are slipping, and it's not that PCs aren't fast enough.
There's an economic crisis, people are waiting for Windows 8, and games aren't pushing the limits because the new consoles haven't arrived yet. Each of these will change over time. Don't mistake it for a general decline.
Do you really think the same people buying tablets care about AVX2? A proprietary ISA attached to an overpriced processor tied to high power consumption and large form factors? Get real.
Again, it doesn't specifically have to be AVX2. All I'm saying is that the heterogeneous computing which AMD is pursuing is a dead end for the consumer market, and they should instead be looking into homogeneous high-throughput technology. And yes, something akin to AVX2 can be very desirable for tablets too. Having a wide SIMD instruction set with gather support makes it possible to vectorize any loop with independent iterations, and such loops are a bottleneck in lots of software. Vectorization lowers power consumption, so it's something that should interest the mobile market. There will be 10 Watt Haswell parts with full AVX2 support, and the next generation will no doubt be even more power efficient and suitable for an even wider market.
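
For the curious, the gather primitive itself looks like this in intrinsics form (a minimal sketch; the scale argument is the element size in bytes):

Code:
#include <immintrin.h>

// Load eight floats from eight arbitrary indices in one instruction
// (vgatherdps) -- the operation that makes indexed loops vectorizable.
__m256 gather8(const float* base, const int* idx) {
    __m256i vi = _mm256_loadu_si256((const __m256i*)idx);  // 8 x int32 indices
    return _mm256_i32gather_ps(base, vi, 4);               // scale = sizeof(float)
}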
As far as streamlining their product line goes, I think that's the only way AMD stands a chance of survival. Frankly, I don't think they're going to survive 2013 if the current rumors are true -- a delayed Kaveri and 30% of their engineers on the chopping block.
They would have a chance of survival if they stopped wasting money on HSA. Unfortunately Rory's memo seems to make that unlikely.
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
Again, it doesn't specifically have to be AVX2. All I'm saying is that the heterogeneous computing which AMD is pursuing is a dead end for the consumer market, and they should instead be looking into homogeneous high-throughput technology. And yes, something akin to AVX2 can be very desirable for tablets too. Having a wide SIMD instruction set with gather support makes it possible to vectorize any loop with independent iterations, and such loops are a bottleneck in lots of software. Vectorization lowers power consumption, so it's something that should interest the mobile market. There will be 10 Watt Haswell parts with full AVX2 support, and the next generation will no doubt be even more power efficient and suitable for an even wider market.

GPGPU makes less sense in typical desktop workloads, outside a few select applications like video decoding and certain GPU-accelerated gaming tasks. On the desktop, people haven't been upgrading their hardware every generation, and it's not because they lack throughput. AVX2 isn't going to change that.

HSA and GPGPU make much more sense in an ecosystem that's thriving, outselling x86, and could use more compute power to make headway into other markets. There's a reason why ARM, Samsung, Qualcomm, TI and even Apple are adopting OpenCL. None of them will ever have access to x86 and AVX2 unless they're buying Intel chips, and that's looking less likely every year.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Specialists do not agree with you...

Awesome, according to a blogger, Intel is supposed to degrade their performance because AMD doesn't have a register that's physically required for the Intel code path.

I guess that would mean that AMD's AVX implementation isn't fully compatible with Intel's. And that's somehow Intel's fault.

Wasn't there a court order or agreement of some sort that Intel wouldn't degrade AMD's performance with their compiler? Why isn't AMD making an issue about this? Most likely because there isn't an issue.

As always, AMD's problems are never of their own creation (not that this is one)...It's always Intel's fault.
 
Last edited:

BenchPress

Senior member
Nov 8, 2011
392
0
0
That doesn't remove the fact that they did implement CPU throttling for anything that is not Intel...
Boo-freaking-hoo. We're talking about a few percent here. What will be AMD's excuse when AVX2 routines are twice as fast or more and they simply don't have hardware that supports it?

The fact that AVX2 is proprietary is irrelevant. They have a cross-licensing agreement, and antitrust rules also enable AMD to achieve ISA compatibility with Intel. Other ISAs are also free to widen their SIMD vectors and add GPU-like technology such as gather and FMA.
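
To make "gather and FMA" concrete (a trivial sketch): FMA computes a*b + c per lane in one instruction with a single rounding step, which is exactly the multiply-accumulate pattern GPUs are built around.

Code:
#include <immintrin.h>

// Fused multiply-add on 8 floats at once (the vfmadd family): a*b + c.
__m256 fma8(__m256 a, __m256 b, __m256 c) {
    return _mm256_fmadd_ps(a, b, c);
}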

So there's no reason to go with heterogeneous computing over homogeneous computing just because the former claims to be more open. For all practical purposes, there's no fundamental difference in open-ness or standard-ness.
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
thread derailed by benchpress and his AVX2 hype

*sigh*

How many times is this, anyway? I'm surprised he's even still here...

The fact that AVX2 is proprietary is irrelevant. They have a cross-licensing agreement, and antitrust rules also enable AMD to achieve ISA compatibility with Intel. Other ISAs are also free to widen their SIMD vectors and add GPU-like technology such as gather and FMA.

With AMD, yes; with everyone else? No. AMD is irrelevant, if you haven't been keeping up with this thread. As soon as AMD disappears, and I'm betting that'll be 2013 or 2014, AVX2 is dead in the water.

So there's no reason to go with heterogeneous computing over homogeneous computing just because the former claims to be more open. For all practical purposes, there's no fundamental difference in open-ness or standard-ness.

Yes there is. The reasons are Qualcomm, Samsung, Apple, TI, Amazon, etc...

Have those guys been hurting because they lack x86 and x87? I don't think so.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Wasn't there a court order or agreement of some sort that Intel wouldn't degrade AMD's performance with their compiler? Why isn't AMD making an issue about this? Most likely because there isn't an issue.


That's what I was thinking too. You know AMD would be right back to hitting Intel up for another cool billion if there was even a hint of shenanigans going on.

As always, AMD's problems are never of their own creation, are they?...It's always Intel's fault.

Ironically, AMD has no problem owning up to their shortcomings. I guess more memos are in order to get everyone on the same page :D
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Ironically, AMD has no problem owning up to their shortcomings. I guess more memos are in order to get everyone on the same page :D

Yes, but they should forget about being ambidextrous, that's so 20th century. The next memo should outline their direction for being multidextrous.

(Wow, I didn't even think multidextrous was a word, but Chrome didn't flag it as misspelled.)