WhoBeDaPlaya
Diamond Member
Look on the bright side. At least there wasn't anything that came out very soon after for 1/3 the price and absolutely pwn3d it (Celly 300A @ 450, whereas that P2-400 could barely get to 420 >:| )

QX6700, $1500 😱
So the "strategy" is to continue to ignore Intel's massive advances in developer-friendly homogeneous high-throughput computing, and instead keep doing something that isn't generating any cash?

Rory Read said: The strategy we laid out at the beginning of the year is sound: building our differentiated IP through ambidextrous architectures, Heterogeneous Systems Architecture (HSA)...
Intel would have been all over GPGPU by now if it had any merits, and even NVIDIA has crippled the compute capabilities of consumer chips in favor of graphics
Seriously? Seriously?
I write embarrassingly parallel code for a living. On the one hand we have our CPU code running on dual-socket, 8-core Sandy Bridge Xeons, and on the other we have our GPU code running on a single GTX 480. Guess which one performs best, by a very, very large margin. It's not the Xeons.
And guess which one of those has a codebase full of hand-coded intrinsics and segments of inline assembly, completely wrecking readability. It's not the 480.
Sure, Intel may recognize that better performance would come from GPGPU, but can they sell GPGPU products that command 60% gross margins like their SB Xeons do?
Isn't that exactly what MIC is? Essentially a way to sell the same product, wrapped up in x86, commanding the same profit margin.
MIC is a backup plan in case Nvidia actually makes more than superficial inroads into the HPC market.
If Nvidia starts to accumulate the kind of market share that Intel sees as jeopardizing its ability to command 60% gross margins with traditional fat-core Xeons, then you will see Xeon Phi positioned more aggressively.
What makes all this work for Intel is their access to process node technology that Nvidia does not have access to. That will keep Intel one step ahead if and when it needs to shift gears and make Xeon Phi its dominant HPC product, versus keeping it alive in the background as plan B.
Yes, seriously. Note that I was talking about the consumer market. GPGPU makes plenty of sense in the HPC market, where general-purpose workloads are just as embarrassingly parallel as graphics in a game. But after all these years I have yet to see a single serious non-graphics GPGPU application for consumers. Any need for higher throughput in consumer applications has been covered by multi-core and wider vector units within the CPU, not the GPU. In particular, AVX2 will kill any remaining incentive an application developer might have to try to use the GPU, by adding a lot of the same technology to the CPU. So the ROI for using AVX2 is just far greater.
The reason is that every SIMD extension before AVX2 sorely lacked parallel versions of some major scalar instructions. With AVX2's gather support and vector-vector shifts, the compiler can much more easily auto-vectorize loops. Any code targeted at the GPU will run very efficiently on a CPU with AVX2 support, without requiring inline assembly.
That's for the HPC market, not the consumer market. Larrabee failed as a GPU for the consumer market, and was successfully revived as an HPC product.

What was Larrabee? What's MIC, then? Not a GPU but a co-processor? Correct me if I'm wrong, but isn't that the same damn thing?
There's nothing GPGPU about those. They're graphics, graphics, and more graphics. So there's no need for a GPU that is optimized for general-purpose workloads.

As for GPGPU not being worthy, I'd suggest you fire up Sony Vegas, Maya or Blender and then tell me what you think.
If you let a fat guy lose weight by starving him, he's not going to be fit for a marathon once he's at the right weight.

As for AMD, if they're able to streamline their product lines they could potentially see their way through this. Currently they've got their server/desktop CPUs, their laptop/HTPC APUs, and their Bobcat sub-17W line. That's three entirely different CPUs, and then you get to their GPUs. Something has to go.
The problem is that AMD tries to sell HSA to everyone and his dog.
That's irrelevant when the ones calling it a standard are the ones defining and implementing it. HSA is a new specification only to satisfy AMD's needs. Meanwhile hsafoundation.com/standards is still barren.

Still, that's an open standard.....
I'm not trying to convince anyone. It will sell itself. Every major compiler has already added AVX2 support. Auto-vectorization is being worked on as we speak. And many frameworks with AVX2 support will be available free of charge on the day of Haswell's launch.

..........while you're trying to convince everybody and their dogs to buy Intel's proprietary standard using their in-house optimized compiler.....🙄
In other words, these applications haven't made AMD's products any more attractive. Hence the effort they're putting into HSA isn't generating any revenue. It's a waste. Any other need for higher computing power for slightly less parallel workloads can be delivered by AVX2 and multi-core (supported by TSX technology).
Intel's proprietary standard using their in-house optimized compiler.....🙄
You really need to get off this Intel compiler thing. That's a really old, old argument that doesn't hold water anymore.
http://www.agner.org/optimize/blog/read.php?i=49#214

It's not getting better. The latest version of Intel's SVML (small vector math library) has some functions that can only be called from processors with AVX, because the input parameter is an AVX vector (YMM register). There is no logical reason why these functions should have a CPU dispatcher, yet they have two different code paths for the same instruction set: an optimized version for Intel processors with AVX and an inferior version for other brands of CPU with AVX.
Did you know that Intel's current compiler generates code better optimized for AMD processors than both GCC and Microsoft's compiler do?
Leave my dancing classes out of this. I don't know what you think "the point" is, but to me it's AMD's declining profits and how HSA isn't helping them.

I suppose you've taken dancing classes? You're quite good at dodging the point.
Again, those non-GPGPU workstation applications don't affect AMD's bottom line in any noticeable way.

Compare GPGPU's benefits in those workstation programs with the potential benefits of AVX2. Will AVX2 replace them in the near future? No... not a chance.
That's ridiculous. GPGPU is about trying to increase throughput, but when the CPU's throughput is increased that's suddenly not needed? It seems like you're using double standards.

Secondly, I actually give HSA and GPGPU more chance of surviving than I do Intel and AMD both. People aren't buying Intel products in droves and sales figures are actually slipping. AVX2 has benefits for certain crowds, but the whole of the market doesn't need more CPU throughput.
There's an economic crisis, people are waiting for Windows 8, and games aren't pushing the limits because the new consoles haven't arrived yet. Each of these will change over time. Don't mistake it for a general decline.

There's a reason why PC sales are slipping and it's not to do with PCs not being fast enough.
Again, it doesn't specifically have to be AVX2. All I'm saying is that the heterogeneous computing which AMD is pursuing is a dead end for the consumer market, and they should instead be looking into homogeneous high-throughput technology. And yes, something akin to AVX2 can be very desirable for tablets too. Having a wide SIMD instruction set with gather support makes it possible to vectorize any loop with independent iterations, and such loops are a bottleneck in lots of software. Vectorization lowers power consumption, so it's something that should interest the mobile market. There will be 10 Watt Haswell parts with full AVX2 support, and the next generation will no doubt be even more power efficient and suitable for an even wider market.

Do you really think the same people buying tablets care about AVX2? A proprietary ISA attached to an overpriced processor tied to high power consumption and large form factors? Get real.
They would have a chance of survival if they stopped wasting money on HSA. Unfortunately Rory's memo seems to make that unlikely.

As far as streamlining their product line, I think that's the only way AMD stands a chance of survival. Frankly, I think they're not going to survive 2013 if the current rumors are true -- delayed Kaveri and 30% of their engineers on the chopping block.
Specialists do not agree with you...
Boo-freaking-hoo. We're talking about a few percent here. What will be AMD's excuse when AVX2 routines are twice as fast or more and they simply don't have hardware that supports it?

That doesn't remove the fact that they did implement CPU throttling for anything that is not Intel...
Thread derailed by benchpress and his AVX2 hype.
*sigh*
The fact that AVX2 is proprietary is irrelevant. They have a cross-licensing agreement, and antitrust rules also enable AMD to achieve ISA compatibility with Intel. Other ISAs are also free to widen their SIMD vectors and add GPU-like technology such as gather and FMA.
So there's no reason to prefer heterogeneous computing over homogeneous computing just because the former claims to be more open. For all practical purposes, there's no fundamental difference in openness or standardization.
Wasn't there a court order or agreement of some sort that Intel wouldn't degrade AMD's performance with their compiler? Why isn't AMD making an issue about this? Most likely because there isn't an issue.
As always, AMD's problems are never of their own creation, are they? ... It's always Intel's fault.
Ironically, AMD has no problems owning up to their shortcomings. I guess more memos are in order to get everyone on the same page 😀