
[Fudzilla] Apple 2016 laptops will have AMD GPUs

How many people out there would really want to go through a warranty-voiding process, even if it was relatively safe? Maybe 10% at the very most? What it would really do is generate a whole bunch of hype: people would see a tangible wave of excitement and a "rush to the shelves" type of situation if they heard there was some magic unlocking ability.

It's anybody's guess.

If the 480 is a cut down part and a 'full' part exists, what are the yields like for both? Will the expected demand for $199ish cards be high enough to warrant artificially disabled full parts? I hope so, for selfish reasons. 😀 Being able to unlock the 6950 was the sole reason I bought one instead of another card several years ago.

I can see the debates now: Buckets of overclocking vs parts unlocking!
 

The "Apple Metal in one sentence: one queue for both OpenCL and OpenGL" line says that Metal is "like" an OpenGL+OpenCL combination, but Metal does not actually use either of them.

At the time OpenCL had no C++ headers; that's why. It even says so on your page:

It is compared with OpenCL 1.x + OpenGL 4.x, which it certainly can compete with, as that combination doesn't have C++11 kernels or a single queue.

btw: Metal is NOT a low-level API like Vulkan/DX12. It just has lower overhead, as it does not have to support 10,000-year-old hardware that nobody uses anymore ... D:
 
Mac Pros are tremendously low volume.



Exactly. The cheapest laptop is $2,500 and the cheapest iMac (27") is $1,800 (significantly more in other countries).
Right. For small Macs (11.6"–13.3") they would probably remain IGP-only.
It's a fact that Apple is utilising Intel Quick Sync in FCP to a certain extent.
It would be interesting to see if AMD gets its FirePros in like it got the D700s last time around. They would probably get higher margins there.
 
AMD getting FirePro cards into Mac Pros seems to have helped them on the professional front with companies like Adobe and the like. I would think keeping those relationships going would be high on AMD's priority list, to keep everything accessible to everyone and keep proprietary stuff like CUDA out.
 
So no source huh?

For the Mac Pros?

No

It's called an educated guess, based on the limited market for the item and poor upgrade potential (Apple tends to forget them).

For other GPUs it's an educated guess; it is hard to find these specific sales figures for Apple products. Nonetheless it is an educated guess based on reasonable assumptions: the upgraded MacBooks are extremely expensive and not available in all locations, and the iMac market is small.

Here is some more information.

[Chart: Mac average selling price by quarter]


The average selling price is around $1,200. Apple is NOT selling a lot of expensive dGPU Macs.
 
So you don't know, or do you at least have some data backing you up?
 
It's a valid argument. But Apple does NOT publish (or at least I have not found) hard breakdowns of its Mac sales. However, it can easily be inferred that few of Apple's Macs ship with dGPUs.
I'm inclined to believe what you say; however, you are using words like "absolutely" and making it sound as if it were a matter of fact.
 
Wow. AMD in Nintendo/Sony/Microsoft boxes current and future, AMD in Apple products, AMD in my rig later this month (hopefully). 🙂 Then we have Vega and Zen in the future. All in all, things are slowly but surely beginning to turn around.
 
Wow. AMD in Nintendo/Sony/Microsoft boxes current and future, AMD in Apple products, AMD in my rig later this month (hopefully). 🙂 Then we have Vega and Zen in the future. All in all, things are slowly but surely beginning to turn around.
Jerry Sanders built a great company back in the day, but he was a bit naive and eccentric. For the first time in god knows how many years, AMD actually have a great CEO. Really impressed with Lisa so far.
 
Jerry Sanders built a great company back in the day, but he was a bit naive and eccentric. For the first time in god knows how many years, AMD actually have a great CEO. Really impressed with Lisa so far.

Maybe she has great social skills and a very keen eye for seeing through people.
 
It's anybody's guess.

If the 480 is a cut down part and a 'full' part exists, what are the yields like for both? Will the expected demand for $199ish cards be high enough to warrant artificially disabled full parts?

A non-functional core or CU isn't the only reason for disabling a part. Not all cores or CUs are capable of the same performance or efficiency, so even if you have 40 working CUs, you might disable the worst-performing 4 if some set number of them don't meet a particular performance threshold.

It's likely that there will be a full Polaris 10 at some point, but it's mostly a question of what kind of performance they want out of it and of the process becoming mature enough to support the inventory numbers they want. It may be that they want the 480X to be a 1070 competitor, and right now they can't get enough 40-CU chips that hit the clock speeds needed for comparable performance.
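The binning logic described above can be sketched in a few lines. This is a hypothetical illustration only; the CU counts, scores, and threshold are made-up numbers, not anything AMD has published:

```python
# Rough sketch of GPU die binning: a chip designed with 40 CUs (compute
# units) ships as a cut-down 36-CU part if its weakest CUs drag it below
# a performance threshold. All numbers are invented for illustration.

def bin_chip(cu_scores, full_cu_count=40, cut_cu_count=36, threshold=0.90):
    """Decide whether a die ships as the full part or the cut-down part.

    cu_scores: per-CU performance score (1.0 = nominal) for CUs that
    passed functional test; a die with too few working CUs is scrapped.
    """
    working = sorted(cu_scores, reverse=True)  # best CUs first
    if len(working) < cut_cu_count:
        return "scrap"
    # Full part: all 40 CUs present and even the slowest one fast enough.
    if len(working) == full_cu_count and working[-1] >= threshold:
        return "full part (40 CU)"
    # Otherwise fuse off the weakest CUs and ship the 36-CU SKU.
    return "cut-down part (36 CU)"

print(bin_chip([1.0] * 40))              # healthy die -> full part
print(bin_chip([1.0] * 36 + [0.7] * 4))  # 4 slow CUs -> cut down
print(bin_chip([1.0] * 30))              # too few working CUs -> scrap
```

The interesting consequence for unlockers is the middle case: a die can land in the cut-down bin with all 40 CUs physically functional, just not all fast enough for the full SKU.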
 
It is a bummer if you are stuck with Adobe products on Apple, as they usually tend to perform better under CUDA than OpenCL. I would love to see Adobe optimize their CC suite the way Apple optimizes Final Cut Pro with OpenCL/Quick Sync.
 
It is a bummer if you are stuck with Adobe products on Apple, as they usually tend to perform better under CUDA than OpenCL. I would love to see Adobe optimize their CC suite the way Apple optimizes Final Cut Pro with OpenCL/Quick Sync.

OpenCL performs better on AMD than Nvidia. And everything's going to Metal anyway.
 
It is a bummer if you are stuck with Adobe products on Apple, as they usually tend to perform better under CUDA than OpenCL. I would love to see Adobe optimize their CC suite the way Apple optimizes Final Cut Pro with OpenCL/Quick Sync.

This should probably change gradually if Nvidia is indeed phased out of another generation of Apple products.
 
Are you sure?
According to Wikipedia, the first GCN parts were released in 2011, while Nvidia introduced bindless graphics in 2009: http://developer.download.nvidia.com/opengl/tutorials/bindless_graphics.pdf

Bindless graphics =/= Bindless resources/textures

I'm aware that Nvidia introduced bindless vertex attribute and element arrays, but that is not nearly enough to do modern bindless rendering, when a good portion of the time goes to the driver trying to manage texture and sampler states ...

There's a good reason why Microsoft made the ExecuteIndirect functionality exclusive to hardware tier 2 and above ...
 
OpenCL performs better on AMD than Nvidia. And everything's going to Metal anyway.

I never said otherwise. But Adobe products perform better under CUDA than OpenCL. Last time I checked, it was about 20%-30% faster under CUDA vs OpenCL regardless of the hardware used.
 
I don't think CUDA even runs on AMD hardware. I am just observing that Adobe products run better on CUDA than they do on OpenCL.
 
I don't think CUDA even runs on AMD hardware. I am just observing that Adobe products run better on CUDA than they do on OpenCL.
There are really not that many benchmarks for that stuff, and the ones that are out there are pretty outdated as well.

Not that long ago Linus did a bunch of transcoding tests with Adobe Premiere, and while results looked pretty comparable between the two, with Nvidia pulling ahead in some tests and AMD pulling ahead in others, the R9 390 looked like the best bang per buck.

https://www.youtube.com/watch?v=g7cQK8jFPzo
 
I don't think CUDA even runs on AMD hardware. I am just observing that Adobe products run better on CUDA than they do on OpenCL.

...But Nvidia's OpenCL support is pretty bad. Hell, are there any Nvidia cards that do OpenCL 1.2? Maxwell only supports 1.1. Comparing the two and concluding that CUDA is better is misleading; it's not that CUDA is faster, but that Nvidia's implementation of OpenCL is borked.


And CUDA won't even run on a CPU or an AMD card, whereas OpenCL will.

Hell, isn't OpenCL parallel by nature? It's gotta work wonders even if you are just running a quad core with some crappy passive GPU.
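The point about OpenCL running on CPUs follows from its execution model: a kernel is just a function invoked once per work-item over an index range, which a multicore CPU can chew through as well as a GPU can. A minimal sketch of that model in plain Python (this is an illustration of the idea, not the real OpenCL API; `enqueue_nd_range` is a stand-in for what a call like `clEnqueueNDRangeKernel` does):

```python
# OpenCL-style execution model in miniature: the runtime invokes the
# kernel once per work-item, and the global id tells each invocation
# which element it owns. On a GPU these invocations run massively in
# parallel; on a CPU the same kernel is mapped across cores. The kernel
# code itself does not change between the two.

def saxpy_kernel(gid, a, x, y, out):
    """One work-item of SAXPY: out[gid] = a * x[gid] + y[gid]."""
    out[gid] = a * x[gid] + y[gid]

def enqueue_nd_range(kernel, global_size, *args):
    """Stand-in for an OpenCL runtime: run the kernel over the range."""
    for gid in range(global_size):  # a real runtime parallelizes this loop
        kernel(gid, *args)

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 10.0, 10.0, 10.0]
out = [0.0] * 4
enqueue_nd_range(saxpy_kernel, len(x), 2.0, x, y, out)
print(out)  # [12.0, 14.0, 16.0, 18.0]
```

Because nothing in the model assumes GPU hardware, an OpenCL CPU driver can schedule work-groups across host cores, which is exactly the fallback CUDA lacks.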
 