
Apple dumps Nvidia from the MacBook Pro

I can't believe they are charging $500 to upgrade from Intel Iris to a lowly M370X. They are basically shaving $100 off the CPU cost and adding a $200 GPU, for a total net cost of $100, and gouging the heck out of the consumer for $400.
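In numbers, the claimed markup works out like this (a quick sketch; the $200 GPU and $100 CPU figures are the poster's estimates, not published BOM costs):

```python
# The poster's back-of-envelope cost breakdown for the $500 upgrade tier.
# All component costs are the poster's guesses, not Apple's actual BOM.
upgrade_price = 500   # what Apple charges for the dGPU tier
gpu_cost = 200        # estimated cost of the M370X to Apple
cpu_saving = 100      # claimed saving on the CPU in that tier

net_component_cost = gpu_cost - cpu_saving
margin = upgrade_price - net_component_cost
print(net_component_cost, margin)  # 100 400
```

Whether the $100 CPU saving is even plausible is disputed in the replies below, since the upgrade tier also bumps the CPU and storage.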
 
I can't believe they are charging $500 to upgrade from Intel Iris to a lowly M370X. They are basically shaving $100 off the CPU cost and adding a $200 GPU, for a total net cost of $100, and gouging the heck out of the consumer for $400.

You do realize that the more expensive one has a faster CPU and double the storage space, right? Plus the better GPU.

The cost is not out of line.
 
There was a leak in December; if that's to be trusted, the M370X would deliver around what a cut-down Tonga would.

 
Apple pushes OpenCL, so this was to be expected.

Don't know what Apple will do when NV shows them new shiny slides of Pascal claiming X FLOPS in FP16. They'll probably laugh their *sses off and dump them from their product stack, again.
 
With the great efficiency of Maxwell, I really don't understand why Apple went back to AMD, unless, like others said, it has to do with OpenCL vs. CUDA. Either that, or there is something political going on behind the scenes.
 
I can't believe they are charging $500 to upgrade from Intel Iris to a lowly M370X. They are basically shaving $100 off the CPU cost and adding a $200 GPU, for a total net cost of $100, and gouging the heck out of the consumer for $400.

It is a $300 upgrade from a 256GB to a 512GB SSD...
 
Maxwell's efficiency in gaming but not in GPGPU (where it drastically bombs in performance to stay within its alleged gaming TDP) probably has something to do with it. In FP32, Maxwell is roughly 0.75x the performance for 0.5x the power consumption versus Kepler.
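Taken at face value, those rough figures actually imply a perf/watt win for Maxwell; a quick sketch (the 0.75x and 0.5x ratios are the poster's estimates, not measured data):

```python
# Perf/watt implied by the poster's rough FP32 figures, Kepler = 1.0 baseline.
maxwell_perf = 0.75    # relative FP32 throughput vs. Kepler
maxwell_power = 0.5    # relative power draw vs. Kepler

perf_per_watt = maxwell_perf / maxwell_power
print(perf_per_watt)  # 1.5
```

So by these numbers Maxwell loses on absolute FP32 throughput but is still ~50% better per watt; the argument in the thread is really about absolute compute performance and consistency, not efficiency.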
 
Maxwell's efficiency in gaming but not in GPGPU (where it drastically bombs in performance to stay within its alleged gaming TDP) probably has something to do with it. In FP32, Maxwell is roughly 0.75x the performance for 0.5x the power consumption versus Kepler.

Nvidia doesn't support OpenCL 2.0, whereas AMD/Intel do. This was most likely a big reason as well.
 
Maxwell's efficiency in gaming but not in GPGPU (where it drastically bombs in performance to stay within its alleged gaming TDP) probably has something to do with it. In FP32, Maxwell is roughly 0.75x the performance for 0.5x the power consumption versus Kepler.

People are still recycling this old argument which doesn't pan out.

http://www.tomshardware.com/reviews/geforce-gtx-750-ti-review,3750-16.html

http://www.tomshardware.com/reviews/geforce-gtx-750-ti-review,3750-17.html

The 750 Ti is quite competitive vs. Bonaire and Pitcairn at SP OpenCL compute.

All while using noticeably less power.

[Image: peak power consumption chart]


There is no bombing in performance or clocks. (Tom's 970/980 review is plagued by problems, and they later redacted part of their data, which was brought up many times on these forums.) Also, Ryan Smith (see his post history) stated that AT's reference 980 throttled back but never went over TDP in AT's compute benchmarks.
 
With the great efficiency of Maxwell, I really don't understand why Apple went back to AMD, unless, like others said, it has to do with OpenCL vs. CUDA. Either that, or there is something political going on behind the scenes.

Nothing political, just money.

The performance/watt of Maxwell isn't as great in compute as it is in graphics, and Apple doesn't care about graphics at all, since you can't really game on Macs anyway, so it's not worth the premium.
 
Sorry, not talking about benchmarks here but real performance. CUDA devs had to rewrite quite big portions of their software (without tech papers for Maxwell v2, to their dismay) to accommodate the new memory and cache system and to make Maxwell at least competitive with Kepler products. For each piece of software you show a benchmark of doing "ok" performance, I can show real-world software that either has abysmal Maxwell performance or has to be reworked to make Maxwell competitive. Right now lots of prosumers are still on Kepler because Maxwell performance is inconsistent, and they are still waiting for their entire production software stack to accommodate the changes in Maxwell. Seeing as Apple is all about a tight circle between software and hardware, inconsistency or rewriting your software around one generation of products is undesirable. Why bother if the competition has performance consistency in the API you really use (OpenCL)?

Power efficiency is a gaming fad right now. Translating it into the prosumer world and thinking the same argument will hold is just downright silly.
 
Not sure why that matters to Apple. Moving to AMD cuts out CUDA completely.

As an aside, can you PM me some links about those reworks? I'm curious.
 
It must have DP 1.3. I don't see a possibility of pushing 5K at 60Hz over a single signal otherwise.
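A back-of-envelope bandwidth check supports this (a sketch assuming 24-bit color and counting active pixels only; blanking intervals raise the real on-wire requirement, and the DP payload figures assume 8b/10b encoding overhead):

```python
# Uncompressed video bandwidth needed for 5K @ 60 Hz, 24-bit color.
width, height = 5120, 2880
refresh_hz = 60
bits_per_pixel = 24

gbps = width * height * refresh_hz * bits_per_pixel / 1e9
print(f"{gbps:.1f} Gbit/s")  # 21.2 Gbit/s

# DP 1.2 HBR2 carries ~17.28 Gbit/s of payload (21.6 Gbit/s raw, 8b/10b);
# DP 1.3 HBR3 raises that to ~25.92 Gbit/s.
dp12_payload = 17.28
dp13_payload = 25.92
print(gbps <= dp12_payload, gbps <= dp13_payload)  # False True
```

So even before blanking overhead, 5K60 overshoots a single DP 1.2 link but fits within DP 1.3, which is why existing 5K displays need dual-link tricks.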
 
People are still recycling this old argument which doesn't pan out.

http://www.tomshardware.com/reviews/geforce-gtx-750-ti-review,3750-16.html

http://www.tomshardware.com/reviews/geforce-gtx-750-ti-review,3750-17.html

The 750 Ti is quite competitive vs. Bonaire and Pitcairn at SP OpenCL compute.

All while using noticeably less power.

[Image: peak power consumption chart]


There is no bombing in performance or clocks. (Tom's 970/980 review is plagued by problems, and they later redacted part of their data, which was brought up many times on these forums.) Also, Ryan Smith (see his post history) stated that AT's reference 980 throttled back but never went over TDP in AT's compute benchmarks.

You are comparing desktop chips to mobile ones? The 750 Ti is not a mobile chip; its performance does not represent a 750M.

Also, Nvidia's OpenCL performance does not compare to AMD's, and the fact that they JUST NOW started to support OpenCL 1.2 (which AMD supported YEARS AGO) is another issue. Apple has been pushing for 2.0 support, which Intel and AMD have.
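For illustration, checking whether a device meets a minimum OpenCL version is a simple parse of the `CL_DEVICE_VERSION`-style string drivers report; the example strings below are hypothetical, not queried from real hardware:

```python
import re

def meets_opencl_version(device_version: str, required: tuple) -> bool:
    """Parse an 'OpenCL <major>.<minor> ...' version string and check it
    against a required (major, minor) minimum."""
    m = re.match(r"OpenCL (\d+)\.(\d+)", device_version)
    if not m:
        return False
    return (int(m.group(1)), int(m.group(2))) >= required

# Hypothetical driver-reported strings:
print(meets_opencl_version("OpenCL 1.2 CUDA", (2, 0)))              # False
print(meets_opencl_version("OpenCL 2.0 AMD-APP (1642.5)", (2, 0)))  # True
```

This is the kind of gate an OpenCL 2.0 feature path would sit behind, which is why the supported version matters to software vendors, not just the raw throughput.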
 
Apple really pushes OpenCL? Final Cut Pro is about the only major product from Apple that uses OpenCL, from what I have seen. I'd guess the real reason is AMD gave Apple a deal they couldn't resist. You know, why AMD's margins will remain 50% lower than their competition's.
 
With the great efficiency of Maxwell, I really don't understand why Apple went back to AMD, unless, like others said, it has to do with OpenCL vs. CUDA.

It's simple. The 15" MacBook Pro is a "professional" product, just like the iMac and Mac Pro. The software partners are now optimizing their products for GCN, and Apple just uses the ideal GPU architecture in the new MacBook Pro for their needs. In this case they don't need to optimize for Maxwell and Kepler, just for Intel Gen7.5/Gen8 and AMD GCN. Focusing on just two base GPU architectures is much simpler for them.
 
You know why Apple could choose AMD over Nvidia? First is the cash. Second is... FreeSync technology. Retina 4K and 5K Thunderbolt Displays...
 