
[Kitguru] Nvidia's big Pascal GP100 has taped out - Q1 2016 release

Likely fake. I don't think the market's ready for something like a $1500 Titan. I would think they would be more gentle with the price hikes.

I think that NVIDIA should be able to sell a GP100 w/ 16GB of HBM2 at the $1K price point. NV can use the current Maxwell chips, which are still quite efficient, at lower price points until it can roll out the rest of the Pascal family.
 
I suppose it depends whether you like using your card to play games, or to argue Green vs Red.

It does not. Again, it's a simple fact.
AMD cards improved due to drivers.

That's it. I'm not saying anything else.

When you say "Nvidia cards got worse", that's wrong. They didn't get slower, they didn't magically suck. AMD improved their drivers and improved their GPU performance.

No spin, nothing else.

When you start to skew what is actually happening, we're not having a conversation based in reality. We're then having a conversation based on a false premise, which is useless.

I don't care which "team" wins. It's irrelevant to me, I buy whatever GPU I need. With Gsync/Freesync, the options are already chosen for me.
 

Come on! AMD drivers improving to outperform nVidia's? "Impossibru Bro!" 😉
 

Reality vs what people argue about on forums is often wildly different, I'm afraid. Increasing AMD's performance over time won't somehow make the experience of an nVidia user worse as they actually game. Unfortunately, for some people actually pushing pixels with a GPU is less important than arguing about which team is winning. For them, AMD improving their package does hurt nVidia (or vice versa) since all they really care about is arguing the relative position of the two companies.
 
I think we could see some shifts. Before Kepler, Nvidia's power consumption was epic. I don't think people realize what Nvidia had to do to get to Kepler and have launch advantages over AMD. If they are pushing Pascal as a compute GPU, they might be reversing some of those cuts, and that could mean Pascal will be a power-hungry chip at the least. Like someone said before, overclocking could take a dive, etc.

http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/3

If they keep their software scheduler they can probably stay ahead of AMD on performance when asynchronous compute isn't in use. That might be the most important thing. The efficiency gains they get from investing in drivers are their biggest advantage, and things point to that possibly going away next year. AMD ended up ahead sooner or later against Kepler and probably most of the Maxwell 2 cards, but next year might be the first time in a while they have a generational lead (no new GPU from Nvidia on the same architecture wins). Mostly because they have the experience with HBM, DX12-like APIs, and making these full chips while fighting for efficiency. Nvidia dropped that fight with Kepler and might pick it up, years later, in 2016.

This might not matter to nvidia though. A minor loss in consumer graphics won't matter if they sell more GPUs at higher prices to professionals.

I was quite surprised when I saw Mahigan talking about this on ocn, I was a bit out of loop after the fermi release and was quite surprised that AMD and nvidia had switched positions on the scheduling front.

Before Fiji's launch it was apparent that it would boil down to how well AMD cards overclock; right now it looks like it'd again be decided by how big or small the difference in clockspeeds is between the two.
 
Completely and utterly wrong, because we have an absolute reference on the matter of improving or losing performance.

That depends on whether you are comparing NV to AMD or NV to NV.

If you're comparing the relative performance between the two companies, then a gain by AMD will make NV's comparative performance worse. If you're comparing NV to its historical performance, then yes, you do have an absolute reference.

It all depends on what you are trying to use the information to determine.
 
While they do make A9s, the question is cost. Apple seems to be willing to pay $100 or even $200 for every ~100mm2 A9.

Where the hell are you getting your numbers from, those figures don't make a lick of sense.

I'd be very surprised if they are paying much more than $50 per SoC, and that would be generous.
 

Wafer cost, yield, R&D, etc.

[Image: 0911CderFig2_Wafer_Cost.jpg, wafer cost comparison chart]
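Those three inputs are enough to back out a rough per-die figure. Here's a minimal sketch of that arithmetic; every number in it (an $8,000 FinFET wafer, a ~100 mm² die, 70% yield) is an illustrative assumption, not actual foundry pricing from the chart:

```python
# Rough sketch: per-die cost from wafer cost and yield.
# All inputs are illustrative assumptions, not real foundry pricing.
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Classic approximation for usable dies on a round wafer,
    subtracting a term for edge loss. Close enough for a forum argument."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_cost_usd: float, die_area_mm2: float,
                      yield_fraction: float,
                      wafer_diameter_mm: float = 300) -> float:
    # Only the dies that yield count toward recouping the wafer cost.
    good = dies_per_wafer(wafer_diameter_mm, die_area_mm2) * yield_fraction
    return wafer_cost_usd / good

# Assumed: $8,000 wafer, ~100 mm^2 die, 70% yield on a 300 mm wafer.
print(round(cost_per_good_die(8000, 100, 0.70), 2))  # → 17.86
```

The point of the sketch is that the per-die number is very sensitive to yield and die area, which is exactly why the A8-vs-A9 comparisons below hinge on those two factors.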
 
Inference based on questionable reasoning then?

I'm pretty sure it would be huge news if the BoM on the 6s had doubled over the 6. Has anyone heard any news to that effect?
 

The one estimate I've seen had the manufacturing cost of the A9 at $20-25. Apple keeps such a tight lid on things that we don't even really know if that's close to being accurate, and of course it doesn't include the R&D to design the chip, which had to be massive, especially given it's at two foundries.

Either way I don't think it's relevant to GPUs since Apple's volume is so much more than both AMD+nVidia combined that pricing isn't comparable.
 
Shintai threw me by talking about cost per SoC.

Who knows what the A9 will end up costing Apple in total.

R&D is a bit nebulous, and difficult to integrate into a conversation about wafer cost, before a particular part is EoL.

$20-25 makes sense to me. And from there you can start to compare nvidia and amd's position.
 
A8 was ~$37 just in raw manufacturing cost, excluding all other expenses.
[Image: Apple-iPhone-5s-6-and-6-Plus-Teardown-cost-chart.png, iPhone teardown cost chart]


A8 is smaller than the A9, uses a process that is much cheaper, and got higher yields.

A9 is pretty much equal in size to an 8MB-cache dual-core GT2 Skylake (bigger than Core M), but without being close to the same yields.

And then there is the R&D on top.

As long as Apple can ignore cost they can keep it going.
 

Apple A7 (104mm2) on the iPhone 5s made at 28nm cost $36

Apple A8 (89mm2) on the iPhone 6 made at 20nm cost $37

Apple A9 (96mm2) on the iPhone 6s made at 14nm FF cost $37

So, 14nm FF is not that much more expensive than it was thought it would be.
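Taking those quoted figures at face value, the implied cost per mm² stays in a narrow band across all three nodes, which is the point being made. A quick sanity check (die sizes and dollar amounts are just the estimates above, so treat the output with the same skepticism):

```python
# Cost per mm^2 implied by the quoted teardown estimates.
# These are the thread's numbers, not verified Apple/foundry data.
chips = [
    ("A7", "28nm",    104, 36),
    ("A8", "20nm",     89, 37),
    ("A9", "14nm FF",  96, 37),
]
for name, node, area_mm2, cost_usd in chips:
    print(f"{name} ({node}): ${cost_usd / area_mm2:.2f}/mm^2")
# A7 works out to ~$0.35/mm^2, A8 ~$0.42/mm^2, A9 ~$0.39/mm^2
```

On these estimates, 20nm actually comes out the most expensive per mm², and 14nm FF sits between the two older nodes rather than above them.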
 

I would actually have to think the manufacturing cost of 16FF would be cheaper than 20 planar. It's not really a new node. Granted the design cost would be much more, which is obviously a bigger problem for AMD+nVidia than Apple.

Still, this isn't relevant to GPUs.
 

I think the numbers from iSuppli are basically made up.
 