[SA] News of Nvidia’s Pascal tapeout and silicon is important


96Firebird

Diamond Member
Nov 8, 2010
5,746
342
126
The difference between S/A and the other "rumor" sites is that S/A is ad-free. The site is supported purely by subscribers. While other sites are looking for clicks, S/A isn't. So, as you say, while other sites just throw out whatever rubbish is floating around so people will click away, he needs people to feel like they have good info after clicking on his site.

He sells with sensationalism; to those who want to believe what he writes.

Sounds like you're his perfect audience.
 
Mar 10, 2006
11,715
2,012
126
They didn't use to be, though. They had huge market share over upstart AMD and paid people not to use AMD CPUs, even though they were superior. Then came Core 2 and the rest is history.

AMD was selling every CPU that it could make back in those days.
 

xthetenth

Golden Member
Oct 14, 2014
1,800
529
106
He sells with sensationalism; to those who want to believe what he writes.

Sounds like you're his perfect audience.

No, he's dead on. Charlie needs people to feel like he's giving good info they couldn't get elsewhere. It's pretty obvious from most of that article tbh. It's a huge justification for his existence and "look what I do that they don't" grandstanding.
 

moonbogg

Lifer
Jan 8, 2011
10,732
3,449
136
Intel was like gamer repellent back in the day. I wanted nothing to do with Intel because it was a total piece of CRAP.

I had:

XP 2400+
A64 3800+
FX-57

The FX-57 was interesting because it was the very moment that single-core CPUs basically jumped the shark, as was concluded in this AnandTech review.
I had friends who had already jumped on the dual-core X2 bandwagon, but I knew better. Dual core simply hadn't caught on yet at all, not for gaming at least. I wisely held out until buying my first dual core, the Core 2 Duo E6850, then replaced it with the E8400 and had that until the 2600K (biggest upgrade any human ever experienced) and now the 3930K.
This has been a brief history for no reason. Sorry.
 

xpea

Senior member
Feb 14, 2014
458
156
116
This is a good point, and there's one thing worth pointing out. Those tools are for that particular chip, not for every Pascal part. That does not necessarily mean that other chips aren't already being set up. There is room for goofy scenarios, like big Pascal and a small pipe-cleaner being first priority with a lagging mid-sized chip, but I personally doubt big Pascal will reach yields that allow consumer pricing for months, and I think it will come out a half-generation late because of it.
^^THIS

I'm amazed that nobody, nobody, NO-BO-DY expresses the fact that Charlie's article is based solely on the Zauba manifests, as if every freshly TSMC-baked NV GPU must go first to their Bangalore office :rolleyes:

C'mon guys, repeat after me: Nvidia is a Silicon Valley-based company with R&D headquarters in California, not India!!!

Outside of green team engineers and executives, at this point nearly everything about Pascal is pure speculation. But two things can be taken for granted: 1/ new NV GPUs will go first to Santa Clara, not Bangalore, and 2/ Erinyes from Beyond3D has a solid track record when it comes to Nvidia products. I trust him 1000 times more than Charlie and his too-many-to-count false predictions.

So until someone can track the stuff coming into NV's Santa Clara office, we don't know the real Pascal status...
 

S.H.O.D.A.N.

Senior member
Mar 22, 2014
205
0
41
I'm amazed that nobody, nobody, NO-BO-DY expresses the fact that Charlie's article is based solely on the Zauba manifests

It's based on the Zauba shipment manifests because that's exactly what other sites used as solid proof that Pascal silicon exists and is in fact being shipped around the world.
 
Feb 19, 2009
10,457
10
76
I'd like an intel GPU. That would be badass. Imagine a cool looking heat sink with a skull on it or something. That would be neato.

No you would not because Intel is giving you the tiniest freaken chip they can at maximum prices. If Intel has dominance in any market, expect consumers to lose out on innovation and performance leaps, and to be saddled with 5% gains each generation while prices go up.

Sorry for Intel lovers, but that's the reality and it's why Intel is so filthy rich.

I much prefer AMD vs NV in the graphics world, keeping each other honest so consumers benefit.

ps. I've used Intel CPUs for the past 10 years. ;)
 
Feb 19, 2009
10,457
10
76
It's based on the Zauba shipment manifests because that's exactly what other sites used as solid proof that Pascal silicon exists and is in fact being shipped around the world.

I think that was lost on some. SA's article was a rebuke of all the "tech sites" that regurgitate "Pascal GP100 is coming in APRIL!!!" or "Pascal taped out in June 2015!".

Whether you agree with his analysis or not, there's zero evidence of any of those statements being true.

There is, however, evidence that suggests they are not, like JHH showing a fake card, just like the wood-screw Fermi.

Either way, nothing is concrete, so you can take whatever you want from it. As for me, IF we get a Pascal Titan this year, it would be a miracle.
 
Mar 10, 2006
11,715
2,012
126
No you would not because Intel is giving you the tiniest freaken chip they can at maximum prices. If Intel has dominance in any market, expect consumers to lose out on innovation and performance leaps, and to be saddled with 5% gains each generation while prices go up.

Sorry for Intel lovers, but that's the reality and it's why Intel is so filthy rich.

I much prefer AMD vs NV in the graphics world, keeping each other honest so consumers benefit.

ps. I've used Intel CPUs for the past 10 years. ;)

Your understanding of this business seems very...out of line with reality :p
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
No you would not because Intel is giving you the tiniest freaken chip they can at maximum prices.

This is one thing a lot of people don't seem to realize. The mainstream Skylake CPU is a tiny chip - only about 122mm^2. That's nothing by GPU standards; $99 cards like the GTX 750 have a bigger die than this. You have to go back to Sandy Bridge before the mainstream Intel CPU had a die even as big as Pitcairn. (Sandy Bridge-E, with its 8 cores and ginormous L3 cache, was 435mm^2, about the same size as Hawaii. And even the $999 consumer part only had 6 of those 8 cores enabled.)

Incidentally, this fact puts Intel's "process lead" in perspective - they still haven't produced any genuinely big-die chips on 14nm. And the long delay for Broadwell-E points to yield issues as the likely cause.

Competition in the GPU market has forced AMD and Nvidia to give us a lot more transistors for the dollar than Intel does in the CPU arena.
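For anyone who wants to play with those numbers, here is a quick back-of-envelope sketch in Python. The die sizes are the approximate public figures (including the ones quoted above); the launch prices are my own assumptions from memory, so treat the mm²-per-dollar output as illustrative only.

```python
# Rough die-area-per-dollar comparison. Die areas are approximate public
# figures; the launch prices are assumptions for illustration only.
chips = {
    # name: (approx. die area in mm^2, assumed launch price in USD)
    "Skylake i7-6700K (4C CPU)": (122, 350),
    "GTX 750 (GM107 GPU)":       (148, 119),
    "Sandy Bridge-E (6C CPU)":   (435, 999),
    "Hawaii / R9 290X (GPU)":    (438, 549),
}

for name, (area, price) in chips.items():
    print(f"{name:27} {area:3d} mm^2 at ~${price:<4d} -> {area / price:.2f} mm^2 per dollar")
```

Even with generous assumptions about the prices, the GPUs deliver noticeably more silicon area per dollar, especially in the mainstream segment.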
 
Feb 19, 2009
10,457
10
76
This is one thing a lot of people don't seem to realize. The mainstream Skylake CPU is a tiny chip - only about 122mm^2. That's nothing by GPU standards; $99 cards like the GTX 750 have a bigger die than this. You have to go back to Sandy Bridge before the mainstream Intel CPU had a die even as big as Pitcairn. (Sandy Bridge-E, with its 8 cores and ginormous L3 cache, was 435mm^2, about the same size as Hawaii. And even the $999 consumer part only had 6 of those 8 cores enabled.)

Incidentally, this fact puts Intel's "process lead" in perspective - they still haven't produced any genuinely big-die chips on 14nm. And the long delay for Broadwell-E points to yield issues as the likely cause.

Competition in the GPU market has forced AMD and Nvidia to give us a lot more transistors for the dollar than Intel does in the CPU arena.

Explains my sentiments perfectly.

Intel is not interested in giving consumers great performance leaps for similar $$. They are only interested in maximizing profit, and that means selling the smallest chip for the max price with each new generation.

In the interest of gamers & consumers, I much prefer AMD vs NV competing fiercely on graphics. Hopefully Zen will actually be competitive vs Intel and give them a kick in the butt to actually start making GREAT CPUs again, not these itty-bitty +5% refreshes here and there while jacking up the price!
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
This is one thing a lot of people don't seem to realize. The mainstream Skylake CPU is a tiny chip - only about 122mm^2. That's nothing by GPU standards; $99 cards like the GTX 750 have a bigger die than this. You have to go back to Sandy Bridge before the mainstream Intel CPU had a die even as big as Pitcairn. (Sandy Bridge-E, with its 8 cores and ginormous L3 cache, was 435mm^2, about the same size as Hawaii. And even the $999 consumer part only had 6 of those 8 cores enabled.)

Incidentally, this fact puts Intel's "process lead" in perspective - they still haven't produced any genuinely big-die chips on 14nm. And the long delay for Broadwell-E points to yield issues as the likely cause.

Competition in the GPU market has forced AMD and Nvidia to give us a lot more transistors for the dollar than Intel does in the CPU arena.

The only problem with this is: what would Intel do with more transistors? With GPUs, they can throw transistors at more cores and see gains. With CPUs, which are used in a far more serial manner, adding more cores does not improve performance in most cases. Adding transistors to try and speed up linear tasks won't speed anything up, and adding more cores when they aren't needed results in slower cores.

CPUs are a different animal, and because of this, progress is slower. They need better architectural improvements to see faster speeds. They can't brute-force it like GPUs can.
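What bystander36 describes is essentially Amdahl's law. A minimal sketch of the math, purely for illustration (the parallel fractions below are made-up values, not measurements of any real program):

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction
# of the work that can run in parallel and n is the number of cores.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Illustrative fractions only; typical desktop software sits toward the
# low end, which is why doubling the core count often buys so little.
for p in (0.25, 0.50, 0.95):
    s4 = amdahl_speedup(p, 4)
    s8 = amdahl_speedup(p, 8)
    print(f"p = {p:.2f}: 4 cores -> {s4:.2f}x, 8 cores -> {s8:.2f}x")
```

At p = 0.50, going from 4 to 8 cores only moves you from about 1.6x to about 1.8x, which is the slower progress he is talking about.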
 
Feb 19, 2009
10,457
10
76
The only problem with this is: what would Intel do with more transistors? With GPUs, they can throw transistors at more cores and see gains. With CPUs, which are used in a far more serial manner, adding more cores does not improve performance in most cases. Adding transistors to try and speed up linear tasks won't speed anything up, and adding more cores when they aren't needed results in slower cores.

CPUs are a different animal, and because of this, progress is slower. They need better architectural improvements to see faster speeds. They can't brute-force it like GPUs can.

You know what they can do?

Instead of having the ~122mm2 Skylake with 4 cores, make it ~200mm2 with 8 cores.

Sell that for the same price. We all benefit.

Fact is they don't want to and nobody is making them "have to", ie. no competition. Selling a tiny chip is much more profitable.
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
You know what they can do?

Instead of having the ~122mm2 Skylake with 4 cores, make it ~200mm2 with 8 cores.

Sell that for the same price. We all benefit.

Fact is they don't want to and nobody is making them "have to", ie. no competition. Selling a tiny chip is much more profitable.

And for a lot of tasks it would be slower, and faster in others. More cores result in lower clock rates. It's a balancing game.
 
Feb 19, 2009
10,457
10
76
And for a lot of tasks it would be slower, and faster in others. More cores result in lower clock rates. It's a balancing game.

More cores do not result in lower clocks if you actually allow a higher TDP. Instead of having the high-end desktop part at 65W or below, bump it to 95W, give it 8 cores and a bigger chip at the same price. Faster performance all round.

Intel's focus has been die shrinks = more profit from selling a smaller die for higher $.
 

RampantAndroid

Diamond Member
Jun 27, 2004
6,591
3
81
More cores do not result in lower clocks if you actually allow a higher TDP. Instead of having the high-end desktop part at 65W or below, bump it to 95W, give it 8 cores and a bigger chip at the same price. Faster performance all round.

Intel's focus has been die shrinks = more profit from selling a smaller die for higher $.

Wait...isn't it that their yields are low, and development costs are skyrocketing for each new node?
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
Wait...isn't it that their yields are low, and development costs are skyrocketing for each new node?

Part of the problem, I think, is that they are using process shrinks to gain performance, rather than making better chips overall. I mean, it's a bit silly to go to 14nm and still have only quad cores with so-so performance gains. It's also why Zen stands a chance: ultimately, if Zen offers more than quad cores and similar single-thread performance, it automatically wins.
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
More cores do not result in lower clocks if you actually allow a higher TDP. Instead of having the high-end desktop part at 65W or below, bump it to 95W, give it 8 cores and a bigger chip at the same price. Faster performance all round.

Intel's focus has been die shrinks = more profit from selling a smaller die for higher $.

You should know that when OC'ed, the 6-core CPUs hit somewhat lower clocks, and the 8-core CPUs top out even lower. It's not huge, but they progressively reach lower OCs the more cores the CPU has.

If they jumped to 6 cores as standard, for most uses no one would know. I wouldn't be surprised, however, if 6 cores start becoming more common, but then what will happen is we continue to see small improvements each new generation. Unless a program benefits from more cores, it doesn't help to add more, and most personal PC programs don't benefit from more cores. If they did, AMD's 8-core CPUs would be doing a lot better than they are.
 

gamervivek

Senior member
Jan 17, 2011
490
53
91
Why is AMD being brought into this thread at all? They have nothing to do with anything in the OP.

Well, Charlie did mention AMD later as well, because they're the comparison against which Nvidia is 'late'.

I thought it was great that AMD were demoing working chips to the press in January, but apparently they had already done so back in early December when RTG and HDR/FreeSync etc. were announced.

AMD showed off functional Polaris silicon with multiple working devices in early December. It wasn’t rough, it wasn’t a static demo, it had drivers, cards, and all the things you would expect from non-first silicon. A month later Nvidia did not have silicon and lied to a room of analysts and press about it, and there was no way their CEO would be unaware of such a major milestone at his company.

PCPer also alluded to it in their CES review.
First, we got to see the upcoming Polaris GPU architecture in action running Star Wars Battlefront with some power meters hooked up. This is a similar demo to what I saw in Sonoma back in December,
http://www.pcper.com/news/Graphics-...laris-Architecture-and-HDMI-FreeSync-Displays

Much of what I learned during the RTG Summit in Sonoma is under NDA and will likely be so for some time. We learned about the future architectures, direction and product theories that will find their way into a range of solutions available in 2016 and 2017.
http://www.pcper.com/reviews/Graphics-Cards/Radeon-Technologies-Group-Update-2016-FreeSync-and-HDR

No wonder Koduri sounded so confident about being ahead of Nvidia by a decent margin.
 

moonbogg

Lifer
Jan 8, 2011
10,732
3,449
136
No you would not because Intel is giving you the tiniest freaken chip they can at maximum prices. If Intel has dominance in any market, expect consumers to lose out on innovation and performance leaps, and to be saddled with 5% gains each generation while prices go up.

Sorry for Intel lovers, but that's the reality and it's why Intel is so filthy rich.

I much prefer AMD vs NV in the graphics world, keeping each other honest so consumers benefit.

ps. I've used Intel CPUs for the past 10 years. ;)

I can't believe you said all this stuff. OMG
 
Mar 10, 2006
11,715
2,012
126
You know what they can do?

Instead of having the ~122mm2 Skylake with 4 cores, make it ~200mm2 with 8 cores.

Sell that for the same price. We all benefit.

Fact is they don't want to and nobody is making them "have to", ie. no competition. Selling a tiny chip is much more profitable.

Oh jeez, here we go...

By increasing from 4 cores to 8 cores, you immediately see one of two things happen:

1.) At high per-core frequency, power consumption skyrockets to potentially untenable levels.

2.) At low per-core frequency, power consumption is manageable, but you trade off single-threaded performance, a bad trade-off given what most people actually do with their PCs (a rough power sketch follows below).

Also, you should realize that with GPUs it's very easy to get a big speedup by just "throwing moar cores" at things, but the vast majority of client software is very difficult to truly get to take advantage of multiple CPU cores.

Another thing to note is that GPUs need to run at fairly low frequencies, while the CPUs that you're pissing all over run at >= 4GHz. Building chips that can actually make the cut in terms of frequency/power consumption is no small feat...which is probably why we saw very poor availability of the 6700K early on.

A lot of serious R&D work goes into designing & validating a high performance MPU. Just looking at die sizes and going, "ZOMG INTEL IS NOT TRYING CUZ NO COMPETITION" just shows that you are not really aware of the very real challenges required to make 5-10% improvements each year on an industry leading high-IPC design while maintaining extremely high frequencies.

Anyway, the performance delivered by something like a 6700K is extremely good and I can guarantee you that Intel designed the best CPU that they could have given the quite vast resources they allocate to MPU development each year.

p.s. wafer costs go up each year, so cost/mm^2 of silicon goes up. Why not measure in terms of how many transistors Intel gives you per year? ;)
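To put rough numbers on the frequency/power trade-off sketched in points 1 and 2 above, here is a toy calculation using the usual dynamic-power relation P ≈ cores × C × V² × f. Every value in it is invented for illustration; none of this describes an actual Intel part.

```python
# Toy dynamic-power comparison using P ~ cores * C * V^2 * f.
# All values are invented for illustration and do not describe any real CPU.

def dynamic_power(cores: int, capacitance: float, voltage: float, freq_ghz: float) -> float:
    """Relative dynamic power in arbitrary units."""
    return cores * capacitance * voltage ** 2 * freq_ghz

# Baseline: a 4-core part at 4.0 GHz and 1.20 V.
base = dynamic_power(4, capacitance=1.0, voltage=1.20, freq_ghz=4.0)

# Naive 8-core part at the same clocks and voltage: roughly double the power.
wide_fast = dynamic_power(8, capacitance=1.0, voltage=1.20, freq_ghz=4.0)

# 8 cores held near the original power budget by dropping frequency and
# voltage, i.e. trading single-threaded speed for width.
wide_slow = dynamic_power(8, capacitance=1.0, voltage=1.00, freq_ghz=3.0)

for label, power in [("4 cores @ 4.0 GHz", base),
                     ("8 cores @ 4.0 GHz", wide_fast),
                     ("8 cores @ 3.0 GHz", wide_slow)]:
    print(f"{label}: {power / base:.2f}x baseline dynamic power")
```

Doubling the cores at the same clocks roughly doubles dynamic power (option 1), while holding power roughly flat forces clocks and voltage down (option 2), which is exactly the trade-off described above.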
 