I'm not acting anything, it's their words, not mine. They didn't say "oh, we threw 3 billion at the Pascal arch", they said they threw 3 billion into "HPC".
So Pascal got zero gaming improvements. Because all R&D is spent on HPC. Got it.
If some people still don't understand that Nvidia didn't raise the prices to cover R&D costs, maybe this will help you understand what many people here have been saying:
Code:
                        Year Ending   Year Ending
                        Jan 2013      Jan 2011
Revenue                 4.28B         3.54B
Cost of Revenue         1.83B         1.95B
Gross Operating Profit  2.45B         1.60B
Year Ending 2011 is a few months after the Fermi GPU release; Year Ending 2013 is a few months after the Kepler release, where Nvidia doubled or nearly doubled prices. Do you see what happened back in 2012? Their Gross Operating Profit skyrocketed 😀
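To put numbers on that, here's a quick back-of-the-envelope margin calculation in Python. It's only a sketch using the figures from the table above; the margin percentages are my own arithmetic, not from the filing.

Code:
# Gross margin from the figures in the table above (billions USD).
fy2011 = {"revenue": 3.54, "cost": 1.95}
fy2013 = {"revenue": 4.28, "cost": 1.83}

for label, fy in (("FY2011, post-Fermi", fy2011), ("FY2013, post-Kepler", fy2013)):
    gross = fy["revenue"] - fy["cost"]
    print(f"{label}: gross profit {gross:.2f}B, margin {gross / fy['revenue']:.0%}")

# FY2011, post-Fermi: gross profit 1.59B, margin 45%  (the table's 1.60B is the same, rounded)
# FY2013, post-Kepler: gross profit 2.45B, margin 57%

So it's the margin, not just the absolute profit, that jumped between those two years.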
So Pascal got zero gaming improvements. Because all R&D is spent on HPC. Got it.
The hell are you talking about? Did I mention anything about gaming performance? I said that their CEO said they threw 3 billion at the HPC department, and that is all. Why do you twist everything just to suit your needs? Get real for once.
Oh, that's funny. What about all the threads moaning about console ports being shoddy and running at cinematic framerates?
Seems like you are trying to paint people with a wide brush. I have no issue paying money for a product I deem I will use. And I have zero issues buying those quirky JRPGs that are finally getting PC versions released, because I know I will enjoy them even if the one I'm currently playing is essentially a PS Vita port locked at 1080p and 60 FPS. <rest of post>
Have fun!!😀
I think what people are missing here is what perf/mm² NV might be getting from uarch changes. They already had their perf/mm² jump with Maxwell, and most people are arguing about what big architectural changes will be introduced with Pascal, if any. They also misjudge that AMD's best perf/mm² isn't Fiji but rather Hawaii, and that is including a juicy big 512-bit GDDR5 PHY.
Honestly, I wouldn't dismiss a scenario where AMD's 232mm² die is within 5% of NVIDIA's 310mm² die, factoring in 14LPP's slight density advantage over 16FF+ and the possibility that AMD is doing the Maxwell type of changes this round while NVIDIA is doing the GCN type, which have totally different impacts on perf/mm².
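For what that scenario would actually require, here's a quick sanity check in Python. The die sizes and the 5% gap are the assumptions from the post above, not confirmed numbers:

Code:
# Scenario: AMD's 232mm^2 die lands within 5% of NVIDIA's ~310mm^2 die.
# What perf/mm^2 advantage would that imply for AMD?
amd_area, nv_area = 232.0, 310.0
amd_perf = 0.95  # performance normalized to NVIDIA = 1.0
ratio = (amd_perf / amd_area) / (1.0 / nv_area)
print(f"AMD perf/mm^2 vs NVIDIA: {ratio:.2f}x")  # ~1.27x

# So the scenario needs roughly 27% better perf/mm^2 from AMD,
# before any 14LPP-vs-16FF+ density correction.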
Seriously guys, you need to stop this nonsense. P100 clocks around 1500MHz with 15 billion transistors, while AMD used only an 850MHz Polaris 10 for their first demo. nVidia will have a huge clock advantage over AMD; I guess much bigger than it was with Maxwell over Tonga/Hawaii/Fiji.
There are a couple of mistakes there. The first is that we don't know what the max clock rate is for AMD (or if it even matters, given the alleged SMT-like technology they might be using), only that they've shown it's at least 850MHz. For all we know they could be sandbagging.
The bigger mistake though is that clockspeed is meaningless when discussing different architectures. Apple's SoCs have almost always had lower clock speed than other SoCs, yet typically have better performance. Intel's NetBurst (Pentium 4) microarchitecture was also capable of higher clock speeds than their previous generation of chips as well as AMD's offerings, but they typically didn't perform as well even though they could clock significantly higher.
Clock speed increases in GP100 are only useful when comparing it to Maxwell, and even that only holds true as long as the architecture hasn't changed significantly. We can think of it as a lower-bound for performance gains, but it doesn't tell us how it will compare against Polaris without having more details about its performance.
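To illustrate that last point, here's a toy throughput model in Python. Every number in it is invented purely for illustration; it just shows that units × clock × work-per-clock is what matters, not clock alone:

Code:
# Toy model: throughput ~ execution units * clock * work per clock (IPC).
def throughput(units: int, clock_ghz: float, ipc: float) -> float:
    return units * clock_ghz * ipc

wide_slow = throughput(units=4096, clock_ghz=0.85, ipc=1.0)     # wide, low clock
narrow_fast = throughput(units=2560, clock_ghz=1.50, ipc=0.85)  # narrow, high clock
print(f"{wide_slow:.1f} vs {narrow_fast:.1f}")  # 3481.6 vs 3264.0

# The hypothetical 850MHz design wins despite the 1.5GHz part's
# huge clock advantage.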
go see the interview
No, they spent $2-3B on Pascal.
No you didn't, because they never said anything about this at the event; it was an interview on a site. D:
I saw the event. Spending $3B on Pascal HPC alone would equal all of Nvidia's R&D for 2½ years.
Sure, AMD will come within 5% when today they need a bigger die and nearly twice the power.
BTW: Hawaii needs the 512-bit interface because it doesn't have any delta compression.
Clock rate matters with GPUs. When you can clock your GPU higher while using the same power, you will certainly win most benchmarks. GPUs have a lot of serial pipelines.
Agreed, once you both also condemn the opposite: high-end hardware buyers belittling low-end purchasers. It happens quite often here.
Chiphell is saying the GeForce GTX 1080 comes with 8GB of GDDR5X on a 256-bit bus, with 3x DisplayPort, 1x HDMI, and 1x DVI-D.
Not to mention AMD itself describes the product as desktop mainstream, while leaks indicate NVIDIA is positioning GP104 as a more expensive GM200 replacement. Really doubt they would charge over $500 for the new VGAs if they weren't very confident about beating the competition.
So at 317/333mm², what do you think the chances are of the core count being more than 2560?
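For a rough feel, here's a naive area-scaling sketch in Python. The GM204 numbers (2048 cores, 398mm² on 28nm) are public specs, but the ~2x logic density gain for 16FF+ over 28nm is my assumption, and caches/IO scale worse than logic, so treat this as an upper bound only:

Code:
# Naive ceiling on shader count at ~317mm^2, scaled from GM204.
gm204_cores, gm204_area = 2048, 398.0   # GTX 980's GPU on 28nm
target_area = 317.0
density_gain = 2.0                      # assumed 16FF+ vs 28nm logic density

ceiling = gm204_cores * (target_area / gm204_area) * density_gain
print(f"{ceiling:.0f} cores")  # ~3262 -> area alone wouldn't rule out >2560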
JHH said what the development of Pascal cost at the event where he announced GP100.
It sounds like Polaris 10's TDP is going to be 120W-ish, whereas the 1070 and 1080 will be similar to the 970 and 980 (145W and 165W). It's not out of the question that a full Polaris 10 could have similar perf/W but be slower overall due to the lower clock speed.
It is out of the question. A reference GTX 980 uses 165W. A 390X, with similar performance, is at around 300W or more.
Don't forget that a GTX 980 has two huge disadvantages: less compute performance and less bandwidth. Both should be fixed with a >300mm² die versus Polaris 10.
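As a rough yardstick for that perf/W argument, here's the arithmetic in Python. The wattages are the ones cited above, and "similar performance" between the GTX 980 and 390X is the premise of the post, not my claim:

Code:
# Perf/W gap implied by the wattages above, assuming equal performance.
gtx980_w, r390x_w = 165.0, 300.0
print(f"390X draws {r390x_w / gtx980_w:.2f}x the power")  # ~1.82x

# At equal performance that's roughly an 80% perf/W deficit on 28nm,
# which is why a 120W-ish Polaris 10 would be a big deal.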