
[bitsandchips]: Pascal to not have improved Async Compute over Maxwell

Actaeon

Diamond Member
How reliable is this source? Supports the recent rumors of Pascal being a shrunk Maxwell with compute capabilities.

http://www.bitsandchips.it/52-engli...scal-in-trouble-with-asyncronous-compute-code

According to our sources, NVIDIA's next GPU microarchitecture, Pascal, will be in trouble if it has to make heavy use of asynchronous compute code in video games.

Broadly speaking, Pascal will be an improved version of Maxwell, especially in FP64 performance, but not in asynchronous compute performance. NVIDIA will bet on raw power instead of asynchronous compute abilities. This means that Pascal cards will be highly dependent on driver optimizations and the goodwill of game developers. So, GameWorks optimizations will play a fundamental role in the company's strategy. Is this why NVIDIA has made some GameWorks code publicly available?
 
Anyone wanna buy a 980ti and 27" gsync monitor? 8 months old, only one stuck pixel (green)
 
If that's the case then it means that Nvidia's GPUs from the same price range will have to be more powerful than AMD's GPUs.
 
Or NV is going to start going bankrupt trying to block devs from using Async Compute.

Things are going to get interesting! (Unless source is wrong, of course, then AC for everyone!)
 
Or NV is going to start going bankrupt trying to block devs from using Async Compute.

Things are going to get interesting! (Unless source is wrong, of course, then AC for everyone!)

Definitely the last part. Will need to get my popcorn ready. 🙂
 
I would have to think that the Pascal GPU was finalized a long time ago, and there's a good chance there are no async compute engines in it.

If this is true, we may see the fastest refresh of a GPU ever, with async compute added.

I'm holding my money till the X-mas holidays.
 
Enabling async compute may require a bigger redesign of the architecture, and Pascal doesn't look to be that. Gaming perf/W may be great, though.
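The intuition behind the async compute argument can be shown with a toy frame-time model: if the hardware can run the graphics and compute queues concurrently, the frame only takes as long as the longer workload; if it must serialize them, the times add. The numbers below are made up purely for illustration, and real GPUs share execution resources, so the overlap is never this ideal.

```python
# Toy frame-time model contrasting serialized vs concurrent execution
# of the graphics and compute queues. Workload times are invented
# illustrative values, not measurements from any real GPU.

graphics_ms = 10.0   # time for the graphics workload alone
compute_ms = 4.0     # time for the async compute workload alone

# Without async compute: the two queues are serialized.
serial_frame = graphics_ms + compute_ms

# With ideal async compute: compute fills idle shader cycles,
# so the frame takes only as long as the longer workload.
concurrent_frame = max(graphics_ms, compute_ms)

speedup = serial_frame / concurrent_frame
print(f"serial: {serial_frame} ms, concurrent: {concurrent_frame} ms, "
      f"speedup: {speedup:.2f}x")
```

The model also shows why the benefit depends on the workload mix: if a game submits little async compute work, `compute_ms` is small and the two cases converge.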
 
If true, this is even more reason for AMD to be aggressive with Polaris pricing, and with sponsoring titles that use async compute.
 
I hope that isn't the case. That'd be bad for everyone except maybe AMD, but I don't think the benefits of that would be worth it.
 
This has been my gut feeling for some time now. Maxwell is nothing more than a derivative of Pascal from when 20nm didn't pan out. I don't think Nvidia anticipated that AMD would shift the entire API landscape so quickly and effectively. None of their hardware that was in the pipe, including Pascal, can support concurrent compute + graphics. Nvidia certainly can still counter with bigger chips. However, AMD is going to get better utilization out of smaller silicon, which means their yields, and thus margins, are going to be better than Nvidia's. In terms of business, AMD has the better outlook.
 
Time will tell in the end, I guess. It comes down to whether they correctly predicted future gaming needs during the design process.

Guess we'll find out in a couple of months how the cards unfold.
 
Never underestimate the amount of people that will buy nVidia no matter what.

If day-one benchmarks show AMD winning, you can be pretty sure the number of people who only buy nV will drop. AMD should play their cards well for once; they won't get another chance like this to recover. Fottemberg has usually been right lately, so I wouldn't dismiss this... Pascal being Maxwell+ was the rumor after all, so it's not news, more like a small confirmation.


But yeah, it's gonna get quite interesting. I'll go make popcorn 😀
 
Never underestimate the amount of people that will buy nVidia no matter what.

Too true. AMD marketing really needs to hammer home a metric, something like perf/mm², assuming this all plays out. It also hinges on whether AMD can get devs to put in robust async implementations. In the best current scenarios, AMD is seeing ~30% gains over Nvidia. I doubt it's a linear correlation, but as a rough conceptual argument, AMD's 232 mm² Polaris chip would be equivalent to a 300+ mm² chip from Nvidia. At least with the market share AMD does have, they will have good efficiency on the business side. They should be making money.
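The "300+ mm²" figure above is just the 30% uplift applied to Polaris's die area. A quick sanity check of that arithmetic (assuming, as the post does, that performance scales roughly linearly with area, which is a conceptual simplification rather than a real scaling law):

```python
# Back-of-the-envelope: if async compute gives AMD ~30% better
# utilization, how big would an equivalent Nvidia die need to be
# to match a 232 mm^2 Polaris chip? Assumes linear perf-per-area
# scaling, which is a rough simplification for illustration only.

polaris_area_mm2 = 232
async_uplift = 0.30  # best-case gain cited in the post

equivalent_nvidia_area = polaris_area_mm2 * (1 + async_uplift)
print(f"Equivalent Nvidia die: {equivalent_nvidia_area:.0f} mm^2")
# 232 * 1.30 = 301.6, i.e. the "300+ mm^2" figure in the post
```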
 
Perf/mm² isn't going to sell anything. If they want to win, it's perf/watt.

Perf/watt might sell you GPUs, but perf/mm² is going to improve your margins, as smaller dies yield better and the cost per GPU is significantly lower (due to smaller size and better yields). The problem of yields on these bleeding-edge nodes is going to force a rethink of how future GPUs are designed. As Raja Koduri stated, the economics of smaller dies are much better, and going forward GPU vendors will have to rethink their strategy, as large dies of 500+ mm² might become very difficult to yield at future nodes like 10nm/7nm. In fact, for a GPU designer, perf/mm² is going to be even more important than perf/watt.
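The small-die economics argument can be sketched with the classic dies-per-wafer approximation and a Poisson yield model. All the inputs below (wafer cost, defect density, 300 mm wafers) are illustrative assumptions, not figures from any foundry, and the dies-per-wafer formula is a textbook approximation rather than an exact edge-loss calculation.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Textbook approximation of gross dies per wafer with an edge-loss term."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Poisson yield model: Y = exp(-A * D0), with A in cm^2."""
    return math.exp(-(die_area_mm2 / 100) * defects_per_cm2)

def cost_per_good_die(die_area_mm2, wafer_cost, defects_per_cm2):
    good = dies_per_wafer(die_area_mm2) * poisson_yield(die_area_mm2, defects_per_cm2)
    return wafer_cost / good

# Illustrative numbers only: wafer cost and defect density are assumptions.
for area in (232, 500):
    c = cost_per_good_die(area, wafer_cost=8000, defects_per_cm2=0.2)
    print(f"{area} mm^2 die: ~${c:.0f} per good die")
```

Because yield falls exponentially with die area while dies per wafer falls roughly linearly, the large die ends up several times more expensive per good die than the small one under these assumptions, which is the core of the perf/mm² margins argument.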
 