[Eurogamer] Deep Dive on the PS4 PRO GPU (Polaris + Vega features!)


zinfamous

No Lifer
Jul 12, 2006
110,515
29,100
146
This is all just peachy, but I really want to hear some legit info about the Vega dGPU, including at least some of the real specs, ballpark costs, and a release date. The last good info we heard was that we should be getting some news, at least, before the end of this month?
 

jpiniero

Lifer
Oct 1, 2010
14,510
5,159
136
This is all just peachy, but I really want to hear some legit info about the Vega dGPU, including at least some of the real specs, ballpark costs, and a release date. The last good info we heard was that we should be getting some news, at least, before the end of this month?

Isn't Vega not coming out until 1Q next year now? Maybe they will demo it alongside Zen at CES.
 

zinfamous

No Lifer
Jul 12, 2006
110,515
29,100
146
Isn't Vega not coming out until 1Q next year now? Maybe they will demo it alongside Zen at CES.

Most likely. Latest info I heard--now I can't recall if this was purely off of that wccftech nonsense or if it was also found somewhere else--was that Vega 10 (big Vega) is planned for late November announce/end of year release, with small Vega coming Q1 or Q2. If that was only wccftech, then it's complete nonsense and no one should assume that is true. :D

I believe the latest info from AMD's mouth is still that "Vega" is targeted for 1H 2017... which tells me Q2 2017 is their most certain release window. It could mean that big Vega will be earlier, but that all Vega chips should be out on the market by Q2.

RS is probably correct that they are facing big supply shortages of HBM2, which is delaying the ramp-up of stock, on top of not wanting to announce way too early and end up with a huge gap between specs/pricing and actual release.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
AMD has 2 options and neither of them is good on paper:

1) Launch Vega 10 ASAP for $650 and trounce the 1080, but then NV releases 2070/2080 GP104 refreshes, GP102, and a full 3840 CC Titan XP Black. This would steal AMD's thunder in a hurry. They would then be forced to either drop prices, making the early adopters eat the losses, or keep prices and end up getting outsold like crazy, as we've seen with the 980 Ti vs. Fury X.

2) Wait until NV launches the refreshes and GP102/Titan XP Black, then undercut them in every segment. The issue is the perception of the inferior brand being priced lower, and as we know most consumers will pay huge premiums for similar or slower NV hardware (aka $299 280X vs. $399-449 770 2-4GB, $399 290 vs. $499 780, $549 290X vs. $699 780Ti).

There are really only two ways Vega can disrupt the market. The first is if it flat out beats anything NV has in 2017. The second is if it massively undercuts 1080Ti/2080Ti GP102, where AMD offers Titan XP level of performance for $549 and 1080Ti suddenly looks stupid if NV launches it for $749-799.

I think AMD almost has to launch product first since it seems no matter what they do, NV always has a response.
 

Vaporizer

Member
Apr 4, 2015
137
30
66
I also do not think that a company like AMD can pull off a Zen release and a Vega release in parallel. Therefore I assume Q1 2017 for Zen and Q2 2017 for Vega.
 

Dave2150

Senior member
Jan 20, 2015
639
178
116
AMD has 2 options and neither of them is good on paper:

1) Launch Vega 10 ASAP for $650 and trounce the 1080, but then NV releases 2070/2080 GP104 refreshes, GP102, and a full 3840 CC Titan XP Black. This would steal AMD's thunder in a hurry. They would then be forced to either drop prices, making the early adopters eat the losses, or keep prices and end up getting outsold like crazy, as we've seen with the 980 Ti vs. Fury X.

2) Wait until NV launches the refreshes and GP102/Titan XP Black, then undercut them in every segment. The issue is the perception of the inferior brand being priced lower, and as we know most consumers will pay huge premiums for similar or slower NV hardware (aka $299 280X vs. $399-449 770 2-4GB, $399 290 vs. $499 780, $549 290X vs. $699 780Ti).

There are really only two ways Vega can disrupt the market. The first is if it flat out beats anything NV has in 2017. The second is if it massively undercuts 1080Ti/2080Ti GP102, where AMD offers Titan XP level of performance for $549 and 1080Ti suddenly looks stupid if NV launches it for $749-799.

I think AMD almost has to launch product first since it seems no matter what they do, NV always has a response.

I think you're forgetting one important fact - many people (myself included) were put off buying a Fury X due to the 4GB issue.

Vega should have no such issue, since it's rumoured to have 16GB of VRAM. I believe it will be much more successful than the Fury X ever was.

The only issue I can see is if they can't launch it soon enough.
 

beginner99

Diamond Member
Jun 2, 2009
5,208
1,580
136
I like the idea of another player in the market actively pushing new high-performance GPU hardware techniques.
Sadly, it seems financially better to push your crappy software extensions into games and engines, reducing performance for everyone, but much less so for your own hardware than for your competitors'.
 
May 11, 2008
19,306
1,131
126
Sebbbi was a lead console engine programmer at Ubisoft (on one of their engines, I don't know which one). He is now a lead for Unigine's next gen. He definitely knows his stuff.
Also, when laymen like us can understand the complex stuff he is talking about, that is a sure sign that he really knows what he is talking about.

I also finally understand a bit better what bottlenecks can occur on Polaris 10 with all its execution resources.
 
May 11, 2008
19,306
1,131
126
Reading sebbbi's posts over the years has taught me a great deal about GPUs and how they work, too.

Yeah, now I can more easily imagine that when a game's rendering engine uses shader algorithms that cause all these performance-limiting issues, it does not have to be a GCN design flaw. The developer may simply never have taken the time (probably because there was no time left, or from lack of experience) to profile their rendering techniques so they run properly on the GCN architecture.

I am going to look for more interesting posts like the ones sebbbi has written.
 
May 11, 2008
19,306
1,131
126
It has taken me some time to read the document. Anyone interested in GCN should read it. It basically summarizes a lot of sebbbi's posts.

I still do not fully grasp what they mean on every page, but I get the idea a bit.

I was reading through all that material and ran into words that I could not recognize, so doing some online searching brought up a lot of stuff. Good reggae dub in the background for easy listening.
But yeah, I am traversing the Beyond3D forums a lot these days. And it really is bliss to read how both current architectures from Nvidia and AMD are presented. Really nice posts about what the strong points of Pascal are, and more plausible information about how Nvidia may use tiled rendering in the ROPs and gain serious advantages over the method used in the GCN architecture. I cannot confirm whether it is all true or not, but some benchmarks seem to show that some games use these strong points, which seems to line up with the article from David Kanter at RWT. At the least it is very interesting to read and to think about. It really is bliss to read posts from people who actually work in the field of developing rendering engines, explaining what they run into and how it can be solved while covering both architectures. No overly present fanboyism, just open, honest discussions.
 

hrga225

Member
Jan 15, 2016
81
6
11
I still do not fully grasp what they mean on every page, but I get the idea a bit.

I was reading through all that material and ran into words that I could not recognize, so doing some online searching brought up a lot of stuff. Good reggae dub in the background for easy listening.
But yeah, I am traversing the Beyond3D forums a lot these days. And it really is bliss to read how both current architectures from Nvidia and AMD are presented. Really nice posts about what the strong points of Pascal are, and more plausible information about how Nvidia may use tiled rendering in the ROPs and gain serious advantages over the method used in the GCN architecture. I cannot confirm whether it is all true or not, but some benchmarks seem to show that some games use these strong points, which seems to line up with the article from David Kanter at RWT. At the least it is very interesting to read and to think about. It really is bliss to read posts from people who actually work in the field of developing rendering engines, explaining what they run into and how it can be solved while covering both architectures. No overly present fanboyism, just open, honest discussions.
It is a shame that Nvidia does not share architecture details the way AMD does. There are guys at Beyond3D who do a lot of CUDA programming. It would be so much fun having someone like sebbbi explaining Nvidia architectures and programming techniques, as he does for GCN.
 
May 11, 2008
19,306
1,131
126
It is a shame that Nvidia does not share architecture details the way AMD does. There are guys at Beyond3D who do a lot of CUDA programming. It would be so much fun having someone like sebbbi explaining Nvidia architectures and programming techniques, as he does for GCN.

Yes, indeed. :)

I did do some reading and found a link from hardware.fr suggesting that Nvidia does not do tile-based rendering in the way David Kanter proposed, but rather that Nvidia is extremely good at sorting the graphics data to keep everything in the local caches on the chip as much as possible, since the data needs to be sorted to get maximum efficiency and utilization out of the several GPCs.

Original :
http://www.hardware.fr/news/14729/tile-rendering-maxwell-pascal.html

Translated to Engrish :
https://translate.googleusercontent...l.html&usg=ALkJrhgvRAsRK_7esvJrts_EmZe7b_fe3g


edit:

This is the thread :
https://forum.beyond3d.com/threads/tile-based-rasterization-in-nvidia-gpus.58296/page-2

It is a bit off topic for this thread, so I think I will keep it all in this post unless I find something interesting that may reveal that AMD might do something similar in Vega.

next edit :

The differences between immediate-mode rendering, tile-based rendering and tile-based deferred rendering in a nutshell :

https://translate.googleusercontent...r.html&usg=ALkJrhj0G1AyR39Ju86HQNVZwghbKQ3ggA
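
To make the three terms concrete for myself, I hacked up a little toy in C++ that just counts fragment-shader work for a stack of overlapping triangles. The Tri type, the single tile, and the counting are all invented for illustration; this is only the shape of the idea from the linked article, not how any real GPU works.

```cpp
// Toy comparison: fragment-shader invocations for overlapping triangles.
#include <cstdio>
#include <vector>

struct Tri { int depth; };   // toy triangle that covers the whole tile

// Immediate mode: every triangle is shaded as it arrives, visible or not,
// and every pixel touch is a round trip to the framebuffer in DRAM.
int immediate(const std::vector<Tri>& tris) {
    return static_cast<int>(tris.size());        // N shades: full overdraw
}

// Tile-based: triangles are binned per screen tile first. Shading cost is
// the same, but the tile's color/depth buffer stays in on-chip memory,
// so the overdraw burns far less external bandwidth.
int tile_based(const std::vector<Tri>& bin) {
    return static_cast<int>(bin.size());         // N shades, cheap bandwidth
}

// Tile-based deferred: within the tile, visibility is resolved before
// shading, so only the front-most surface is ever shaded.
int tile_deferred(const std::vector<Tri>& bin) {
    return bin.empty() ? 0 : 1;                  // 1 shade: no overdraw
}

int main() {
    std::vector<Tri> bin = {{5}, {3}, {1}, {4}}; // four overlapping triangles
    std::printf("immediate:     %d shades, all through DRAM\n", immediate(bin));
    std::printf("tile-based:    %d shades, framebuffer on-chip\n", tile_based(bin));
    std::printf("tile-deferred: %d shade\n", tile_deferred(bin));
}
```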
 

Glo.

Diamond Member
Apr 25, 2015
5,661
4,420
136
I still do not fully grasp what they mean on every page, but I get the idea a bit.

I was reading through all that material and ran into words that I could not recognize, so doing some online searching brought up a lot of stuff. Good reggae dub in the background for easy listening.
But yeah, I am traversing the Beyond3D forums a lot these days. And it really is bliss to read how both current architectures from Nvidia and AMD are presented. Really nice posts about what the strong points of Pascal are, and more plausible information about how Nvidia may use tiled rendering in the ROPs and gain serious advantages over the method used in the GCN architecture. I cannot confirm whether it is all true or not, but some benchmarks seem to show that some games use these strong points, which seems to line up with the article from David Kanter at RWT. At the least it is very interesting to read and to think about. It really is bliss to read posts from people who actually work in the field of developing rendering engines, explaining what they run into and how it can be solved while covering both architectures. No overly present fanboyism, just open, honest discussions.
Both architectures have their advantages. The biggest difference: programming a game for Nvidia hardware is a piece of cake compared to GCN. Nvidia did a great job simplifying programming for its architecture.
 

Glo.

Diamond Member
Apr 25, 2015
5,661
4,420
136
edit:

This is the thread :
https://forum.beyond3d.com/threads/tile-based-rasterization-in-nvidia-gpus.58296/page-2

It is a bit off topic for this thread, so I think I will keep it all in this post unless I find something interesting that may reveal that AMD might do something similar in Vega.

next edit :

The differences between immediate-mode rendering, tile-based rendering and tile-based deferred rendering in a nutshell :

https://translate.googleusercontent...r.html&usg=ALkJrhj0G1AyR39Ju86HQNVZwghbKQ3ggA
Actually, the patent about variable wavefront length may indicate we will see something like tile-based rasterization.

It can be one of the uses of the patent itself. It would, however, require redesigning the caches, the nature of the Shader Engines, and the scheduling in the GPU. All of this would explain why it could be IPv9 rather than, for example, IPv8.3. But this is just my theorizing about the design.
 
May 11, 2008
19,306
1,131
126
I do wonder, if AMD goes for the brute-force method with HBM2, how much of the tile-based rendering idea they will use. HBM2 has an insane amount of bandwidth. That would kind of offset some of the overdraw disadvantages (memory accesses) they may have compared to the Nvidia Pascal/Maxwell method. And I read that tile-based rendering becomes more calculation-heavy as detail increases, e.g. at 4K.
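
Some quick back-of-the-envelope math on that, with my own assumed numbers (4K at 60 fps, 32-bit color plus 32-bit depth per pixel, framebuffer traffic only, no texturing or blending), so treat it as a sanity check rather than real data:

```cpp
// How overdraw scales raw framebuffer traffic at 4K (illustrative only).
#include <cstdio>

int main() {
    const double pixels   = 3840.0 * 2160.0; // 4K frame
    const double fps      = 60.0;
    const double bytes_px = 4.0 + 4.0;       // 32-bit color + 32-bit depth

    for (int overdraw = 1; overdraw <= 4; ++overdraw) {
        double gb_s = pixels * fps * bytes_px * overdraw / 1e9;
        std::printf("overdraw %dx -> ~%.1f GB/s of framebuffer traffic\n",
                    overdraw, gb_s);
    }
    // Real fragments also fetch textures, so the true cost of overdraw is a
    // multiple of this; the point is that ~500 GB/s of HBM2 leaves far more
    // headroom for it than the ~320 GB/s of GDDR5X on a GTX 1080.
    return 0;
}
```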
 

Glo.

Diamond Member
Apr 25, 2015
5,661
4,420
136
IPv9 = Vega, right? And IPv8.3 = Polaris 10/11?
IPv9 = Vega.
IPv8.1 = Polaris.
Here more: http://videocardz.com/62250/amd-vega10-and-vega11-gpus-spotted-in-opencl-driver

Raven Ridge APU lineup is the same family of GPUs as Vega.
I do wonder, if AMD goes for the brute-force method with HBM2, how much of the tile-based rendering idea they will use. HBM2 has an insane amount of bandwidth. That would kind of offset some of the overdraw disadvantages (memory accesses) they may have compared to the Nvidia Pascal/Maxwell method. And I read that tile-based rendering becomes more calculation-heavy as detail increases, e.g. at 4K.
For that they have Asynchronous Compute and loads of compute performance on their GPUs. And also the possibility of FP16 at 2x the rate of FP32.
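
To illustrate the 2x FP16 claim: plain C++ has no half-precision type, so the sketch below fakes the idea with two 16-bit integer lanes packed into one 32-bit word. The pack/add helpers are invented for the example; the real hardware, if the rumours hold, would do this with two genuine FP16 values per 32-bit register lane.

```cpp
// Two 16-bit lanes in one 32-bit word: an integer analogy for packed FP16.
#include <cstdint>
#include <cstdio>

// Pack two 16-bit values into one 32-bit register-sized word.
uint32_t pack(uint16_t lo, uint16_t hi) { return lo | (uint32_t(hi) << 16); }

// One "instruction" that produces two results: each lane is extracted,
// added, and masked separately, so a carry in the low half cannot leak
// into the high half (real FP16 lanes are separate by construction).
uint32_t add_packed(uint32_t a, uint32_t b) {
    uint32_t lo = ((a & 0xFFFFu) + (b & 0xFFFFu)) & 0xFFFFu;
    uint32_t hi = ((a >> 16) + (b >> 16)) & 0xFFFFu;
    return lo | (hi << 16);
}

int main() {
    uint32_t a = pack(100, 200), b = pack(1, 2);
    uint32_t c = add_packed(a, b);
    std::printf("lanes: %u and %u\n",
                (unsigned)(c & 0xFFFFu), (unsigned)(c >> 16)); // 101 and 202
    // Two results per 32-bit operation is the whole 2x throughput claim,
    // useful in shaders where half precision is enough (color, normals),
    // not where it is not (vertex positions, depth).
    return 0;
}
```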
 
May 11, 2008
19,306
1,131
126
IPv9 = Vega.
IPv8.1 = Polaris.
Here more: http://videocardz.com/62250/amd-vega10-and-vega11-gpus-spotted-in-opencl-driver

Raven Ridge APU lineup is the same family of GPUs as Vega.

For that they have Asynchronous Compute and loads of compute performance on their GPUs. And also, the possiblity of FP16 at 2 the rate of FP32.

When thinking of tile-based rendering and sorting, I always think of a pure hardware solution; that is my flaw. But yes indeed, it can also be a partial hardware solution that benefits greatly from speed, combined with a software solution that benefits greatly from the computational power of the GPU and the flexibility of using software. Also, doing the difficult, resolution-dependent (scalable) part in software would allow algorithms that can also give feedback to the game engine, which could lead to new special effects. I can remember some game engines do some culling to reduce overdraw already.
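
For instance, the most basic software culling pass looks something like this toy back-face test in screen space. The vertex data and the counter-clockwise front-face convention are just my assumptions for the example; everything such a pass rejects is overdraw the GPU never has to pay for.

```cpp
// Toy software culling pass: reject back-facing triangles before submission.
#include <cstdio>
#include <vector>

struct Vec2 { float x, y; };
struct Tri  { Vec2 a, b, c; };

// Signed area in screen space: <= 0 means the triangle is back-facing
// (or degenerate) under a counter-clockwise front-face convention.
bool front_facing(const Tri& t) {
    float area = (t.b.x - t.a.x) * (t.c.y - t.a.y)
               - (t.c.x - t.a.x) * (t.b.y - t.a.y);
    return area > 0.0f;
}

int main() {
    std::vector<Tri> tris = {
        {{0,0},{1,0},{0,1}},   // counter-clockwise -> kept
        {{0,0},{0,1},{1,0}},   // clockwise -> culled
    };
    int kept = 0;
    for (const Tri& t : tris) kept += front_facing(t);
    std::printf("%d of %zu triangles survive the cull\n", kept, tris.size());
}
```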
 

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
Silly question - is the PS4 Pro using the same generation design as the PS4 or a Polaris chip??
 
May 11, 2008
19,306
1,131
126
Silly question - is the PS4 Pro using the same generation design as the PS4 or a Polaris chip??

If I am not mistaken, the PS4 Pro will use a custom version that looks like Polaris but has some additional features that Sony requested. These features are very likely not present in the current (PC) Polaris versions.
 

Glo.

Diamond Member
Apr 25, 2015
5,661
4,420
136
Silly question - is the PS4 Pro using the same generation design as the PS4 or a Polaris chip??
Polaris is binary-compatible with previous generations of GCN. Vega might not be, and would require an abstraction layer (drivers) to make it compatible with previous generations of GPUs. This is all about gaming.
When thinking of tile-based rendering and sorting, I always think of a pure hardware solution; that is my flaw. But yes indeed, it can also be a partial hardware solution that benefits greatly from speed, combined with a software solution that benefits greatly from the computational power of the GPU and the flexibility of using software. Also, doing the difficult, resolution-dependent (scalable) part in software would allow algorithms that can also give feedback to the game engine, which could lead to new special effects. I can remember some game engines do some culling to reduce overdraw already.

The problem here comes from the nature of the APIs. DX12 and the other low-level APIs do not talk through a thick driver layer; between the game and the GPU there is mostly only the API. There is not that much room for improvement from drivers, however ridiculous that may sound when you look at the performance improvements with each driver release.

What this means is that a software solution for a hardware capability would be like adding static scheduling to an architecture that is supposed to manage itself. There has to be hardware support, and the hardware must be capable of immediate mode on top of tile-based rasterization.
 
May 11, 2008
19,306
1,131
126
Polaris is binary-compatible with previous generations of GCN. Vega might not be, and would require an abstraction layer (drivers) to make it compatible with previous generations of GPUs. This is all about gaming.


The problem here comes from the nature of the APIs. DX12 and the other low-level APIs do not talk through a thick driver layer; between the game and the GPU there is mostly only the API. There is not that much room for improvement from drivers, however ridiculous that may sound when you look at the performance improvements with each driver release.

What this means is that a software solution for a hardware capability would be like adding static scheduling to an architecture that is supposed to manage itself. There has to be hardware support, and the hardware must be capable of immediate mode on top of tile-based rasterization.

Well, by software solutions I mean the shader programs that the rendering engine executes on the GPU. Instead of a pure hardware solution that is fixed and tied to a GPU generation, part would be in hardware, to be power-efficient and fast, and part would run on the programmable shader units, taking advantage of their flexibility.
I would not be surprised if Nvidia already has such an approach.
And with low-level APIs like DX12 and Vulkan, I think we will see more of this from all GPU vendors.
Now the game engine can have more or less direct contact with the GPU, with less driver overhead. Thus I kind of expect some very sophisticated and novel approaches from game engine developers in the near future.
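
A toy illustration of that driver-overhead point (the classes are invented stand-ins, not any real API): in the old model the driver validates on every draw call, while in the DX12/Vulkan model the work is recorded and validated once and then cheaply replayed each frame.

```cpp
// Toy contrast: per-draw validation vs record-once-replay command buffers.
#include <cstdio>
#include <vector>

struct Draw { int mesh; };

// "Old" model: validation cost is paid inside every call, every frame.
void draw_immediate(const Draw& d) {
    // pretend expensive state validation + hazard tracking happens here
    std::printf("validate + draw mesh %d\n", d.mesh);
}

// "New" model: validation happens once at record time; replay is a dumb loop.
struct CommandBuffer {
    std::vector<Draw> cmds;
    void record(Draw d) { /* validate once here */ cmds.push_back(d); }
    void submit() const {
        for (const Draw& d : cmds) std::printf("draw mesh %d\n", d.mesh);
    }
};

int main() {
    CommandBuffer cb;
    for (int i = 0; i < 3; ++i) cb.record({i});  // built once, up front
    for (int frame = 0; frame < 2; ++frame)      // replayed every frame
        cb.submit();
}
```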

As a side note :
I think a good idea for me is to read up on how PowerVR had difficulties long ago with implementing hardware T&L when the Kyro II was out. It was way faster, but then Nvidia and ATi came out with hardware T&L and the Kyro II lost its ground. Of course this is no longer an issue, but it sure is interesting to do some catching up.