This is all just peachy, but I really want to hear some legit info about Vega dGPU, including some percentage of real specs, ballpark costs, and release date. Last good info we heard was that we should be getting some news, at least, before the end of this month?
Isn't Vega not coming out until 1Q next year now? Maybe they will demo it alongside Zen at CES.
AMD has 2 options and neither of them is good on paper:
1) Launch Vega 10 ASAP for $650 and trounce the 1080, but then NV releases 2070/2080 GP104 refreshes, GP102, and a full 3840 CC Titan XP Black. This would steal AMD's thunder in a hurry. They would then be forced to either drop prices, making early adopters eat the loss, or keep prices and end up getting outsold like crazy, as we saw with the 980 Ti vs. Fury X.
2) Wait until NV launches the refreshes and GP102/Titan XP Black and then undercut them in every segment. The issue is the perception of an inferior brand priced lower, and as we know most consumers will pay huge premiums for similar or slower NV hardware (e.g. the $299 280X vs. the $399-449 770 2-4GB, the $399 290 vs. the $499 780, the $549 290X vs. the $699 780 Ti).
There are really only two ways Vega can disrupt the market. The first is if it flat out beats anything NV has in 2017. The second is if it massively undercuts the 1080 Ti/2080 Ti GP102: if AMD offers Titan XP-level performance for $549, a 1080 Ti suddenly looks stupid if NV launches it at $749-799.
I think AMD almost has to launch its product first, since it seems that no matter what they do, NV always has a response.
Sadly, it seems financially better to put your crappy software extensions into games and engines, reducing performance for everyone but much less on your own hardware than on your competitor's. I like the idea of another player in the market actively pushing new high-performance GPU hardware techniques.
Sebbbi was a lead console engine programmer at Ubisoft (on one of their engines, I don't know which one). He is now a lead for Unigine's next gen. He definitely knows his stuff.
Also, when laymen like us can understand the complex stuff he is talking about, that is a straightforward sign that he really knows what he is talking about.
Reading sebbbi's posts over the years has taught me a great deal about GPUs and how they work, too. I also finally understand a bit better what bottlenecks can occur on Polaris 10 with all its execution resources.
It has taken me some time to read the document. Anyone interested in GCN should read it; it basically summarizes a lot of sebbbi's posts. I found the ppt (ppsx) where all the graphical pictures are from:
http://amd-dev.wpengine.netdna-cdn....012/10/GCN-Performance-FTW-Stephan-Hodes.ppsx
😕
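As a taste of the kind of GCN reasoning that deck and sebbbi's posts walk through, here is a tiny occupancy calculator. It is only a sketch built on the commonly published GCN limits (at most 10 wavefronts per SIMD, 256 VGPRs per lane shared by the resident waves, 4-register allocation granularity) and it ignores the SGPR and LDS limits a real compiler also applies:

```python
# Back-of-the-envelope GCN occupancy from VGPR usage, using the
# commonly published limits: 4 SIMDs per CU, at most 10 wavefronts
# resident per SIMD, 256 VGPRs per lane shared by those waves,
# VGPRs allocated in granules of 4.

MAX_WAVES_PER_SIMD = 10
VGPRS_PER_SIMD = 256
VGPR_GRANULE = 4

def occupancy(vgprs_per_thread: int) -> int:
    """Wavefronts resident per SIMD for a given VGPR count
    (ignoring the SGPR and LDS limits a real compiler also checks)."""
    granules = -(-vgprs_per_thread // VGPR_GRANULE) * VGPR_GRANULE
    return min(MAX_WAVES_PER_SIMD, VGPRS_PER_SIMD // granules)

for v in (24, 32, 48, 64, 84, 128):
    print(f"{v:3d} VGPRs -> {occupancy(v):2d} waves/SIMD")
# e.g. going from 64 to 84 VGPRs drops occupancy from 4 to 3 waves
# per SIMD, which reduces the latency hiding the shader gets for free.
```

The point of the game: every extra wavefront a SIMD can keep resident is extra latency hiding, so shaving register use below the next granularity step can be worth real performance.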
It is a shame that Nvidia does not share architecture details the way AMD does. There are guys at beyond3d who do a lot of CUDA programming; it would be so much fun to have someone like sebbbi explaining Nvidia's architectures and programming techniques the way he does for GCN. I still do not fully grasp what they mean on every page, but I get the idea a bit.
I was reading through all that material and ran into words I could not recognize, so some online searching brought up a lot of stuff. Good reggae dub in the background for easy listening.
But yeah, I am traversing the beyond3d forums a lot these days, and it really is bliss to read how both current architectures from Nvidia and AMD are presented. There are really nice posts about the strong points of Pascal, and plausible information about how Nvidia may use tiled rendering in the ROPs and gain serious advantages over the method used in the GCN architecture. I cannot confirm whether it is all true, but some benchmarks seem to show games hitting these strong points, and it lines up with the article from David Kanter at RWT. At the very least it is interesting to read and think about. It is also bliss to read posts from people who actually develop rendering engines, explaining what they run into and how it can be solved on both architectures. No overly present fanboyism, just open, honest discussion.
Both architectures have their advantages. The biggest difference: programming a game for Nvidia hardware is a piece of cake compared to GCN. Nvidia did a great job simplifying programming for its architecture.
Actually, the patent about variable wavefront length may indicate we will see something like tile-based rasterization.
edit: This is the thread:
https://forum.beyond3d.com/threads/tile-based-rasterization-in-nvidia-gpus.58296/page-2
It is a bit off-topic for this thread, so I think I will keep it all in this post unless I find something interesting suggesting AMD might do something similar in Vega.
next edit:
The differences between immediate-mode rendering, tile-based rendering, and tile-based deferred rendering in a nutshell:
https://translate.googleusercontent...r.html&usg=ALkJrhj0G1AyR39Ju86HQNVZwghbKQ3ggA
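To make the difference concrete, here is a toy CPU-side sketch of the contrast (a minimal illustration, assuming rectangles instead of real triangles and made-up memory-op counters; none of this is actual driver or hardware behavior). An immediate-mode renderer pays framebuffer traffic for every covered pixel of every triangle, while a tile-based one bins geometry first and writes each tile out only once; a deferred tiler (TBDR) would additionally shade each pixel only once, after visibility within the tile is resolved.

```python
# Toy contrast between immediate-mode rendering (IMR) and tile-based
# rendering (TBR). "Triangles" are axis-aligned rectangles here; real
# rasterization is irrelevant to the memory-traffic point.

W, H, TILE = 64, 64, 16

# Each "triangle" is (x0, y0, x1, y1, depth, color).
triangles = [
    (2, 2, 40, 40, 0.8, 1),
    (10, 10, 60, 60, 0.4, 2),   # closer, overdraws most of the first one
]

def immediate_mode(tris):
    """IMR: process triangles in submission order against the full
    framebuffer in (slow, far) memory. Every covered pixel of every
    triangle costs a depth read plus possible depth/color writes."""
    depth = [[1.0] * W for _ in range(H)]
    color = [[0] * W for _ in range(H)]
    mem_ops = 0
    for x0, y0, x1, y1, z, c in tris:
        for y in range(y0, y1):
            for x in range(x0, x1):
                mem_ops += 1                   # depth read from DRAM
                if z < depth[y][x]:
                    depth[y][x], color[y][x] = z, c
                    mem_ops += 2               # depth + color write to DRAM
    return color, mem_ops

def tile_based(tris):
    """TBR: first bin triangles into tiles, then shade one tile at a
    time in a small on-chip buffer. Only the final tile contents are
    written out, so overdraw never reaches DRAM."""
    bins = {}
    for t in tris:
        x0, y0, x1, y1 = t[:4]
        for ty in range(y0 // TILE, (y1 - 1) // TILE + 1):
            for tx in range(x0 // TILE, (x1 - 1) // TILE + 1):
                bins.setdefault((tx, ty), []).append(t)
    color = [[0] * W for _ in range(H)]
    mem_ops = 0
    for (tx, ty), tile_tris in bins.items():
        # on-chip tile buffer: reads/writes here are "free" (SRAM)
        tdepth = [[1.0] * TILE for _ in range(TILE)]
        tcolor = [[0] * TILE for _ in range(TILE)]
        for x0, y0, x1, y1, z, c in tile_tris:
            for y in range(max(y0, ty * TILE), min(y1, (ty + 1) * TILE)):
                for x in range(max(x0, tx * TILE), min(x1, (tx + 1) * TILE)):
                    if z < tdepth[y - ty * TILE][x - tx * TILE]:
                        tdepth[y - ty * TILE][x - tx * TILE] = z
                        tcolor[y - ty * TILE][x - tx * TILE] = c
        # resolve: one DRAM write per tile pixel, regardless of overdraw
        for y in range(TILE):
            for x in range(TILE):
                color[ty * TILE + y][tx * TILE + x] = tcolor[y][x]
                mem_ops += 1
    return color, mem_ops

_, imr_ops = immediate_mode(triangles)
_, tbr_ops = tile_based(triangles)
print(f"IMR DRAM ops: {imr_ops}, TBR DRAM ops: {tbr_ops}")
```

With the two overlapping rectangles above, the immediate path counts 11,832 DRAM operations against 4,096 for the tiled path; the gap is the overdraw and depth traffic that stays on-chip in a tiler.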
IPv9 = Vega, right? And IPv8.3 = Polaris 10/11?
I do wonder, if AMD goes for the brute force method with HBM2, how much of a tile-based rendering idea they will use. HBM2 has an insane amount of bandwidth, which would offset some of the overdraw disadvantages (memory accesses) they may have compared to the Nvidia Pascal/Maxwell method. And I read that tile-based rendering becomes more calculation-heavy as detail increases: 4K.
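To put a rough number on the brute-force idea, here is a back-of-the-envelope sketch. Every figure in it is an illustrative assumption (the overdraw factor, the buffer formats, the ~512 GB/s figure for two HBM2 stacks), not a measurement:

```python
# Rough framebuffer traffic caused by overdraw at 4K.
# All numbers are illustrative assumptions, not measurements.

width, height = 3840, 2160        # 4K render target
bytes_per_pixel = 4 + 4           # 32-bit color + 32-bit depth
overdraw = 3.0                    # each pixel touched ~3x on average
fps = 60

pixels = width * height
per_frame = pixels * bytes_per_pixel * overdraw          # bytes/frame
per_second = per_frame * fps / 1e9                       # GB/s

hbm2_budget = 512                 # GB/s, ballpark for two HBM2 stacks
print(f"overdraw framebuffer traffic: ~{per_second:.1f} GB/s "
      f"({per_second / hbm2_budget:.1%} of a {hbm2_budget} GB/s budget)")
# A tiler keeps most of the repeated depth/color traffic on-chip;
# brute bandwidth just absorbs it. Texture fetches and shading reads,
# which usually dominate real workloads, are ignored here.
```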
IPv9 = Vega.
IPv8.1 = Polaris.
More here: http://videocardz.com/62250/amd-vega10-and-vega11-gpus-spotted-in-opencl-driver
The Raven Ridge APU lineup is in the same GPU family as Vega.
For that they have Asynchronous Compute and loads of compute performance on their GPUs, and also the possibility of FP16 at twice the rate of FP32.
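That "FP16 at twice the rate" is packed math: two 16-bit values share one 32-bit register lane, so a single instruction can operate on both halves. A minimal CPU-side illustration of the layout (numpy here only demonstrates the bit packing; it is not GPU code):

```python
import numpy as np

# Two FP16 values occupy the same 32 bits as a single FP32 value,
# which is why an ALU lane can process a packed pair per clock.
a = np.float16(1.5)
b = np.float16(-2.25)

packed = np.frombuffer(np.array([a, b], dtype=np.float16).tobytes(),
                       dtype=np.uint32)[0]
print(f"packed pair: 0x{packed:08x}")           # one 32-bit word

# Unpack and verify: both halves survive the round trip.
pair = np.frombuffer(np.uint32(packed).tobytes(), dtype=np.float16)
print(pair)                                      # [ 1.5  -2.25]

# A "2x rate" vector op is then one instruction over the packed halves,
# conceptually what a packed FP16 multiply does per lane:
xs = np.arange(8, dtype=np.float16)
print(xs * np.float16(2.0))
```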
Silly question - is the PS4 Pro using the same generation of design as the PS4, or a Polaris chip?
When thinking of tile-based rendering and sorting, I always think of a pure hardware solution; that is my flaw. But yes indeed, it can also be a partial hardware solution that benefits greatly from speed, combined with a software solution that benefits from the computation power of the GPU and the flexibility of software. Also, doing the difficult, resolution-dependent (scalable) part in software would allow algorithms that can feed information back to the game engine, which could lead to new special effects. I remember some game engines already do culling to reduce overdraw.
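Something in the spirit of that idea, sketched in a few lines. The rectangle "objects", the full-tile occlusion test, and the feedback set are all invented for illustration; a real engine would use projected bounding volumes and a hierarchical Z buffer:

```python
# Toy software binning/culling pre-pass with feedback to the "engine".
# Objects are screen-space rectangles (x0, y0, x1, y1, depth).

TILE = 16                          # 64x64 screen split into 16px tiles

objects = {
    "terrain":  (0, 0, 64, 64, 0.9),
    "building": (8, 8, 40, 40, 0.5),
    "npc":      (18, 18, 30, 30, 0.7),   # sits entirely behind the building
}

def covers_tile(rect, tx, ty):
    """True if the rectangle fully covers tile (tx, ty)."""
    x0, y0, x1, y1, _ = rect
    return (x0 <= tx * TILE and y0 <= ty * TILE and
            x1 >= (tx + 1) * TILE and y1 >= (ty + 1) * TILE)

def bin_and_cull(objs):
    bins = {}          # tile -> draw list, nearest first
    occluded = set()   # feedback: engine could drop animation, LOD, etc.
    for name, (x0, y0, x1, y1, z) in sorted(objs.items(),
                                            key=lambda kv: kv[1][4]):
        visible_somewhere = False
        for ty in range(y0 // TILE, (y1 - 1) // TILE + 1):
            for tx in range(x0 // TILE, (x1 - 1) // TILE + 1):
                tile = bins.setdefault((tx, ty), [])
                # crude occlusion: a nearer object already covering the
                # whole tile hides everything behind it in that tile
                if any(covers_tile(objs[n], tx, ty) and objs[n][4] < z
                       for n in tile):
                    continue
                tile.append(name)
                visible_somewhere = True
        if not visible_somewhere:
            occluded.add(name)
    return bins, occluded

bins, occluded = bin_and_cull(objects)
print("fully occluded:", occluded)              # engine-side feedback
print("tile (0,0) draw list:", bins[(0, 0)])
```

Here the "npc" never survives binning, and that fact flows back out of the renderer as data the engine could act on, which is the flexibility a software stage adds over a fixed hardware tiler.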
Polaris is binary compatible with previous generations of GCN. Vega might not be, and would require an abstraction layer (drivers) to make it compatible with previous GPU generations. This is all about gaming.
The problem here comes with the nature of APIs. DX12 and other low-level APIs do not talk directly with drivers; between the GPU and the game there is mostly only the API. There is not that much room for improvement from drivers, however ridiculous that may sound when you look at the performance improvements with each driver release.
What this means is that a software solution for hardware capabilities would be like adding static scheduling to an architecture that is supposed to manage itself. There has to be hardware support, and the hardware must be capable of immediate mode on top of tile-based rasterization.