
videocardz: AMD’s official GPU Roadmap for 2016-2018

Page 3 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.
If the leaked die sizes are correct, the GP104 is 350 mm² and Polaris 10 is 232 mm². Even if we assume Polaris 10 is closer to 250 mm² on the same process as the GP104, I really think it would have to be revolutionary to catch a GTX 1070, let alone a GTX 1080! Having a die 40% larger is a big advantage in Nvidia's favour.

I personally think it will land around R9 390X speed, maybe doing better in DX12 games and going past a Fury, but with a GTX 960 class TDP.
 
If the leaked die sizes are correct, the GP104 is 350 mm² and Polaris 10 is 232 mm². Even if we assume Polaris 10 is closer to 250 mm² on the same process as the GP104, I really think it would have to be revolutionary to catch a GTX 1070, let alone a GTX 1080! Having a die 40% larger is a big advantage in Nvidia's favour.

I personally think it will land around R9 390X speed, maybe doing better in DX12 games and going past a Fury, but with a GTX 960 class TDP.
And you tie die size and process to performance based on what logic?
 
If the leaked die sizes are correct, the GP104 is 350 mm² and Polaris 10 is 232 mm². Even if we assume Polaris 10 is closer to 250 mm² on the same process as the GP104, I really think it would have to be revolutionary to catch a GTX 1070, let alone a GTX 1080! Having a die 40% larger is a big advantage in Nvidia's favour.

I personally think it will land around R9 390X speed, maybe doing better in DX12 games and going past a Fury, but with a GTX 960 class TDP.

My estimate for GP104 is 316 mm², based on the 37.5 × 37.5 mm BGA packages next to it. That is to the nearest hundredth of a pixel.
 
And you tie die size and process to performance based on what logic?

So explain to me how AMD, with a much smaller chip this generation, is supposed to match or beat a significantly larger Nvidia chip while still having decent efficiency?

The last time we had anything like that was Hawaii against a fully enabled GK110, and AMD had worse efficiency.

People are going to hype this up like Fiji and then end up disappointed by their own unreasonable expectations.

Even AMD said they wanted to "lower the cost" of a VR-capable graphics card, NOT make the fastest one.

That clearly indicates a certain minimum level of performance. This is why plenty of people expect R9 390 to Fury level performance on average.

But each to their own and all that malarkey. After all, this is a rumours and speculation thread, and none of it means anything until the launch happens! I remember nobody predicted the HD 4870's GDDR5 correctly, and it was a bit of a shock at launch, so I am happy to be wrong! 🙂


My estimate for GP104 is 316 mm², based on the 37.5 × 37.5 mm BGA packages next to it. That is to the nearest hundredth of a pixel.

My bad, supposedly 332 mm² here:

http://wccftech.com/nvidia-pascal-gp104-gpu-pictured-leaked/

But even at 316 mm², that makes the GP104 around 25% bigger once the process nodes are equalised.
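As a back-of-envelope check on that ~25% figure (a sketch only: the 316 mm² and 232 mm² values are the leaked estimates discussed above, and the ~8% density factor between the 14 nm and 16 nm nodes is purely a hypothetical assumption, not a published number):

```python
# Back-of-envelope die-size comparison. All figures are rumoured, not confirmed.
gp104_mm2 = 316.0      # leaked GP104 estimate (TSMC 16 nm)
polaris10_mm2 = 232.0  # leaked Polaris 10 estimate (GloFo 14 nm)

# Hypothetical density advantage of 14 nm over 16 nm, used to "equalise"
# the two nodes; the real figure is unknown.
density_factor = 1.08

polaris10_equalised = polaris10_mm2 * density_factor   # ~250 mm²
ratio = gp104_mm2 / polaris10_equalised
print(f"GP104 would be ~{(ratio - 1) * 100:.0f}% larger after equalising nodes")
```

With those assumed inputs the ratio comes out near the "around 25% bigger" claim, but it is entirely driven by the assumed density factor.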
 
What do you mean? Did you come from the future or something? Why did you use "is"? Have you seen any benchmark of Pascal or Polaris?
We don't know anything about them.
The only thing we know is that 14nm is much more efficient than 16nm (almost 1.9x), and that's all.
 
What do you mean? Did you come from the future or something? Why did you use "is"? Have you seen any benchmark of Pascal or Polaris?
We don't know anything about them.
The only thing we know is that 14nm is much more efficient than 16nm (almost 1.9x), and that's all.

AMD already said what level of performance they are targeting - you might want to ask them.

Plus, all of these threads are based on speculation and rumours - this might be your first rodeo in one, so if you get annoyed by people speculating, it's best not to post and to wait until launch. Nobody got the HD 4000 series right in 2008; it can happen.

Edit to post.

This pretty much sums up speculation and rumour threads in general when it comes to electronics:

https://www.youtube.com/watch?v=6lHgbbM9pu4

😛
 
My estimate for GP104 is 316 mm², based on the 37.5 × 37.5 mm BGA packages next to it. That is to the nearest hundredth of a pixel.

What other BGA packages, the GM200 ones?

I got ~330-340 mm² as well, comparing it to the 12 mm × 14 mm GDDR5 modules. That picture's pretty fuzzy, though.
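The photo-measurement approach both estimates rely on can be sketched like this (an illustration only; the pixel counts below are invented for the example, not taken from the actual leaked picture):

```python
# Estimate a die's physical size from a photo, using an object of known
# dimensions lying in the same image plane as a scale reference, e.g. a
# 37.5 mm x 37.5 mm BGA package or a 12 mm x 14 mm GDDR5 module.
def die_area_mm2(ref_size_mm, ref_size_px, die_w_px, die_h_px):
    mm_per_px = ref_size_mm / ref_size_px        # scale factor from the reference
    return (die_w_px * mm_per_px) * (die_h_px * mm_per_px)

# Hypothetical pixel measurements (NOT from the real leak):
area = die_area_mm2(ref_size_mm=37.5, ref_size_px=750.0,
                    die_w_px=360.0, die_h_px=355.0)
print(f"~{area:.0f} mm^2")
```

The method is only as good as the photo: perspective distortion and a fuzzy image (as noted above) directly translate into error in the pixel measurements.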
 
What other BGA packages, the GM200 ones?

I got ~330-340 mm² as well, comparing it to the 12 mm × 14 mm GDDR5 modules.

Excuse me, I meant the 37.5 × 37.5 mm BGA package on which it resides.

 
AMD already said what level of performance they are targeting - you might want to ask them.

Plus, all of these threads are based on speculation and rumours - this might be your first rodeo in one, so if you get annoyed by people speculating, it's best not to post and to wait until launch. Nobody got the HD 4000 series right in 2008; it can happen.

Edit to post.

This pretty much sums up speculation and rumour threads in general when it comes to electronics:

https://www.youtube.com/watch?v=6lHgbbM9pu4

😛
I'll ask again, even after this video:
Have you seen any benchmark of any card vs Pascal?
 
I was thinking more about PIM (processing in memory) and what the problems would be if it were implemented in HBM.

https://en.wikipedia.org/wiki/Computational_RAM

Starting with a system with a separate CPU chip and DRAM chip(s), add small amounts of "coprocessor" computational ability to the DRAM, working within the limits of the DRAM process and adding only small amounts of area to the DRAM, to do things that would otherwise be slowed down by the narrow bottleneck between CPU and DRAM: zero-fill selected areas of memory, copy large blocks of data from one location to another, find where (if anywhere) a given byte occurs in some block of data, etc. The resulting system (the unchanged CPU chip, plus "smart DRAM" chip(s)) is at least as fast as the original system, and potentially slightly lower in cost. The cost of the small amount of extra area is expected to be more than paid back in savings in expensive test time, since there is now enough computational capability on a "smart DRAM" for a wafer full of DRAM to do most testing internally in parallel, rather than the traditional approach of fully testing one DRAM chip at a time with expensive external automatic test equipment.[1]

http://www.anandtech.com/show/9969/jedec-publishes-hbm2-specification

I wonder how it would work with hypothetical future versions of HBM when data needs to be copied from one memory region to another, because it is needed multiple times and the pointer-passing / zero-copy method is not used. With HBM there are stacks of memory, but in current versions the 128-bit data buses are not connected to each other directly.
I wonder if hypothetical future PIM-based HBM stacks would have their data buses internally connected through a multiplexer, so they could do logic operations on their own. Then the GPU would only need to intervene for stack-to-stack transfers.

It would mean logic operations would be fast within local regions of memory. When physical memory boundaries are reached, the GPU would have to copy data from one region to another, after which the memory could again do some logic operations on its own. With all the prefetching techniques that exist, I don't think this would be an issue.
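As a toy illustration of the PIM idea from the Wikipedia quote (everything here, the class and its commands, is invented for the sketch; this is not any real PIM or HBM interface):

```python
# Toy model of a "smart DRAM" stack that executes simple bulk operations
# (zero-fill, copy, find) locally, so the bulk data never has to cross the
# narrow bus to the CPU/GPU. Purely an invented illustration.
class SmartDRAMStack:
    def __init__(self, size):
        self.mem = bytearray(size)

    def zero_fill(self, start, length):
        # Done entirely inside the stack: no bus traffic for the data itself.
        self.mem[start:start + length] = bytes(length)

    def copy(self, src, dst, length):
        # Intra-stack copy. In the scenario above, stack-to-stack copies
        # would still need the GPU to intervene.
        self.mem[dst:dst + length] = self.mem[src:src + length]

    def find(self, value, start, length):
        # Search locally; only the small result crosses the bus.
        idx = self.mem.find(bytes([value]), start, start + length)
        return idx if idx != -1 else None

stack = SmartDRAMStack(1024)
stack.mem[10] = 0x42
print(stack.find(0x42, 0, 1024))   # index is found inside the stack
```

The point of the sketch is only the division of labour: bulk work stays local to one stack, and the host is needed when data must move between stacks.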

These will be interesting times.
 
I'll ask again, even after this video:
Have you seen any benchmark of any card vs Pascal?

Look at all the threads on Pascal and Polaris over the last six months, AMD's statements on market positioning, and where Nvidia is rumoured to price their new cards. Of course, from my experience of people like you over the last ten years, you will overhype a product, and once it does not reach the performance you hyped it to, there is eternal whining at ATI and AMD.

Remember Fury? Some of the biggest haters were those who hyped it excessively.

Plus, where are the videos which disprove what I am saying?

If you can't produce any, then you will just have to disagree with me and with what loads of people on the internet are thinking.

If you have insider information, then please share it.

Nobody seems to think this will beat a GTX 1080. But if you have information showing otherwise, please show us!

If it can, it will be awesome. It will be the biggest disruption to the market since the HD 4870.

Even zlatan has said it will only beat Fury in certain situations, and he has generally been accurate in what he has said.
 
Look at all the threads on Pascal and Polaris over the last six months, AMD's statements on market positioning, and where Nvidia is rumoured to price their new cards. Of course, from my experience of people like you over the last ten years, you will overhype a product, and once it does not reach the performance you hyped it to, there is eternal whining at ATI and AMD.

Remember Fury? Some of the biggest haters were those who hyped it excessively.

Plus, where are the videos which disprove what I am saying?

If you can't produce any, then you will just have to disagree with me and with what loads of people on the internet are thinking.

If you have insider information, then please share it.

Nobody seems to think this will beat a GTX 1080. But if you have information showing otherwise, please show us!

If it can, it will be awesome. It will be the biggest disruption to the market since the HD 4870.

Even zlatan has said it will only beat Fury in certain situations, and he has generally been accurate in what he has said.

This is your post:

So explain to me how AMD, with a much smaller chip this generation, is supposed to match or beat a significantly larger Nvidia chip while still having decent efficiency?

The last time we had anything like that was Hawaii against a fully enabled GK110, and AMD had worse efficiency.

People are going to hype this up like Fiji and then end up disappointed by their own unreasonable expectations.

Even AMD said they wanted to "lower the cost" of a VR-capable graphics card, NOT make the fastest one.

That clearly indicates a certain minimum level of performance. This is why plenty of people expect R9 390 to Fury level performance on average.

But each to their own and all that malarkey. After all, this is a rumours and speculation thread, and none of it means anything until the launch happens! I remember nobody predicted the HD 4870's GDDR5 correctly, and it was a bit of a shock at launch, so I am happy to be wrong! 🙂




My bad, supposedly 332 mm² here:

http://wccftech.com/nvidia-pascal-gp104-gpu-pictured-leaked/

But even at 316 mm², that makes the GP104 around 25% bigger once the process nodes are equalised.
And I asked you three times already:
Have you seen any benchmark from either side to conclude that Polaris is faster than Nvidia? Or is it just that because Polaris is smaller, it will be faster?
TBH, the only thing we can say for sure about Polaris is that GloFo and TSMC say 14nm is 2.5x more efficient than 28nm and 1.9x more than 16nm. That is the only thing we know FOR sure, because we have seen it on mobile SoCs.
The die shrink alone does that, and we can't include the various updates to the uarch (only the power gating, which we now know for sure), simply because we don't know anything until the white paper comes out.
 
Even zlatan has said it will only beat Fury in certain situations, and he has generally been accurate in what he has said.

He said it would be faster than any GPU out there, in some situations.

I would expect that in situations with a lot of overdraw: scenes with lots of objects that are visually hidden. Polaris would go into cheat mode and discard everything not visible before it enters the rendering pipeline.

Its average performance should be lower (my guess is 390X+), but its minimum FPS should be very good.
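The "discard everything not visible" idea can be sketched in software terms (a simplified, CPU-side toy with made-up scene data; real hardware culling works per-primitive and far more cheaply than this):

```python
# Simplified visibility culling: skip draw calls for objects fully hidden
# behind closer opaque objects covering the same screen rectangle.
def cull_draw_calls(objects):
    """objects: list of dicts with 'rect' (x0, y0, x1, y1) and 'depth'
    (smaller = closer). Returns only the objects worth drawing."""
    visible = []
    for obj in sorted(objects, key=lambda o: o["depth"]):  # front to back
        hidden = any(covers(v["rect"], obj["rect"]) for v in visible)
        if not hidden:
            visible.append(obj)   # drawn; it may occlude later objects
    return visible

def covers(outer, inner):
    # True if rectangle `outer` fully contains rectangle `inner`.
    return (outer[0] <= inner[0] and outer[1] <= inner[1] and
            outer[2] >= inner[2] and outer[3] >= inner[3])

scene = [
    {"name": "wall",  "rect": (0, 0, 100, 100), "depth": 1},
    {"name": "chair", "rect": (20, 20, 40, 40), "depth": 5},  # behind wall
]
print([o["name"] for o in cull_draw_calls(scene)])  # the chair is culled
```

Work skipped this way is pure savings, which is why culling helps minimum FPS most in overdraw-heavy scenes, where the biggest frame-time spikes come from drawing things nobody sees.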
 
If the leaked die sizes are correct, the GP104 is 350 mm² and Polaris 10 is 232 mm². Even if we assume Polaris 10 is closer to 250 mm² on the same process as the GP104, I really think it would have to be revolutionary to catch a GTX 1070, let alone a GTX 1080! Having a die 40% larger is a big advantage in Nvidia's favour.

I personally think it will land around R9 390X speed, maybe doing better in DX12 games and going past a Fury, but with a GTX 960 class TDP.


From the latest pics, it seems that the GP104 is closer to 310-320 mm².
 
I would expect that in situations with a lot of overdraw: scenes with lots of objects that are visually hidden. Polaris would go into cheat mode and discard everything not visible before it enters the rendering pipeline.

Why would you consider it cheat mode? When you walk down a street, what do you see? Do you visualise the interiors of the buildings? Is there some kind of drawback in the end?
 
Why would you consider it cheat mode? When you walk down a street, what do you see? Do you visualise the interiors of the buildings? Is there some kind of drawback in the end?

I mean, it's a mind-blowing concept. What if your hardware could ignore a huge portion of draw calls that it never has to work on? Why the heck didn't anyone invent this hardware feature before?!

It's basically cheating; you could call it "working smarter, not harder" too.
 
I mean, it's a mind-blowing concept. What if your hardware could ignore a huge portion of draw calls that it never has to work on? Why the heck didn't anyone invent this hardware feature before?!

It's basically cheating; you could call it "working smarter, not harder" too.
You have laid out the kernel for some interesting threads.

Here we have it, folks: the next great conflict if Polaris gives the GTX 1080 a run. "AMD is cheating", pros and cons. I can see the battle lines forming.

By the way, I fully agree that this is working very intelligently.
 
I mean, it's a mind-blowing concept. What if your hardware could ignore a huge portion of draw calls that it never has to work on? Why the heck didn't anyone invent this hardware feature before?!

It's basically cheating; you could call it "working smarter, not harder" too.

So tile-based renderers are cheating?! That explains the performance of Apple SoCs. :thumbsdown:
 
So tile-based renderers are cheating?! That explains the performance of Apple SoCs. :thumbsdown:
It's a good thing I don't gamble heavily.

I was certain you would be the one to accuse AMD of cheating if they were challenging Nvidia.
 
Nah, it just shows that he has no clue. He should go back to the Kyro cards: they are 15 years old and used this "cheat" mode.
 
Nah, it just shows that he has no clue. He should go back to the Kyro cards: they are 15 years old and used this "cheat" mode.
And do you have one? :D zlatan was talking about hardware culling specifically, so it's not as simple as "tile rendering". This is not the PowerVR era; it's much more advanced now.
 