The future of AMD in graphics


tajoh111

Senior member
Mar 28, 2005
346
388
136
Seeing that the graphics in consoles are coded at a low level, do you really think it'll be easy to replace GCN optimizations? It sounds almost as hard as Nvidia using ARM to break into the market for console APUs. x86 and GCN have strong persistence due to their long-established presence. Not saying it can't happen, just that the barriers are very high, and as long as AMD does not get greedy, it should remain in use for quite a while.

With a guy like Raja, who likely knows the ins and outs of GCN, on top of Jim Keller and an unlimited budget? Probably. It won't be easy, but they can do it. That's irrelevant though.

Since this is a new console, it will use new software, and they don't need to use GCN as a base. Consoles run so well with weaker resources because there is direct communication between the games and the hardware, without the bottlenecks of a high-level API eating resources. As a result, it is in their best interest to start something new where the games are coded specifically for Intel's new hardware. This has a secondary benefit: new hardware introduces fragmentation, so that their advantages don't carry over to the competitors.

Low-level API coding is very specialized, which also means it's easy to start fresh, particularly with a console.

If you were Microsoft and Intel offered you 2 billion dollars and was willing to sell the CPU/GPU at cost, how attractive do you think this would be? You could turn that 2 billion dollars into a 100-dollar-per-unit subsidy (on the first 20 million units at least) to price your console below the competition, which could be used to win the console war. Intel's cost is truly cost, because Intel doesn't have to pay for something AMD/Nvidia have to pay for: the margin of TSMC and Samsung. This can be a tremendous advantage. TSMC's and Samsung's gross margins are around 50%; that is, chips from TSMC and Samsung come with a 50% margin built in, and since Intel makes and sells its chips directly, it can offer lower pricing and still make a profit. Let's look at the cost of the PS4, for example, and the APU in it.

https://www.gurufocus.com/term/grossmargin/TSM/Gross%2BMargin/Taiwan+Semiconductor+Manufacturing+Co+Ltd

AMD buys a chip from TSMC/GlobalFoundries for about $87 (this was roughly AMD's cost) and sells it to Sony for about $100 (the initial price AMD charged for the PS4 APU).

https://www.engadget.com/2013/11/19/ps4-costs-381-to-make-according-to-hardware-teardown/
https://www.geek.com/games/each-ps4-sale-makes-more-profit-for-amd-than-sony-but-how-much-1577855/

TSMC makes a 50% gross margin, which means the chip cost them about $43.50 to produce. So what does this illustrate? How much lower Intel can price its chips than AMD. That $43.50 of margin does not go to TSMC in Intel's case; it stays with Intel. So where AMD has a range of $87-100 in which to make a profit, Intel can go down to $43.50, which is why they can price so much lower.
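To make the arithmetic explicit, here is a back-of-the-envelope sketch using the figures assumed above (the $2B payment, the ~$87/~$100 PS4 APU prices, and the ~50% foundry gross margin are this post's assumptions, not disclosed numbers):

```python
# Back-of-the-envelope sketch of the subsidy and margin argument above.
# All inputs are the assumed figures from this post, not disclosed costs.

subsidy_total = 2_000_000_000      # hypothetical Intel payment to Microsoft, USD
units = 20_000_000                 # first ~20 million consoles
per_unit_subsidy = subsidy_total / units
print(f"Per-unit subsidy: ${per_unit_subsidy:.2f}")                # -> $100.00

amd_sell_price = 100.0             # approx. price AMD charges Sony for the PS4 APU
amd_buy_price = 87.0               # approx. price AMD pays the foundry for the chip
foundry_gross_margin = 0.50        # approx. TSMC gross margin

# The foundry's own manufacturing cost: price * (1 - gross margin).
manufacturing_cost = amd_buy_price * (1 - foundry_gross_margin)
print(f"Estimated manufacturing cost: ${manufacturing_cost:.2f}")  # -> $43.50

# AMD has to land between its buy price and its sell price to profit;
# a vertically integrated seller could in principle price down toward raw cost.
print(f"AMD's pricing floor: ${amd_buy_price:.2f}")
print(f"Integrated manufacturer's floor: ${manufacturing_cost:.2f}")
```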
 

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
Consoles run so well with weaker resources because there is direct communication between the games and the hardware, without the bottlenecks of a high-level API eating resources
What's GNMX?
Also, consoles use weaker h/w because more powerful hardware is significantly more expensive and harder to cool.

Unless you want another PS3, and you don't, do you?
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
Low-level API coding is very specialized, which also means it's easy to start fresh, particularly with a console.

What? Are you reading what you wrote? Most console devs are used to coding to GCN quirks and features. Starting fresh with a new GPU would consume a lot of resources and time.
 

Muhammed

Senior member
Jul 8, 2009
453
199
116
No, they aren't. We know for a fact that's not the case with AnandTech; they specifically say they don't retest cards. Out of all the ones you posted, there may be some that have the resources to test 25 cards every time a new card comes out, but that is certainly not the norm.
HardwareUnboxed test everything from scratch.
I know for a fact that most of these links retest their GPU lineups. Maybe Guru3D doesn't often, but the rest do, including Anand.
 

jpiniero

Lifer
Oct 1, 2010
16,829
7,279
136
AMD already has X2, PS5, MS streaming box, what else?

The Switch, of course, which was the best-selling console in the US in 2018. I would expect Nintendo to stay on ARM for a follow-up console, but it also wouldn't surprise me if they dumped nVidia.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Selling these chips at cost would be far cheaper than some of Intel's other contra-revenue schemes. Let's not forget the big one, where Intel paid Dell up to a billion dollars a year to not use AMD chips.

And they stopped it, and fired the CEO who pursued that strategy, because it was a stupid strategy. Okay, he wasn't fired solely for that reason, but it was part of it: he was making short-sighted decisions.

The one time they were in consoles was with the first Xbox, where they took a Pentium III chip and cut the cache in half. It wasn't a Celeron, because it kept the cache associativity, but it had half the cache just like Celeron chips. They gave Microsoft a cut-down Pentium III to work with. I think that tells you something: that it's a low-margin, low-revenue business.

The projects Intel gives up on are the ones that don't fit its business plan of selling high-revenue, high-margin products. They will try for a bit, but eventually let go.
 

JasonLD

Senior member
Aug 22, 2017
488
447
136
What? Are you reading what you wrote? Most console devs are used to coding to GCN quirks and features. Starting fresh with a new GPU would consume a lot of resources and time.

MS provides the low-level API, so even if MS switches GPU vendor on the next Xbox, it shouldn't be much of a problem for developers, given their track record. It is more about Intel's ability to provide something as good as what AMD could provide for consoles next year, and that part is extremely doubtful.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
It's not even about their SIMD ISA.
AMD is nice and fluffy to deal with, and their semi-custom biz proved to be a well-oiled machine, which makes the likes of Sony 300% happier.
That's just rubbish - no one in business cares about "fluffiness". Nor is AMD "well oiled"; their support, particularly for software, is sub-par. AMD have the console wins because they have the right hardware and are cheap - no one else was willing to sell their kit for such low margins. That matters more to Sony/MS than anything else.
 

SpaceBeer

Senior member
Apr 2, 2016
307
100
116
... no one else was willing to sell their kit for such low margins. That matters more to Sony/MS than anything else.

You mean no one else is able to produce such small yet powerful-enough chips that give users the same experience as a PC that costs twice as much? :)

Lisa Su mentioned they are going to invest much more in software than they did over the last few years. With the current state of CUDA (plus all the other green stuff), and Intel's efforts with One API and its traditionally good driver support for GPUs, it is probably the best thing AMD could do.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
HardwareUnboxed test everything from scratch.
I know for a fact that most of these links retest their GPU lineups. Maybe Guru3D doesn't often, but the rest do, including Anand.

Dude, I don't know why you insist on coming into AMD threads to spout off misinformation, but it's getting old.

Ryan has specifically said they don't retest GPUs. AnandTech has their "Bench" system, where they have a suite of games, all with specific settings. When a new card comes out, they test that card using those games and settings. When they come out with a new Bench (typically every 1-2 years), they retest cards using that next iteration of Bench. They have in the past gone back to test for driver changes using different drivers, but those were one-off tests.

The same goes for most review sites. Because it is a crap load of work to retest a ton of GPUs on a ton of games.
 

Muhammed

Senior member
Jul 8, 2009
453
199
116
Ryan has specifically said they don't retest GPUs. AnandTech has their "Bench" system, where they have a suite of games, all with specific settings. When a new card comes out, they test that card using those games and settings. When they come out with a new Bench (typically every 1-2 years), they retest cards using that next iteration of Bench. They have in the past gone back to test for driver changes using different drivers, but those were one-off tests.
That's a blatant lie and misinformation, and the moderators here can verify it: Anand retests every GPU they feature in a review, same with CPUs.
Dude, I don't know why you insist on coming into AMD threads to spout off misinformation, but its getting old.
What misinformation? I came here and posted a dozen verified updated tests proving the 1080Ti wipes the floor with the Vega 64.
The ones posting crap are those who claimed otherwise without any sort of evidence.
 

NTMBK

Lifer
Nov 14, 2011
10,450
5,834
136
Like hell Intel could be trusted to crank out enough mass-market chips to launch a next-gen console. Imagine if Nintendo had signed up for an Intel 10nm part for their Switch; 10nm was meant to be ready in 2016, but they would STILL be waiting for it. Would you bet your console business on Intel's fabs right now?
 

Guru

Senior member
May 5, 2017
830
361
106
Old video, using a Vega 64 LC, which is also overclocked, vs a standard 1080Ti; that's a pathetic comparison grasping at straws.
Now here are these games with the latest patches and drivers. Here are the real comparisons: the 1080Ti wipes the floor with Vega 64, and the 1080 is on par or slightly behind.


[Wolfenstein II average FPS chart]


https://techreport.com/review/34105/nvidia-geforce-rtx-2080-ti-graphics-card-reviewed/11


https://www.hardwarecanucks.com/for...a-geforce-rtx-2080-ti-rtx-2080-review-17.html

[Wolfenstein II benchmark chart]


https://www.pcper.com/reviews/Graph...2080-Ti-Review/Game-Testing-Far-Cry-5-Wolfens





https://www.guru3d.com/articles-pages/geforce-rtx-2080-ti-founders-review,21.html


[Wolfenstein II 3840x2160 benchmark chart]




https://www.anandtech.com/show/1334...tx-2080-ti-and-2080-founders-edition-review/9

And here is DOOM:

https://www.guru3d.com/articles-pages/msi-geforce-gtx-1080-ti-gaming-x-trio-review,19.html

NVIDIA even managed to beat AMD in their favorite benchmark (Ashes) by a good margin; how times have changed!

[Ashes of the Singularity benchmark chart]


https://www.pcper.com/reviews/Graph...2080-Ti-Review/Game-Testing-Far-Cry-5-Wolfens

Those are all OLD tests of those cards. That is why it's always better to compare card vs card within a single review's data for a given game, as the Windows version, drivers, and game version are the same across both cards since they are benched on the same day.
 

Muhammed

Senior member
Jul 8, 2009
453
199
116
Those are all OLD tests of those cards. That is why it's always better to compare card vs card within a single review's data for a given game, as the Windows version, drivers, and game version are the same across both cards since they are benched on the same day.
LOL. Again, none of these tests are old; they are as new as they come. All tested with the latest game patches and drivers. YOUR video is at least 15 months old!
 

Guru

Senior member
May 5, 2017
830
361
106
LOL. Again, none of these tests are old; they are as new as they come. All tested with the latest game patches and drivers. YOUR video is at least 15 months old!
The performance numbers they use for those cards are from their old data, from the last time they tested them.

Very few actually retest every single card in their test bench every time a new card comes out. They reuse the same data over and over.

Later patches actually increased Vega's lead over the 1080Ti in Wolf 2, as they brought support for rapid packed math, fp16, etc.

Vega 64 was basically just 5-6% faster than a 1080 in Wolf 2 for the first few months; subsequent patches and drivers made it much faster.

My larger point, though, is that Vega blew the Pascal competition out of the water in DX12 and Vulkan games, and GCN 4 did the same against Pascal. This is why the RX 580 beats the 1060 in 90% of DX12 and Vulkan games. Right now the 1060 is literally 20+ fps slower on average than the 580 in Wolf 2 at 1080p.
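For context on why those RPM/fp16 patches matter, here is a rough peak-throughput sketch of packed FP16 math (the per-CU count, CU count, and clock are generic GCN-style assumptions for illustration, not measured numbers):

```python
# Rough peak-throughput sketch for packed FP16 ("rapid packed math").
# Assumed GCN-style figures for illustration only, not measured data.

alus_per_cu = 64        # shader ALUs per compute unit (assumption)
flops_per_fma = 2       # one fused multiply-add counts as two floating-point ops
clock_ghz = 1.5         # example boost clock (assumption)
num_cus = 64            # e.g. a full Vega-10-class part (assumption)

fp32_tflops = alus_per_cu * flops_per_fma * clock_ghz * num_cus / 1000
fp16_tflops = fp32_tflops * 2   # two FP16 ops packed into each FP32 lane per clock

print(f"FP32 peak: {fp32_tflops:.1f} TFLOPS")   # ~12.3 TFLOPS
print(f"FP16 peak: {fp16_tflops:.1f} TFLOPS")   # ~24.6 TFLOPS with packed math
```

Games that move shader work to FP16 (as Wolfenstein II's later patches did) can tap into that doubled peak rate on hardware that supports packed math.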
 

Muhammed

Senior member
Jul 8, 2009
453
199
116
Vega 64 was basically just 5-6% faster than a 1080 in Wolf 2 for the first few months; subsequent patches and drivers made it much faster.
Again your point is FUBAR. The 1080Ti was retested alongside the RTX 2080 because it was a DIRECT competitor to it, so these 1080Ti numbers are as new as they can get. So are the Vega 64 numbers. The 1080Ti handily beats down the Vega 64 even though it doesn't have any RPM or primitive shaders.

This is why the RX 580 beats the 1060 in 90% of DX12 and Vulkan games. Right now the 1060 is literally 20+ fps slower on average than the 580 in Wolf 2 at 1080p.
Those games literally comprise less than 0.5% of all the games released in the past 5 years. Most games are DX11, and most offer good DX11 support. Most DX12 games perform worse in DX12 than in DX11.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
If AMD wants to make money from dGPUs, they have to target the $250 to $500 gaming segment. That means GTX 1660 to RTX 2070, which means TU116 (284mm2, no RTX) and TU106 (445mm2, with RTX).

So with a 200-230mm2 die at 7nm + 8GB HBM, I believe they can cover that segment range using a single die.
 

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
If AMD wants to make money from dGPUs, they have to target the $250 to $500 gaming segment. That means GTX 1660 to RTX 2070, which means TU116 (284mm2, no RTX) and TU106 (445mm2, with RTX).

So with a 200-230mm2 die at 7nm + 8GB HBM, I believe they can cover that segment range using a single die.
They don't need HBM; Navi family isn't a single die either.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
They don't need HBM; Navi family isn't a single die either.

I didn't say Navi is a single-die family; I said that a 200-230mm2 die could cover the $250 to $500 dGPU market.

Also, if they can sell Vega 56 (495mm2 + 8GB HBM) at $280 (according to the latest news), then they can definitely have a $250-500 Navi card with 8GB HBM.
 

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
I didn't say Navi is a single-die family; I said that a 200-230mm2 die could cover the $250 to $500 dGPU market.
It can't, you need two.
Preferably 3, but 7nm tapeout costs are yikes^2 so 2 it is.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
On the topic of consoles, just want to point out: the Switch is steadily coming up from behind. A console this forum said would use AMD hardware, to the point that when it was announced it would use ARM+NV, it was instantly labeled a failure.

Consolers != PC Gamers. Sure, they may wax poetic about "the resolution", but at the end of the day software matters, and even the Xbone X isn't doing too well. Not saying this will dictate next gen, but once again the underwhelming tech is proving to be a game changer (or just a fad, you decide!).
 