Does the RTX series create an opening for AMD?

Page 4 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
I think AMD is finished competing with Nvidia in the $300 USD+ market.

With RTX prices, the $300 market will shift to entry level. I'm sure they can afford another 300-500mm² die.

The only architecture they have that is competitive is Polaris. They need to get Polaris on 12nm or 7nm with higher clocks to compete with the 1050/1060/2060.

Polaris was introduced in 2016; are we all here suggesting that AMD doesn't have a better architecture to release in 2018-2019?
From what we have seen from the Raven Ridge APUs, the Vega architecture is doing OK even without any tinkering, and just porting it to 7nm will make it more than enough to compete in the $600+ market against the RTX 2070 and RTX 2080.

Vega was absolute trash and everyone who adopted one was absolutely ripped off by AMD.

Sorry, but at launch Vega 56 was easily faster than the GTX 1070 at the same price ($399), and Vega 64 was very close to the GTX 1080 at a lower price ($499).
 
  • Like
Reactions: guachi

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Vega as a gaming GPU is pretty much a dead chip. Why even bother with a very expensive shrink if they have a newer GPU architecture in the pipeline? Nonsense...

It was just to show that even a straight port of Vega to 7nm would let them compete easily with the RTX cards up to the RTX 2080. I didn't say they will do it.
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,250
136
AMD is focused on their long-term goals at this time. It makes more sense for them to focus on products that have the highest probability of sales and ignore the ones that are more or less bargaining chips against others.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
Sorry, but at launch Vega 56 was easily faster than the GTX 1070 at the same price ($399), and Vega 64 was very close to the GTX 1080 at a lower price ($499).

Months before the Vega launch, NVidia officially cut the MSRP of the GTX 1080/1070 to $499/$349. So there were no price/performance benefits to Vega on paper, and not in reality either, because mining exploded prices, more on AMD cards than on NVidia ones.

It was just to show that even a straight port of Vega to 7nm would let them compete easily with the RTX cards up to the RTX 2080. I didn't say they will do it.

Yes, because 7nm is the magic process; it makes every design win. ;)

It looks like there will be a lot of disappointment if 7nm doesn't turn out to be a universal panacea that makes every chip as good as people can imagine.
 

Muhammed

Senior member
Jul 8, 2009
453
199
116
I want an apples-to-apples detailed comparison between Nvidia's and AMD's real-time hybrid ray tracing solutions. No marketing nonsense and no cornball claims coming from the denoised/upsampled output. How much is the ray tracing portion actually processing, in raw compute numbers? If Nvidia truly were doing 10x as much processing as AMD, Jensen wouldn't shut his mouth about it. So they likely are not. Ray tracing cores are just a bunch of ALUs in the SM that take the place of the double-precision compute portion of Volta. They're clocked and locked at the same rates as the rest of the SM, so they're not doing anything magical in them.
Yeah, we should believe you over the rest of engineers and developers who adopted the tech.
If Nvidia truly were doing 10x as much processing as AMD, Jensen wouldn't shut his mouth about it.

NVIDIA hasn't referenced AMD once since the Kepler days; they are not even on the map as far as NVIDIA is concerned. That's how pathetic AMD has become.
 
  • Like
Reactions: PeterScott

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Yes, because 7nm is the magic process; it makes every design win. ;)

It looks like there will be a lot of disappointment if 7nm doesn't turn out to be a universal panacea that makes every chip as good as people can imagine.

All you have to do is take a look at what an impact a puny half-node process had on the GTX 285 (55nm) vs the GTX 280 (65nm).

Now do the math for a full node jump like TSMC 7nm vs 16nm. You don't need a PhD in astrophysics to see that with a 7nm Vega, AMD will be able to compete easily in non-RT workloads up to the RTX 2080.

1a-TSMC-7-speed-power.png
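As a rough illustration of what that math looks like, here is a back-of-envelope sketch in Python. The headline figures are assumptions of the kind TSMC's marketing slides cite for 7nm vs 16nm (roughly ~35% speed gain, or ~65% power cut, and ~3.3x logic density); real chips never get all three at once, and wires, I/O, and memory PHYs scale far worse than logic:

```python
# Back-of-envelope node-scaling sketch. The headline figures below are
# assumptions (commonly cited TSMC-style claims for 7nm vs 16nm); real
# designs rarely achieve them, and architecture matters as much as process.

SPEED_GAIN = 0.35      # ~35% higher clocks at the same power (assumed)
POWER_CUT = 0.65       # ~65% lower power at the same clocks (assumed)
DENSITY_GAIN = 3.3     # ~3.3x logic density (assumed)

def iso_power_clock(base_clock_mhz):
    """Estimated clock if the entire node speed gain is spent on frequency."""
    return base_clock_mhz * (1 + SPEED_GAIN)

def iso_clock_power(base_tdp_w):
    """Estimated TDP if the entire node gain is spent on power instead."""
    return base_tdp_w * (1 - POWER_CUT)

def shrunk_die(base_die_mm2):
    """Estimated die size for a straight port (logic only; I/O and HBM PHYs
    shrink poorly, so a real die would be larger than this)."""
    return base_die_mm2 / DENSITY_GAIN

# Vega 64 as the starting point: ~1.5 GHz boost, 295 W board power, 486 mm^2
print(f"clock: ~{iso_power_clock(1500):.0f} MHz at the same power")  # ~2025 MHz
print(f"power: ~{iso_clock_power(295):.0f} W at the same clocks")    # ~103 W
print(f"die:   ~{shrunk_die(486):.0f} mm^2 (optimistic lower bound)")
```

Even heavily discounted versions of these numbers leave a straight Vega port well above Vega 64; whether such a part would actually ship is a separate question.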
 
  • Like
Reactions: Flayed

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
All you have to do is take a look at what an impact a puny half-node process had on the GTX 285 (55nm) vs the GTX 280 (65nm).

Now do the math for a full node jump like TSMC 7nm vs 16nm. You don't need a PhD in astrophysics to see that with a 7nm Vega, AMD will be able to compete easily in non-RT workloads up to the RTX 2080.

1a-TSMC-7-speed-power.png

Time will tell if idealized projections materialize in the real world.
 

arandomguy

Senior member
Sep 3, 2013
556
183
116
I mean, they already showed this when Fury (28nm) to Vega (14nm) easily achieved a 30% performance uplift at half the die size and much lower power. It stands to reason that Vega to 7nm will be able to do the same as well.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
28nm to 14nm was more about the introduction of FinFETs than about the die shrink, when it comes to the power improvements.

There is no similar tech jump this time that I am aware of.

How well the 7nm process (and individual designs using it) goes still remains to be seen in practical terms.

It seems most people are just assuming perfection, which often sets up for later disappointment.
 
Last edited:

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
I mean, they already showed this when Fury (28nm) to Vega (14nm) easily achieved a 30% performance uplift at half the die size and much lower power. It stands to reason that Vega to 7nm will be able to do the same as well.

Might want to check your numbers.

Fury 598mm2 to Vega 486mm2.

Only the power-save settings used less power than a Fury X:
https://tpucdn.com/reviews/AMD/Radeon_RX_Vega_64/images/power_average.png

While only the non-save settings (which use more power than a Fury X) managed to even get close to a 30% performance gain:
https://tpucdn.com/reviews/AMD/Radeon_RX_Vega_64/images/perfrel_3840_2160.png

Fury to Vega is a huge disappointment, seeing as it went from 28nm to 14nm.
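The die-size part of that correction is easy to quantify from the numbers above. A quick sketch, noting that this ignores the fact that Vega 10 packs considerably more transistors than Fiji, so it is not a pure shrink comparison:

```python
# Die sizes quoted above: Fiji (Fury X) at 28 nm vs Vega 10 at 14 nm.
fiji_mm2, vega_mm2 = 598, 486

actual_shrink = fiji_mm2 / vega_mm2
# Naive ideal: area scales with feature size squared. Never achieved in
# practice (marketing node names, poor wire/I/O scaling), so this is only
# a theoretical ceiling for comparison.
ideal_shrink = (28 / 14) ** 2

print(f"actual: {actual_shrink:.2f}x smaller")       # ~1.23x
print(f"naive ideal: {ideal_shrink:.1f}x smaller")   # 4.0x
```

The gap between ~1.23x and any plausible ideal is the quantitative core of the "huge disappointment" point.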
 

arandomguy

Senior member
Sep 3, 2013
556
183
116
Might want to check your numbers.

I wasn't being serious; it was in response to the discussion chain about 14nm to 7nm.

If we did use Fury to Vega as the example, it'd mean they would be competing against the RTX 2080 using a 7nm GPU of roughly the same size, fully enabled, and with likely more expensive memory. Not exactly sure if competing on price would be the move there.
 

Hitman928

Diamond Member
Apr 15, 2012
6,696
12,373
136
Might want to check your numbers.

Fury 598mm2 to Vega 486mm2.

Only the power-save settings used less power than a Fury X:
https://tpucdn.com/reviews/AMD/Radeon_RX_Vega_64/images/power_average.png

While only the non-save settings (which use more power than a Fury X) managed to even get close to a 30% performance gain:
https://tpucdn.com/reviews/AMD/Radeon_RX_Vega_64/images/perfrel_3840_2160.png

Fury to Vega is a huge disappointment, seeing as it went from 28nm to 14nm.

Others showed Vega more favorably:

QUaoooL.png

https://www.computerbase.de/2017-08...#abschnitt_ultrahdbenchmarks_in_3840__2160_4k

But yes, Vega was still a disappointment from a gaming perspective. They threw a bunch of stuff in it for compute/professional use that bloated the die size and power use, and some of the promised features meant to help the gaming side were MIA.
 

Ottonomous

Senior member
May 15, 2014
559
293
136
Others showed Vega more favorably:

QUaoooL.png

https://www.computerbase.de/2017-08...#abschnitt_ultrahdbenchmarks_in_3840__2160_4k

But yes, Vega was still a disappointment from a gaming perspective. They threw a bunch of stuff in it for compute/professional use that bloated the die size and power use, and some of the promised features meant to help the gaming side were MIA.
Unrelated question, sorry, but why does Vega take such a small performance hit with HDR at high resolutions? I've never understood how this happens at an architectural level (color compression?).

Could this have implications for HDR on Turing?
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Fury X to Vega 10 came with a 40% increase in transistor count and a different microarchitecture. That comparison has nothing in common with a direct port of Vega 10 to 7nm.

But if you want to compare 28nm vs 14nm, look at Hawaii (R9 290X) vs Polaris (RX 480). Polaris at 14nm, with fewer transistors (5.7B vs 6.2B), has the same performance as Hawaii or better at roughly half the TDP (150W vs 290W) and almost half the die size (232mm² vs 438mm²).

Porting Vega 10 to 7nm and just increasing performance by 10% would create a product close to the RTX 2070, with higher perf/watt in non-RT workloads and a smaller die. We could have a Vega Nano with only a 150-170W TDP.

The only problem I see is not whether this is doable, but whether AMD will be able to release such a product 6-9 months before NV can release a 7nm SKU. In order for such a product to succeed, they would have to release it in Q1 2019; NV will not release a 7nm consumer product before September 2019 at the earliest. Otherwise they would be better off waiting and going with Navi in H2 2019 or even 2020, and trying to compete directly with NV at 7nm.
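Using only the figures quoted in this thread, the Hawaii-to-Polaris jump can be restated as a perf/W ratio, and the same arithmetic applied to the hypothetical 7nm Vega described above. The +10% performance and ~150-170W figures are this post's speculation, not announced specs:

```python
# Perf-per-watt ratio from the Hawaii -> Polaris numbers quoted above
# (equal performance assumed, per the post). The 7nm Vega line is purely
# hypothetical: it assumes the post's "+10% performance at ~150-170 W".

def perf_per_watt_gain(perf_ratio, old_tdp_w, new_tdp_w):
    """Relative perf/W of the new part vs the old one."""
    return perf_ratio * old_tdp_w / new_tdp_w

# Hawaii (290 W) -> Polaris (150 W), same performance:
hawaii_to_polaris = perf_per_watt_gain(1.0, 290, 150)
print(f"Hawaii -> Polaris: ~{hawaii_to_polaris:.2f}x perf/W")        # ~1.93x

# Vega 64 (295 W) -> hypothetical 7nm Vega (+10% perf at 160 W):
vega_7nm = perf_per_watt_gain(1.10, 295, 160)
print(f"Vega -> 7nm Vega (hypothetical): ~{vega_7nm:.2f}x perf/W")   # ~2.03x
```

So the 7nm Vega scenario sketched in the post implies a perf/W jump of the same magnitude as the 28nm-to-14nm generation, which is exactly the optimistic assumption being debated.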
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,250
136
The funniest thing about this thread: those who wish AMD were competitive, to keep Nvidia's pricing in check, also say AMD doesn't have a chance.

Monopoly tax is real money... not Monopoly money.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
https://www.youtube.com/watch?v=WoQr0k2IA9A&list=LLAZUTUJGRCVV5Jvkk5k69wQ
I think AMD is finished competing with Nvidia in the $300 USD+ market. The only architecture they have that is competitive is Polaris. They need to get Polaris on 12nm or 7nm with higher clocks to compete with the 1050/1060/2060. Vega was absolute trash and everyone who adopted one was absolutely ripped off by AMD.

Arguably, the NVidia 50/60 series are THE most important GPU targets, and IMO it seems unlikely that the new 2050/2060 will have any RT elements. It looks like the performance target for the 2080 Ti in RT games is 1080p/60FPS, so there is no way a 2060-or-below class card can have playable frame rates; it makes sense to leave RT HW out of the lower-end cards.

This means NO RT-Tax to exploit. AMD will need to go head to head against these cards in standard raster workloads, when NVidia won't be wasting silicon on RT HW.

IMO AMD has its work cut out trying to win back gamers after mining drove even more buyers to NVidia. Just look at the Steam HW survey to see how few Radeon RX cards are in the hands of gamers, despite the GTX 1060 and RX 480/580 being essentially equivalent in price and performance.

TL;DR: Yeah, they need a new competitor ready for the 2060/2050 ASAP, maybe even striking first for a change.

Edit: Fix, added missing NO.
 
Last edited:

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,250
136
This means RT-Tax to exploit. AMD will need to go head to head against these cards in standard raster workloads, when NVidia won't be wasting silicon on RT HW.

You left out the important NO.

IMO AMD has its work cut out trying to win back gamers after mining drove even more buyers to NVidia. Just look at the Steam HW survey to see how few Radeon RX cards are in the hands of gamers, despite the GTX 1060 and RX 480/580 being essentially equivalent in price and performance.

AMD should just let you guys who voted with your wallets enjoy the high prices. I don't think it's that they can't make great products, but more that they choose not to at this time. I don't really see anything in it for AMD when, in the end, most end users really only want a better price on their Nvidia product of choice. Devoting resources to decent products they can sell, while building back up the company image, is what's best for AMD in their current situation. AMD staying focused on the big picture is best for them... what the big picture is, only the future will reveal.
 
  • Like
Reactions: guachi

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
AMD should just let you guys who voted with your wallets enjoy the high prices. I don't think it's that they can't make great products, but more that they choose not to at this time. I don't really see anything in it for AMD when, in the end, most end users really only want a better price on their Nvidia product of choice. Devoting resources to decent products they can sell, while building back up the company image, is what's best for AMD in their current situation. AMD staying focused on the big picture is best for them... what the big picture is, only the future will reveal.

How do they build their reputation without having great products?

As far as only wanting a deal on an NVidia product, I think I am quite open-minded about the options. I was thinking of upgrading my old system with a new GPU, GTX 1060/RX 480 class. Initially I looked at both, but my old computer/PSU only has a 6-pin PCIe GPU power cable, so the 8-pin RX 480 was eliminated immediately. People say power doesn't matter, but it can matter a lot when upgrading old computers that need to stay in a power envelope.

Power usage is likely a big part of why NVidia owns the laptop dGPU business as well. AMD needs to catch NVidia on efficiency.

This thread is about openings, and they might have a small window where they could potentially be on 7nm before NVidia, since NVidia chose to do a new series on 12nm (16nm+).

IMO, that potential opening should be aimed at the critical GTX 50/60-series segment, because that is the majority of the GPU card market and a huge chunk of the laptop dGPU market. That's a massive total addressable market that AMD is currently nearly shut out of (more so on the laptop side).

I am hoping AMD's silence is not indicative of nothing going on, but simply a change from the pre-Vega-launch days of constant empty hype.
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,250
136
How do they build their reputation without having great products?

As far as only wanting a deal on an NVidia product, I think I am quite open-minded about the options. I was thinking of upgrading my old system with a new GPU, GTX 1060/RX 480 class. Initially I looked at both, but my old computer/PSU only has a 6-pin PCIe GPU power cable, so the 8-pin RX 480 was eliminated immediately. People say power doesn't matter, but it can matter a lot when upgrading old computers that need to stay in a power envelope.

Power usage is likely a big part of why NVidia owns the laptop dGPU business as well. AMD needs to catch NVidia on efficiency.

This thread is about openings, and they might have a small window where they could potentially be on 7nm before NVidia, since NVidia chose to do a new series on 12nm (16nm+).

IMO, that potential opening should be aimed at the critical GTX 50/60-series segment, because that is the majority of the GPU card market and a huge chunk of the laptop dGPU market. That's a massive total addressable market that AMD is currently nearly shut out of (more so on the laptop side).

I am hoping AMD's silence is not indicative of nothing going on, but simply a change from the pre-Vega-launch days of constant empty hype.

Zen-based offerings aren't great products? How'd that work out in the end? Intel offers more cores, drops prices, and the majority of end users pimp Intel as the best option for just about anybody's use case without a second thought. AMD is selling Zen offerings, no doubt, but not as many as they could, due to the die-hard mentality of the end-user base. It takes time to overcome the decade of despair that AMD has suffered. I guess in the end AMD at least gets the satisfaction of knowing they forced Intel to give the end user more for the money.

When it comes to GPUs, the goalposts always shift to whatever the end users want to focus on. If AMD caught Nvidia on efficiency, then the focus would be raw horsepower. If AMD matched Nvidia on both raw horsepower and efficiency, then the focus would be on temps, noise, drivers, added features, and the list goes on. I would think anybody who followed the scene would know that's how it works.

When I was into PC gaming, I wouldn't have let an old power supply drive my buying decision. If I was interested in a card and my power supply was old and didn't have the required connector, I'd just buy a new one. Sounds more like a rationalization of one's own buying decision to me.

Yes, AMD should try to take advantage of their 7nm window of opportunity. I'm not saying they will or won't, as only AMD knows the answer. I agree they might just be staying silent and getting ready to take advantage of it. They could also be focusing on other things or fulfilling other contracts.
 
  • Like
Reactions: MangoX

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
Zen-based offerings aren't great products? How'd that work out in the end? Intel offers more cores, drops prices, and the majority of end users pimp Intel as the best option for just about anybody's use case without a second thought. AMD is selling Zen offerings, no doubt, but not as many as they could, due to the die-hard mentality of the end-user base. It takes time to overcome the decade of despair that AMD has suffered. I guess in the end AMD at least gets the satisfaction of knowing they forced Intel to give the end user more for the money.

You are the one who said:
"I don't think it's that they can't make great products, but more like they choose not too at this time. "

Choosing not to build great products is a failing strategy, IMO.

Ryzen is a great product, and it's what AMD is winning back market share with; that is what it takes. You can't expect to win the whole market overnight because of one great product.

You keep building great products, you keep winning market share, and you keep building your reputation.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
I suspect you're right re: no RT/AI hardware in 50/60-denominated products. So the only opening for AMD would be a sufficiently large/fast die that performs at or above the 2070's level in rasterization while costing at or below the 2060's level.

And given lead times, it basically has to exist already, and it can't be far off from that target.
 

JustMe21

Senior member
Sep 8, 2011
324
49
91
While Nvidia has improved their power efficiency, their "performance" gains tend to come from increased core counts in each next-gen model, e.g. GTX 680 (1536 CUDA cores) vs GTX 780 (2304), then architecture improvements: GTX 980 (2048) vs GTX 1080 (2560) vs RTX 2080 (2944). Generational performance increases look smaller if you go by core counts rather than model numbers. But Nvidia does deliver the speed and performance people want. Ray tracing is nice for the casual gamer and possibly things like MOBAs, but in fast-paced competitive gaming, people tend not to max out detail, preferring better performance, so ray tracing would be left disabled.
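The core-count claim is easy to check against the numbers in the post. A small sketch printing the generation-to-generation core-count ratios (clocks and per-core IPC are ignored, which is exactly why the 780-to-980 step looks like a regression despite the 980 being the faster card):

```python
# CUDA core counts quoted above, by x80-class generation. This only shows
# how much of each jump is raw core count; clocks and per-core throughput
# changed too, so it is a deliberately crude illustration of the point.

cores = {
    "GTX 680": 1536,
    "GTX 780": 2304,
    "GTX 980": 2048,
    "GTX 1080": 2560,
    "RTX 2080": 2944,
}

names = list(cores)
for prev, curr in zip(names, names[1:]):
    ratio = cores[curr] / cores[prev]
    print(f"{prev} -> {curr}: {ratio:.2f}x cores")
```

The 680-to-780 step is a 1.50x core increase, while the 1080-to-2080 step is only 1.15x, which is the post's point about model numbers overstating per-core progress.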

AMD does seem to indicate there will be some performance increase from the Vega shrink, so hopefully it will do better at 7nm.

I think AMD and Intel will eventually work on incorporating GPU features into the actual CPU architecture so they can run at faster clock speeds, communicate directly with the CPU, and use a quick cache for faster communication between the two. It would also be nice if the APUs allowed CrossFire between the APU and add-in cards. That might be the time Nvidia is seriously threatened, unless they can clock their GPUs at clock speeds similar to today's CPUs.
 

guachi

Senior member
Nov 16, 2010
761
415
136
No. AMD doesn't have an opening. Mining and high prices have effectively killed high-end PC gaming for me. Nvidia shows no signs of reining in high prices with their new cards.

Consumers have shown they will never buy AMD cards in quantity no matter how competitive they are, so why should they even try? Consumers want to overpay for Nvidia cards, and overpay they shall.

I'm glad I got my 480 for $200, as I'll be stuck with it for some time. The result is that my PC game purchases have dwindled to near zero (one; a single game that isn't graphics intensive).
 

JustMe21

Senior member
Sep 8, 2011
324
49
91
Hopefully Intel's offering makes an impact, but I wouldn't be surprised if it's aimed mainly at higher-level computing and virtualization.
 

Geegeeoh

Member
Oct 16, 2011
147
126
116
Unrelated question, sorry, but why does Vega take such a small performance hit with HDR at high resolutions? I've never understood how this happens at an architectural level (color compression?).

Could this have implications for HDR on Turing?
HDR needs memory bandwidth. Vega has a lot, while Nvidia's compression can't help much here.
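For a sense of scale, here is a crude sketch of why HDR costs bandwidth at all: HDR render targets typically use wider pixel formats, e.g. FP16 RGBA at 8 bytes per pixel vs 4 bytes for standard 8-bit RGBA. The numbers are illustrative assumptions, not measurements, and real GPUs compress render targets and touch each pixel many times per frame:

```python
# Rough framebuffer-traffic sketch: doubling bytes per pixel doubles the
# raw pixel traffic for a render target. Illustration only; real GPU
# bandwidth use per frame is many times higher than a single write pass.

def framebuffer_gbps(width, height, bytes_per_pixel, fps, writes_per_frame=1):
    """GB/s of raw pixel traffic for one render target."""
    return width * height * bytes_per_pixel * fps * writes_per_frame / 1e9

sdr = framebuffer_gbps(3840, 2160, 4, 60)   # 8-bit RGBA
hdr = framebuffer_gbps(3840, 2160, 8, 60)   # FP16 RGBA (assumed HDR format)
print(f"4K60 SDR: ~{sdr:.1f} GB/s, HDR: ~{hdr:.1f} GB/s")
```

The extra traffic is small next to a card's total bandwidth, which is why the hit is usually modest; it bites mainly when bandwidth is already the bottleneck, which fits the point about Vega's HBM2 headroom.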
 
  • Like
Reactions: Ottonomous