
AMD Vega (FE and RX) Benchmarks [Updated Aug 10 - RX Vega 64 Unboxing]


IEC

Elite Member
Super Moderator
Jun 10, 2004
14,600
6,084
136
Citations please.

[attached image: slide from the article linked below]


Source:
https://www.overclock3d.net/news/gp...ega_utilises_their_new_infinity_fabric_tech/1
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
Not quite what I was referring to. I don't expect AMD to open up the Infinity Fabric outside of AMD. I said that AMD could bypass the PCIe protocol and its overhead and use Infinity Fabric communication between AMD Ryzen and Vega.

Infinity Fabric is way more flexible than HyperTransport ever could dream to be. And I expect that if AMD chose to implement such a feature, it would be very successful indeed.

Sent from my SM-G935T using Tapatalk

Inside the APU, sure.

On the motherboard? Nope.

Also, can you shut off your Tapatalk signature? No one cares what device you post from.

Edit: Fixed with the ignore filter.
 
Last edited:
  • Like
Reactions: Kuosimodo

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Not quite what I was referring to. I don't expect AMD to open up the Infinity Fabric outside of AMD. I said that AMD could bypass the PCIe protocol and its overhead and use Infinity Fabric communication between AMD Ryzen and Vega.

Infinity Fabric is way more flexible than HyperTransport ever could dream to be. And I expect that if AMD chose to implement such a feature, it would be very successful indeed.

Sent from my SM-G935T using Tapatalk

You don't seem to realize that Infinity Fabric is a superset of HyperTransport. It even uses the same communication protocol. You could even call it HyperTransport 4.0. It's not magic.

AMD is in no position to take on the burden of going their own way with a proprietary peripheral interconnect. Even Intel doesn't do it, and if there were money to be made doing it, Intel would.
 
  • Like
Reactions: Kuosimodo

estarkey7

Member
Nov 29, 2006
108
20
91
You don't seem to realize that Infinity Fabric is a superset of HyperTransport. It even uses the same communication protocol. You could even call it HyperTransport 4.0. It's not magic.

AMD is in no position to take on the burden of going their own way with a proprietary peripheral interconnect. Even Intel doesn't do it, and if there were money to be made doing it, Intel would.
I do realize that. But I also realize it's much more sophisticated than the old HyperTransport. I never said that the GPU would only use Infinity Fabric. I said it could use Infinity Fabric when paired with an AMD Ryzen CPU to give improved throughput without the PCIe overhead. If Vega were in an i7/i9 board, it would use plain old PCIe.
 
Last edited:
  • Like
Reactions: darkswordsman17

beginner99

Diamond Member
Jun 2, 2009
5,318
1,763
136
That is within an SoC. He is talking about bypassing PCIe by using Infinity Fabric instead. On an APU, sure, which is an SoC. Between a CPU and a discrete GPU? No (at least not yet?).

Obviously that would need motherboard support, like NVLink, but it's theoretically possible. Just look at EPYC: 2-socket EPYC systems are connected via the motherboard (yes, 100% external to the CPU/SoC) through Infinity Fabric. That is why a 2P EPYC has the same number of PCIe lanes as a 1P system: the extra lanes are used for Infinity Fabric between the two sockets. Read the whole page.

Each Zeppelin die can create two PCIe 3.0 x16 links, which means a full EPYC processor is capable of eight x16 links totaling the 128 PCIe lanes presented earlier. AMD has designed these links such that they can support both PCIe at 8 GT/s and Infinity Fabric at 10.6 GT/s

From that we can deduce that, with motherboard support, the same could be done with a dGPU, and NVIDIA already does this with NVLink. But of course this is far away from any consumer usage, probably forever, as it is too expensive.
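As a rough sanity check of that lane math, here is a small Python sketch; the numbers come from the AnandTech quote above, and the exact 2P lane split is assumed here purely for illustration.

```python
# Back-of-the-envelope lane math for EPYC's flexible PCIe/IF links,
# using the numbers from the AnandTech quote above. The exact 2P split
# is an assumption shown only for illustration.

LANES_PER_X16_LINK = 16
LINKS_PER_ZEPPELIN_DIE = 2      # "each Zeppelin die can create two PCIe 3.0 x16 links"
DIES_PER_PACKAGE = 4            # eight x16 links per EPYC package

lanes_per_socket = LANES_PER_X16_LINK * LINKS_PER_ZEPPELIN_DIE * DIES_PER_PACKAGE
print(lanes_per_socket)         # 128 lanes on a 1P system

# In a 2P system, half of each socket's lanes are retrained as Infinity
# Fabric for the socket-to-socket link, leaving the same 128 PCIe lanes.
if_lanes_per_socket = lanes_per_socket // 2
exposed_pcie_lanes_2p = 2 * (lanes_per_socket - if_lanes_per_socket)
print(exposed_pcie_lanes_2p)    # still 128
```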
 
Mar 11, 2004
23,444
5,852
146
You don't seem to realize that Infinity Fabric is a superset of HyperTransport. It even uses the same communication protocol. You could even call it HyperTransport 4.0. It's not magic.

AMD is in no position to take on the burden of going their own way with a proprietary peripheral interconnect. Even Intel doesn't do it, and if there were money to be made doing it, Intel would.

Not sure if you realize the logic you're using. You're arguing that AMD wouldn't use a proprietary interconnect, which no one is claiming. I'm assuming you think AMD would have to change the physical connector, which he is explicitly saying isn't necessary, since HyperTransport uses signaling very similar to PCIe and just requires both chips to have the proper signaling support. You then point out that HyperTransport, which Infinity Fabric is based on, isn't even some new proprietary protocol. What you're arguing jibes exactly with his point: AMD won't need a new connector or, seemingly, any major change to the signaling already available with PCIe, and because they've already put Infinity Fabric support in both the CPU and now Vega, they should be able to make it work.

And likewise, for the people saying that only makes sense or works within the same chip, AMD has already shown that isn't the case: that's how they're doing CPU-to-CPU communication on their 2P EPYC line.

The question is whether that will only be available on the pro/server/enterprise stuff, or whether it will have benefits downstream. The assumption is that it should help on APUs (where it would seemingly bolster intra-chip communication), but there doesn't seem to be anything that would prevent it from being possible with Ryzen and Vega. We don't know for sure, though. It could be an artificial limitation: not enough resources to implement it on low-margin parts, or maybe it's just laying the groundwork for future products, there mostly for AMD's own debug work, and not something that will be enabled for consumers on current Ryzen and Vega hardware. It might also be something they'd only enable on Threadripper (possibly due to the PCIe lane situation, so lower Ryzen chips wouldn't be able to support it, or maybe it would be a feature they'd use to upsell Threadripper).

But AMD is explicitly planning for things to work somewhat like that; it's part of their basis for HBC and heterogeneous compute. This is hardly crazy speculation. AMD is also trying to make it easy to implement, and right now they're targeting setups below the supercomputer level (which is where NVIDIA's interconnect is deployed, and where customers can afford to tailor designs and resources to supporting NVIDIA's link), so PCIe would seem to be the best method for achieving that.
 

iBoMbY

Member
Nov 23, 2016
175
103
86
Infinity Fabric is a protocol; it doesn't care what physical medium it communicates through.

That said, it won't magically fix that medium's shortcomings. If PCIe has overhead, so would IF talking over PCIe.

If it is implemented right, it could switch from PCIe to IF, and not run IF over PCIe. PCIe is designed to be forward compatible: at initialization, both sides run at basic PCIe 1.0 and exchange information about their capabilities, and afterwards the devices switch to the highest common denominator. It should even be possible to switch to an Ethernet-physical-layer-based protocol (like Gen-Z), which doesn't need more pins per lane than PCIe.
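To make that negotiation idea concrete, here is a minimal Python sketch of the kind of capability exchange described above. Everything in it is hypothetical: real PCIe link training is done in hardware by the LTSSM, and the protocol names and `negotiate` helper are invented for illustration.

```python
# Illustrative capability negotiation as described above; all names are
# stand-ins, and real link training does not happen in driver code.

# Each endpoint advertises the protocols it can run over the shared PHY,
# ordered from most to least preferred.
HOST_CAPS   = ["IF", "PCIe4", "PCIe3", "PCIe1"]
DEVICE_CAPS = ["IF", "PCIe3", "PCIe1"]

def negotiate(host_caps, device_caps):
    """Start at the baseline (PCIe 1.0), exchange capability lists,
    then switch to the best protocol both ends have in common."""
    link = "PCIe1"                      # baseline every device must speak
    for proto in host_caps:             # host preference order
        if proto in device_caps:
            link = proto
            break
    return link

print(negotiate(HOST_CAPS, DEVICE_CAPS))   # -> "IF" if both ends support it
```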

Edit: CCIX is the better candidate for this though:

CCIX expands upon PCI Express to allow interconnected components to access and share cache-coherent data similar to a processor core. The CCIX interconnect operates at up to 25Gb/s per lane, providing 3X speed up over present industry standard PCIe Gen3 interfaces running at 8Gb/s, and 56% faster than upcoming PCIe Gen4 interfaces operating at 16Gb/s. CCIX is backward compatible with PCIe 3.0 and 4.0, leveraging existing server ecosystems and form factors while lowering software barriers

https://www.ccixconsortium.com/single-post/2017/06/28/Welcome-to-the-new-CCIX-Blog
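The speed-up figures in that quote check out with simple per-lane arithmetic; a quick sketch:

```python
# Sanity-checking the per-lane rates quoted above.
ccix_gbps = 25.0    # CCIX, per lane
gen3_gbps = 8.0     # PCIe Gen3
gen4_gbps = 16.0    # PCIe Gen4

print(ccix_gbps / gen3_gbps)                     # ~3.1x -> the "3X speed up" vs Gen3
print(round((ccix_gbps / gen4_gbps - 1) * 100))  # 56    -> "56% faster" than Gen4
```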
 
Last edited:
  • Like
Reactions: darkswordsman17

stahlhart

Super Moderator Graphics Cards
Dec 21, 2010
4,273
77
91
Thread cleaned and reopened. Complaints about how much the moderation in this forum sucks go to Moderator Discussions -- stop derailing threads here with them.
-- stahlhart
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
Yeah, about that ...


Putting something on a slide doesn't turn it into a product.

As already mentioned, IF just uses PCIe lanes off-socket; throwing the IF protocol on top of PCIe isn't going to create any benefit for connecting off-socket GPUs, which already use PCIe.

It would just be more work, more testing for no gain. It's pointless.
 

nathanddrews

Graphics Cards, CPU Moderator
Aug 9, 2016
965
534
136
Tom's Hardware has a pretty thorough review of Vega FE vs Quadro P6000. They figure that since the Titan Xp can't do 10-bit OpenGL overlays, but Vega FE and P6000 can... let them fight! Lots of 2D and 3D workstation benchmarks, plus the regular Cinebench, SPEC, gaming in DX11, DX12, OGL, Vulkan. Hot vs. cold power testing, etc.

http://www.tomshardware.com/reviews/amd-radeon-vega-frontier-edition-16gb,5128.html
 
  • Like
Reactions: wanderica and cm123

estarkey7

Member
Nov 29, 2006
108
20
91
Putting something on a slide doesn't turn it into a product.

As already mentioned, IF just uses PCIe lanes off-socket; throwing the IF protocol on top of PCIe isn't going to create any benefit for connecting off-socket GPUs, which already use PCIe.

It would just be more work, more testing for no gain. It's pointless.

You seem to feel that Infinity Fabric has to run on top of PCIe, which is wrong. AMD can configure the PHY (the physical hardware lines) to use the Infinity Fabric protocol instead of PCIe if it detects a Vega GPU at the other end. No motherboard support required, no hardware changes, no proprietary interface; just driver support to repurpose the PCIe connector.
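As a purely hypothetical sketch of that idea in Python: probe what sits at the other end of the link at baseline PCIe, then pick the wire protocol. None of these classes or functions correspond to a real driver API, and the device IDs are examples only.

```python
# Hypothetical sketch of the idea above; no real driver exposes an API
# like this, and the names/IDs are illustrative.

AMD_VENDOR_ID = 0x1002
VEGA_DEVICE_IDS = {0x6863}          # e.g. Vega Frontier Edition (example ID)

class MockPhy:
    """Stand-in for the physical link; real link training happens in hardware."""
    def __init__(self, vendor, device):
        self.vendor, self.device, self.mode = vendor, device, None

    def train(self, mode):
        self.mode = mode            # pretend to retrain the lanes

    def read_ids(self):
        return self.vendor, self.device

def select_protocol(phy):
    phy.train("pcie_gen1")                      # baseline bring-up
    vendor, device = phy.read_ids()
    if vendor == AMD_VENDOR_ID and device in VEGA_DEVICE_IDS:
        phy.train("infinity_fabric")            # retrain the same lanes as IF
        return "IF"
    phy.train("pcie_best_common")               # everything else stays on PCIe
    return "PCIe"

print(select_protocol(MockPhy(0x1002, 0x6863)))  # -> IF
print(select_protocol(MockPhy(0x10DE, 0x1B80)))  # -> PCIe (e.g. a GeForce card)
```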
 
  • Like
Reactions: tonyfreak215

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Tom's Hardware has a pretty thorough review of Vega FE vs Quadro P6000. They figure that since the Titan Xp can't do 10-bit OpenGL overlays, but Vega FE and P6000 can... let them fight! Lots of 2D and 3D workstation benchmarks, plus the regular Cinebench, SPEC, gaming in DX11, DX12, OGL, Vulkan. Hot vs. cold power testing, etc.

http://www.tomshardware.com/reviews/amd-radeon-vega-frontier-edition-16gb,5128.html

Page 11 is especially interesting.
GPU rises to 85C and levels off.
HBM rises to 95C and levels off... is the MEMORY thermally limited?! Could this be making real bandwidth much lower than design goals? We know they had to overvolt HBM and still couldn't meet their clock rate target.
 

[DHT]Osiris

Lifer
Dec 15, 2015
17,382
16,664
146
HBM rises to 95C and levels off... is the MEMORY thermally limited?! Could this be making real bandwidth much lower than design goals? We know they had to overvolt HBM and still couldn't meet their clock rate target.
Wouldn't surprise me. It's not the first time an unexpected (at least to us, maybe not to the design team) thermal issue has cropped up in a place it traditionally hasn't been seen before; see VRM and M.2 SSD throttling.
 

Krteq

Golden Member
May 22, 2015
1,009
729
136
HBM rises to 95C and levels off... is the MEMORY thermally limited?! Could this be making real bandwidth much lower than design goals? We know they had to overvolt HBM and still couldn't meet their clock rate target.
Yep, HBM2 has a thermal sensor and can "throttle" itself to lower temps.
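A toy model of that kind of sensor-driven throttling, purely illustrative and not AMD's actual algorithm; the 95°C plateau comes from the Tom's Hardware data above, while 945 MHz as the nominal HBM2 clock is an assumption.

```python
# Toy model of sensor-driven memory throttling; the real behaviour lives
# in firmware/hardware, not in code like this.

HBM_TEMP_LIMIT_C = 95      # plateau seen in the Tom's Hardware data
CLOCK_STEP_MHZ = 25
MIN_CLOCK_MHZ = 500
MAX_CLOCK_MHZ = 945        # assumed nominal HBM2 clock

def throttle_step(temp_c, clock_mhz):
    """Drop the memory clock while at/over the limit, recover when below it."""
    if temp_c >= HBM_TEMP_LIMIT_C:
        return max(clock_mhz - CLOCK_STEP_MHZ, MIN_CLOCK_MHZ)
    return min(clock_mhz + CLOCK_STEP_MHZ, MAX_CLOCK_MHZ)

clock = MAX_CLOCK_MHZ
for temp in (80, 90, 96, 97, 96, 92):
    clock = throttle_step(temp, clock)
    print(temp, clock)
```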
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Yep, HBM2 has a thermal sensor and can "throttle" itself to lower temps.

I wonder if this could explain the odd results from PCGH.de, where bandwidth was measured at only ~300 GB/sec when it should be ~480 GB/sec.
If the memory is throttling (which I don't recall seeing or hearing about on any other GPU), then that would also go a long way toward explaining poor performance. It would be interesting to see if underclocking/undervolting the RAM gives better results, or at least the same results at lower power usage. HBM2 is being run way above the official JEDEC voltage.
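For reference, the ~480 GB/sec figure follows directly from the memory configuration; a quick sketch, assuming Vega FE's two HBM2 stacks and a nominal 945 MHz memory clock:

```python
# Theoretical peak HBM2 bandwidth for Vega FE, to show where ~480 GB/s
# comes from (two stacks and a 945 MHz nominal clock assumed).

stacks = 2
bus_width_bits = 1024 * stacks   # 1024-bit interface per HBM2 stack
hbm_clock_mhz = 945              # DDR, so 1.89 Gb/s per pin

bandwidth_gbs = bus_width_bits * hbm_clock_mhz * 2 / 8 / 1000
print(bandwidth_gbs)             # ~483.8 GB/s; PCGH measured only ~300 GB/s
```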
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Wouldn't surprise me. It's not the first time an unexpected (at least to us, maybe not to the design team) thermal issue has cropped up in a place it traditionally hasn't been seen before; see VRM and M.2 SSD throttling.

Yes.

It's the same with CPUs and LEDs. At the beginning, they didn't even need a heatsink. Performance is what drives them to need cooling: first heatsinks, then heatsinks with fans on them.
 
  • Like
Reactions: [DHT]Osiris
Status
Not open for further replies.