Vega FE vs. Fury X at Same Clocks: "IPC" Testing [Gamer's Nexus]

Bacon1

Making this its own post since it's quite good information and would likely be buried in the megathreads.

http://www.gamersnexus.net/guides/2977-vega-fe-vs-fury-x-at-same-clocks-ipc

[Charts from the article: vega-v-furyx-firestrike-ultra.png, vega-v-furyx-firestrike-normal.png, vega-v-furyx-firestrike-normal_fps.png, vega-vs-furyx-mll-4k.png, vega-vs-furyx-mll-1080p_avgfps-line.png, vega-v-furyx-specviewperf.png]


Here’s what we’ve got: with 3DSMax, the Vega FE card has a stock weighted performance of 149.3FPS, the 1050MHz version sits at 121.5FPS (positive scaling of 23%), and the Fury X manages 92.7FPS. That places the Vega FE 1050MHz card 31% ahead of the Fury X at 1050MHz.
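Those scaling figures are plain ratios of the weighted FPS numbers; a quick sketch, using the FPS values from the article (the helper function is mine, just for illustration):

```python
# Relative scaling between two weighted-FPS results (helper is illustrative).
def scaling_pct(faster_fps: float, slower_fps: float) -> float:
    """Percent advantage of the faster result over the slower one."""
    return (faster_fps / slower_fps - 1.0) * 100.0

# 3DSMax viewset figures quoted above:
vega_fe_stock = 149.3  # Vega FE, stock clocks
vega_fe_1050 = 121.5   # Vega FE, downclocked to 1050MHz
fury_x_1050 = 92.7     # Fury X at 1050MHz

print(f"Vega FE stock vs. 1050MHz: {scaling_pct(vega_fe_stock, vega_fe_1050):.0f}%")       # ~23%
print(f"Vega FE vs. Fury X, both 1050MHz: {scaling_pct(vega_fe_1050, fury_x_1050):.0f}%")  # ~31%
```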

Moving to Catia, the Vega FE card at stock operates 19% faster than the 1050MHz card, which in turn operates 55.4% faster than the same-clocked Fury X.

The Energy test gains about 2x performance by moving to the Vega card; note that this isn’t power consumption, it’s a specific test named “energy” in the SPECviewperf suite.

Maya posts scaling of 40% for the FE card, which seems to align with other Dx11 tests.

The SNX test is one of the most interesting. It is generated from Siemens NX software, with model sizes of 7 million to 8.5 million vertices. The Fury X gets eviscerated here, outperformed nearly 7x by the clock-for-clock Vega FE card. With this particular pro application, Vega FE appears to be showing its strength in vertex processing, or potentially in memory capacity.
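As a rough sanity check on the memory-capacity guess: assuming a plain 32-byte vertex (position, normal, and UV as float32; that layout is an assumption, not from the article), the raw vertex data for those models is modest:

```python
# Back-of-envelope footprint of the SNX models' raw vertex data.
# 32 bytes/vertex is an assumed layout (3 float32 position, 3 normal, 2 UV);
# real Siemens NX viewsets will use different formats and extra buffers.
BYTES_PER_VERTEX = 32
for vertices in (7_000_000, 8_500_000):
    mb = vertices * BYTES_PER_VERTEX / 1024**2
    print(f"{vertices:,} vertices -> ~{mb:.0f} MB of raw vertex data")
```

Even the larger model is only a few hundred megabytes of raw geometry, which fits comfortably in the Fury X's 4GB, so vertex throughput looks like the more plausible explanation unless the real working set is far larger than this estimate.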
 
Why is the FE so much faster than the Fury X in 3D development software compared to Fire Strike, which is supposed to be a benchmark?

Is it a CPU bottleneck, or better-written applications?
Or is it really the driver not being suited for games and game benchmarks?



EDIT:
After reading the whole story: maybe (just a guess) Vega can switch between modes of operation, immediate-mode rendering and tile-based rendering.
For now, only the fully compatible immediate mode is used. If the tile-based renderer made a mistake, that would be a disaster for professional use.
For games, though, this would be much less of an issue and could easily be solved with a driver update; a glitch in a game matters much less if it is just a graphical artifact, except when it causes the game to crash.
I would not say that tile-based rendering is bad, but it is new for AMD, and being able to set the card up this way would be a great solution for avoiding compatibility issues.
It will be interesting when the gaming version of Vega arrives.
All of this assumes, of course, that the rumours are true and Vega does do tile-based rendering.

edit: Tile-based rendering here means use of the draw-stream binning rasterizer.
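For what the immediate vs. binned distinction means in practice, here is a minimal conceptual sketch; the tile size, function names, and two-pass structure are textbook tile binning and purely my assumptions, not how AMD's binner actually works:

```python
# Conceptual contrast: immediate-mode vs. tile-based (binned) rasterization.
# Everything here is illustrative; it says nothing about AMD's real hardware.

TILE = 32  # assumed tile size in pixels

def rasterize(tri, clip=None):
    """Stand-in for the hardware rasterizer; shading is out of scope here."""
    pass

def tiles_touched(tri, width, height):
    """Tiles overlapped by a triangle's screen-space bounding box."""
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    x0, x1 = max(min(xs), 0), min(max(xs), width - 1)
    y0, y1 = max(min(ys), 0), min(max(ys), height - 1)
    return [(tx, ty)
            for ty in range(int(y0) // TILE, int(y1) // TILE + 1)
            for tx in range(int(x0) // TILE, int(x1) // TILE + 1)]

def immediate_mode(triangles):
    # Shade each triangle as it arrives; writes can land anywhere on screen,
    # so the framebuffer working set is the whole render target.
    for tri in triangles:
        rasterize(tri)

def tile_based(triangles, width, height):
    # Pass 1: bin triangles by the tiles they touch.
    bins = {}
    for tri in triangles:
        for tile in tiles_touched(tri, width, height):
            bins.setdefault(tile, []).append(tri)
    # Pass 2: finish one tile at a time, so each tile's reads and writes can
    # stay in a small on-chip buffer instead of going out to memory.
    for tile, tris in sorted(bins.items()):
        for tri in tris:
            rasterize(tri, clip=tile)

tris = [((5, 5), (60, 10), (30, 70)), ((100, 100), (140, 110), (120, 150))]
immediate_mode(tris)
tile_based(tris, 256, 256)
```

The payoff is the locality in pass 2; the risk, as guessed above, is that binning changes behaviour in ways some applications might not tolerate, which would explain shipping with immediate mode first.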
 