96Firebird
Sucks about the lack of DP until 2018, but everything else on Vega 10 looks good. It's obviously going to be pricey, which is fine given how much the 1080 Ti and Titan X are.
How much is the 1080 Ti?
Sucks about the lack of DP until 2018, but everything else on Vega 10 looks good. It's obviously going to be pricey, which is fine given how much the 1080 Ti and Titan X are.
If so, why use HBM over the latest GDDR5X? You get all that expense, interposer complexity, memory size restrictions and (if I remember correctly) more latency, for only slightly higher bandwidth. Seems the worst of both worlds - either go GDDR5X and have something that's cheap and flexible, or do HBM properly with 4 stacks and the appropriately huge performance.
Until AMD seriously closes the performance per watt gap with NVidia, they will never have the performance lead...
According to TPU, even at launch the 780 Ti did not "trounce" the 290X, but beat it by 8%.

I cannot tolerate such distortion and revisionist history here, which completely disregards the facts. 🙄 First off, Tahiti didn't smash anything. For most of its life span, Kepler easily competed with, and mostly outperformed, Tahiti. It trounced Hawaii as well, in the form of the GTX 780 Ti.
Do you think that there is a real possibility that a ~500mm^2 Radeon won't beat a 1080? The question is by how much, and will it even be relevant as it'll probably be facing a 2080/1180 or a 1080 Ti.

Vega needs to at least match the GTX 1080 for it to even be viable. The only benchmark we've seen so far is from Doom, but Doom has the advantage of using shader intrinsic functions, which significantly increase performance on Radeons since they can use the same shaders as the consoles. So I wouldn't expect Doom to be an accurate predictor of performance for games in general.
Quoting midrange cards (1060/480) and then concluding that "Until AMD seriously closes the performance per watt gap with NVidia, they will never have the performance lead" is a bit of a stretch, no? Define "performance lead"?
No one cares if a 480 eats 100, 150, or 200 watts.
...overall performance
I'm talking about how much performance they can extract out of each watt. Ever wonder why AMD GPU designs always seem to have a lot more ALUs than comparable NVidia designs? That's a consequence of AMD GPUs having less performance per watt than NVidia.
Basically, AMD GPUs need more hardware, and consequently more wattage, to get the same amount of work done as NVidia GPUs. As long as this imbalance exists, AMD will never beat NVidia in overall performance. The rough numbers below show what I mean.
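To put rough numbers on it (public board specs only; peak TFLOPs is paper throughput, not game performance, so treat this as a sketch):

```python
# Rough perf/watt sketch from public board specs (board power and peak FP32).
# Peak TFLOPs is paper throughput, not game performance, so this only
# illustrates the ALU-count point, not real-world efficiency.
cards = {
    # name: (shader ALUs, boost clock in MHz, board power in W)
    "RX 480":   (2304, 1266, 150),
    "GTX 1060": (1280, 1708, 120),
}

for name, (alus, clock_mhz, watts) in cards.items():
    tflops = alus * 2 * clock_mhz * 1e6 / 1e12  # 2 FLOPs per ALU per clock (FMA)
    print(f"{name}: {alus} ALUs, {tflops:.1f} peak TFLOPs, "
          f"{tflops * 1000 / watts:.0f} GFLOPs/W")
```

On paper the 480 actually wins GFLOPs per watt, which is exactly the point: it takes roughly 80% more ALUs and 25% more board power to land at about the same real-world performance as the 1060, because those paper FLOPs don't all turn into frames.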
Do you think that there is a real possibility that a ~500mm^2 Radeon won't beat a 1080? The question is by how much, and will it even be relevant as it'll probably be facing a 2080/1180 or a 1080ti.
Is this another way of saying perf/watt?
I'm talking about how much performance they can extract out of each watt. Ever wonder why AMD GPU designs always seem to have a lot more ALUs than comparable NVidia designs? That's a consequence of AMD GPUs having less performance per watt than NVidia.
Basically, AMD GPUs need more hardware, and consequently more wattage, to get the same amount of work done as NVidia GPUs. As long as this imbalance exists, AMD will never beat NVidia in overall performance.
Ehmm, no.
The GTX 480 had 480 cores.
The HD 5870 had 1600 cores.
I don't believe I need to tell you that the HD 5870 had higher perf/watt than the GTX 480.
The HD 5870 had 1600 cores.

Hmm, nope. Cypress was a TeraScale VLIW5 uarch, so divide the SP count by 5 to get the number of cluster "units": Cypress = 320 units/"cores".
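To make those counts comparable, here's the normalization as a quick sketch (standard per-vendor definitions, nothing exotic):

```python
# "Cores" are counted differently per vendor, so normalize first.
# Cypress (HD 5870): 1600 SPs arranged as VLIW5 bundles.
# GF100 (GTX 480): 480 scalar CUDA cores, each independently scheduled.
cypress_sps = 1600
cypress_units = cypress_sps // 5   # one VLIW5 bundle = 5 ALU lanes -> 320
gtx480_cores = 480

print(f"Cypress: {cypress_units} VLIW5 units ({cypress_sps} ALU lanes)")
print(f"GF100:   {gtx480_cores} scalar cores")
```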
1. HBM2 has the same bandwidth as HBM1 but at half the controller width. That means you save die space on the memory controller = smaller die = cheaper to produce.
2. HBM2 has 2x the capacity per stack vs HBM1. That means it is cheaper to implement the same capacity because you need fewer memory chips: 2x 2GB for HBM2 vs 4x 1GB for HBM1.
3. Needing fewer memory chips decreases the complexity and size of the interposer = higher yields, smaller size = cheaper.
4. Compared to GDDR5X, HBM2's overall cost is not that much higher. With GDDR5X you need 384-bit memory controllers for less bandwidth than HBM2; that increases your die size and increases the graphics card PCB complexity and PCB size = higher BOM = higher cost.
Generally HBM2 is way better than GDDR5X in almost every metric.
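For the bandwidth part of point 4, the back-of-the-envelope math, assuming the commonly quoted per-pin rates of 2 Gbps for HBM2 and 10 Gbps for first-generation GDDR5X (real products vary):

```python
# Peak bandwidth (GB/s) = bus width (bits) * per-pin data rate (Gbps) / 8.
def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits * gbps_per_pin / 8

hbm2 = bandwidth_gbs(2 * 1024, 2.0)   # two 1024-bit HBM2 stacks @ 2 Gbps/pin
gddr5x = bandwidth_gbs(384, 10.0)     # 384-bit GDDR5X bus @ 10 Gbps/pin

print(f"2x HBM2 stacks: {hbm2:.0f} GB/s")    # 512 GB/s
print(f"384-bit GDDR5X: {gddr5x:.0f} GB/s")  # 480 GB/s
```

So even a modest two-stack HBM2 setup edges out a full 384-bit GDDR5X bus, with a much narrower, simpler controller on the GPU die.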
No, overall performance is just an expression or result of perf/watt. Perf/watt is what the engineers go after when designing these GPUs, because it's what determines the actual performance characteristics. Overall performance is what we consumers see in the benchmarks.
You are overanalyzing. Gamers don't really care about wattage; most have decent PSUs (I know I do, i.e. a Seasonic 850W). They look at price to performance, and the 480 is up there with the best in that range.
Also, don't forget that cooling and wattage have dropped for AMD with the 480 for obvious reasons, so is it really even a factor nowadays unless you have a very low wattage PSU? (Some would argue that if you can afford a decent 480 card, then you can afford a decent PSU for all your hardware.)
I can tell you wattage is not even on my list when I choose between an Nvidia and an AMD card (yes, I went with the XFX RX 480 GTR Black Edition, awesome card 🙂 ), and that goes for CPUs as well, regardless of AMD or Intel etc.
How much is the 1080 Ti?
Bacon1 said: According to AMD, Vega 10 is the small chip and Vega 11 is the large chip.
No, overall performance is just an expression or result of perf/watt. Perf/watt is what the engineers go after when designing these GPUs, because it's what determines the actual performance characteristics. Overall performance is what we consumers see in the benchmarks.
I know gamers don't care about wattage. Neither do I, for that matter, as you can see by looking at my sig. However, the fact remains that performance per watt is what determines the final performance of a GPU. All you need to do is look at Kepler, Maxwell, and especially Pascal. NVidia is able to get much more performance out of each watt than AMD because their hardware is more efficient. For AMD to match, much less beat, NVidia, they have to make GPUs that are much bigger, hotter, and more power hungry than the competition.
Now generally speaking, gamers don't care about wattage, especially if said GPUs are A LOT faster than the competition. But what happens when said GPUs consume significantly more power, but only match or are just barely faster than the competition?
Then it's like the HD 5870 vs the GTX 480 all over again.
So exactly like VideoCardz predicted at the time this thread started.
- Vega 11 replacing Polaris 10/11 (professional market) - makes me wonder if this is some sort of desktop version of Scorpio's iGPU
- High-end Vega 10 comes first (H1-2017): 12 TFLOPs vs Fiji's 8.6 TFLOPs (+40%)
- Vega 10 has the exact same number of stream processors as their 2015 flagship, so they need higher clocks and architecture improvements to move performance up (quick math after this list)
- Vega 20 looks like a 7nm shrink of Vega 10 with the same number of CUs again (64 CUs); it will have to face Volta-based products in 2018
- Navi in 2019: Navi 10 positioned as the faster GPU, Navi 11 replacing Vega 11
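On that stream-processor point: assuming the rumored 4096 SPs (64 CUs, same as Fiji), the clock needed for 12 TFLOPs falls straight out of the peak-FP32 formula; quick sketch below:

```python
# Peak FP32 FLOPs = SPs * 2 (FMA) * clock.
sps = 4096                 # rumored Vega 10 count, same as Fiji
fiji_clock_mhz = 1050      # Fury X reference clock
target_tflops = 12.0       # leaked Vega 10 figure

fiji_tflops = sps * 2 * fiji_clock_mhz * 1e6 / 1e12
vega_clock_mhz = target_tflops * 1e12 / (sps * 2 * 1e6)

print(f"Fiji @ {fiji_clock_mhz} MHz: {fiji_tflops:.1f} TFLOPs")               # ~8.6
print(f"Vega 10 needs ~{vega_clock_mhz:.0f} MHz for {target_tflops} TFLOPs")  # ~1465
```

So the leak implies roughly a 40% clock bump over Fiji, which lines up with the +40% TFLOPs figure.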
Wrong. 🙁
Makes no sense whatsoever. Of course you can design a higher performance architecture even with a perf/watt penalty. It just means your final design will consume more power in order to beat the competition in performance.
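In equation form: performance = (perf/watt) × watts, so a perf/watt deficit can indeed be bought back with extra power, but only up to what a single-GPU board can realistically feed and cool (roughly 250-300 W). A toy example, with made-up numbers purely for illustration:

```python
# performance = (perf/watt) * watts, but watts is capped by what a
# single-GPU board can realistically feed and cool (~250-300 W).
# All numbers below are made up purely to illustrate the trade-off.
POWER_CAP_W = 300

nvidia_perf_per_watt = 1.00   # normalized baseline
amd_perf_per_watt = 0.80      # hypothetical 20% deficit

nvidia_max = nvidia_perf_per_watt * POWER_CAP_W
amd_max = amd_perf_per_watt * POWER_CAP_W
amd_watts_to_match = nvidia_max / amd_perf_per_watt   # 375 W -> over the cap

print(f"At {POWER_CAP_W} W: NVidia {nvidia_max:.0f} vs AMD {amd_max:.0f}")
print(f"AMD needs {amd_watts_to_match:.0f} W to match -> exceeds the cap")
```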
Certainly not less than $600, right?
Yet they didn't really succeed. Even with the HD 4xxx, HD 5xxx, and HD 6xxx series, when they were on equal footing, they didn't succeed. Why? Because Nvidia sold more GPUs even with worse performance, price, and performance per watt. They had three of a kind. Even now, when AMD is in a bad state and strapped for cash, they don't have the "dark triad" in the GPU market: worse performance, performance per watt, and performance per dollar. Making money on bad chips while your opponent doesn't will result in the current situation, where one player in the market just doesn't have enough money for R&D to compete.

Well, you had to go way back to prove your point 😉
And I don't believe I need to tell you that the HD5870 was AMD's last GPU that truly gave NVidia a run for its money. Meaning, when AMD prioritizes perf/watt, they are more likely to succeed.
But what happens when said GPUs consume significantly more power, but only match or are just barely faster than the competition?