Krteq
Senior member
It's more about the feature set than anything else. Vega 10 doesn't have DP nor any kind of DL accelerator.
It's more about the feature set than anything else. Vega 10 doesn't have DP nor any kind of DL accelerator.
Now I have to start saving for the upcoming GV104 part that's just around the corner. Crazy how fast the PC GPU market is moving! I haven't even had my 1080 for a year and it's already outdated not only by a higher tier Pascal card (that I was expecting) but by a whole new architecture and process node (this release date took me by surprise).
Vega 10 has FP64 support at a 1:16 rate.
Vega 10 doesn't have DP nor any kind of DL accelerator
Vega 10 supports DPFP at a 1:16 rate.
You know what I mean. GV100 is 1:2.
No. People don't know what you mean. We can't read your mind. People only know what you write. Just admit you were wrong and move on.
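For reference, here is what the two DP rates being argued about work out to in absolute terms. The 12.5 SP TFLOPS figure for Vega 10 is the one quoted later in this thread, and ~15 SP TFLOPS is NVIDIA's announced GV100 number; the helper itself is just a sketch of the arithmetic.

```python
# Peak FP64 throughput implied by an FP64:FP32 rate ratio.
# Input SP TFLOPS figures are the ones discussed in the thread,
# not measured values.

def dp_tflops(sp_tflops: float, fp64_ratio: float) -> float:
    """Peak FP64 TFLOPS given peak FP32 TFLOPS and the FP64:FP32 ratio."""
    return sp_tflops * fp64_ratio

vega10 = dp_tflops(12.5, 1 / 16)  # Vega 10 at a 1:16 rate
gv100 = dp_tflops(15.0, 1 / 2)    # GV100 at a 1:2 rate
print(f"Vega 10: ~{vega10:.2f} DP TFLOPS")  # ~0.78
print(f"GV100:  ~{gv100:.2f} DP TFLOPS")    # ~7.50
```

So the gap under discussion is roughly an order of magnitude, which is why the 1:16 vs 1:2 distinction matters for compute workloads even if it is irrelevant for gaming.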
Vega 10 supports DPFP at a 1:16 rate.
Vega gets 12.5 TFLOPS @ 300 W on a significantly smaller GPU (smaller dies are harder to cool), so what's with all the doom and gloom? NV's biggest advantages in gaming have nothing to do with ALU throughput. For a long time (since Fermi?) they had a front-end/geometry-setup advantage, which got fixed with Polaris, and since Maxwell they have had a very big rasterizer advantage.
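The 12.5 TFLOPS figure above follows from the usual peak-FP32 formula (shaders × 2 FLOPs per FMA × clock). The 4096-shader count and ~1.53 GHz clock below are assumptions chosen to reproduce that number, not confirmed specs.

```python
# Peak FP32 throughput: one FMA (2 FLOPs) per shader per cycle.
# 4096 shaders and 1.526 GHz are assumed values that reproduce the
# 12.5 TFLOPS figure quoted in the thread.

def sp_tflops(shaders: int, clock_ghz: float) -> float:
    """Peak FP32 TFLOPS from shader count and core clock in GHz."""
    return shaders * 2 * clock_ghz / 1000

print(f"{sp_tflops(4096, 1.526):.1f} TFLOPS")  # -> 12.5
```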
Zen and Vega are the first major CPU and GPU design efforts by AMD in half a decade. AMD has proved with Zen that they can design a power-efficient and competitive CPU core, so there is no reason to doubt that they can design a power-efficient GPU architecture. Anyway, the goal for AMD is to close the efficiency gap with Zen/Vega against Intel and Nvidia, and to keep closing it. AMD does not need the performance crown; they need power- and area-efficient architectures (perf/watt and perf/sq mm). Most importantly, they require consistently good execution.

With competitive CPU and GPU architectures, AMD has the unique capability of providing the world's best x86 APUs - something which neither Intel nor Nvidia can do. There is one key missing technology which needs some time to become available in volume and at good yield/cost - HBM. HBM2 is just starting to become available, but HBM3 and low-cost HBM might be the tipping point when AMD's Fusion concept truly delivers on its potential. HPC server APUs, game console APUs, and notebook/desktop APUs are all going to need massive bandwidth at 7nm, as it's a massive process node shrink of almost 60%, and extremely powerful GPUs are going to be packed into all these chips. Truly AMD has a bright future. They just need to keep building on the strong foundations they are laying today.
I don't doubt AMD's capability to design and build power efficient architectures, whether CPU or GPU. But I do doubt their ability to match NVidia in that category and harken back to the days when they traded blows and were on much more even footing. I really want Vega to be something special and blow the doors off of everyone's expectations, but the fact that we are this close to the launch and AMD still seems so hesitant to share performance estimates makes me wary. Although I am partial to NVidia, I understand that a strong AMD means a stronger NVidia as well, and is beneficial to the consumer.
I also think it will have 2560 CUDA cores, which already brings it to the level of the GTX 1080, and I'm expecting a 30 percent additional improvement from the architectural changes, which would enable it to match the GTX 1080 Ti. This has been the trend in the past - 780 Ti vs 970, 980 Ti vs 1070.
I think that the 2070 will get the same core count as the 1080, but with a modest improvement in clocks, faster GDDR6 and architectural improvements it should catch up with the 1080Ti. Kinda like what happened with GTX 680->GTX 770.

Nvidia's goal with the GTX 2070 would be to match the GTX 1080 Ti, just like they did with the GTX 970 vs the GTX 780 Ti. CUDA core count would be 3200 (50 SM) or 3328 (52 SM). I don't expect a Kepler->Maxwell-like jump in perf/core. I would say even if they get 10% better perf/core they will have done a fantastic job, given the power efficiency gains.
I think that the 2070 will get the same core count as the 1080, but with a modest improvement in clocks, faster GDDR6 and architectural improvements it should catch up with the 1080Ti. Kinda like what happened with GTX 680->GTX 770.
By that I mean that the core config would be the same as the GTX 1080, just like the 680 and 770. Besides, the x70 part usually has half the number of cores of the largest fully enabled part.

Yes, this is likely.
Although I believe the 770 was just a rebrand of the 680.
I wonder what the 2080 Ti will be like. Bring it. It's probably 18-24 months out at this point, at least.
It appears that both "brand cheerleading camps" are getting way ahead of ... everything.
The hype on both brands is enormous. Cool it down guys. Interesting times ahead.
about which we don't know anything
What hype? Volta exists. It can be ordered today.

Let's revisit this when you can actually get a consumer Volta-based GPU.
I wonder what the 2080 Ti will be like. Bring it. It's probably 18-24 months out at this point, at least.
What hype? Volta exists. It can be ordered today.

Then I must have missed any gaming benchmarks of Volta-based GPUs. Can you hand me any one of them?
Where's Vega?
I think that the 2070 will get the same core count as the 1080, but with a modest improvement in clocks, faster GDDR6 and architectural improvements it should catch up with the 1080Ti. Kinda like what happened with GTX 680->GTX 770.

I expect Volta will have better delta color compression, so the 2070 won't need GDDR6.
I expect Volta will have better delta color compression, so the 2070 won't need GDDR6.

I don't think GDDR5X will be used any longer. By early 2018, Hynix, Micron and Samsung should have GDDR6 available in plentiful supply.
2080
3584 SP
256-bit 12 GB GDDR6 @ 14-16 GHz
10-20% faster than the 1080 Ti

2070
2560 SP
256-bit 12 GB GDDR5X @ 11-12 GHz
Will run maxed out with low OC headroom; ~10% slower than the 1080 Ti

Edit:
2060
1792 SP
192-bit, 9 GHz GDDR5 or 10 GHz GDDR5X
~10% faster than the 1070
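The speculated bus widths and memory speeds above imply the following peak bandwidths (bus width in bytes × effective data rate). These inputs are the thread's guesses, not confirmed specs.

```python
# Peak memory bandwidth from bus width (bits) and effective data
# rate (Gbps). Inputs below are the thread's speculated configs.

def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    """Peak bandwidth in GB/s."""
    return bus_bits / 8 * gbps

print(bandwidth_gbs(256, 14))  # "2080", GDDR6 @ 14 Gbps -> 448.0 GB/s
print(bandwidth_gbs(256, 11))  # "2070", GDDR5X @ 11 Gbps -> 352.0 GB/s
print(bandwidth_gbs(192, 9))   # "2060", GDDR5 @ 9 Gbps -> 216.0 GB/s
```

For comparison, that would put the speculated "2080" slightly below the 1080 Ti's 484 GB/s despite the narrower bus, thanks to the faster GDDR6.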