> RTX 3080: 110 fps / 320 W
> RTX 2080: 60 fps / 240 W
> [image]

So it's measuring how much the faster card consumes when locked to the slower card's performance.

> So it's measuring how much the faster card consumes when locked to the slower card's performance.

If somebody's never seen a misleading marketing figure, let's just show them this picture.

> They quote SHADER FLOPS.
> Not FP32. It's similar to their claim for the A100 chip that FP16, enhanced by all that GEMM stuff, performs "the same" as native FP32.
> In reality, native FP32 will be exactly that: native FP32.

No, NVIDIA announced exactly that: FP32.

Nvidia itself is quoting the 36 TFLOP FP32 number, so that would seem to be the case. I'm actually quite surprised, but that is a lot of compute horsepower there if there are no gotchas. 29.8 TF FP32 for $700 is a crazy value compared to 13.4 TF @ $1200 for the 2080 Ti or 13.8 TF @ $700 for the Radeon VII.
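
For anyone who wants to sanity-check that value comparison, here's a quick sketch using only the TFLOPS and launch prices quoted in the post above; these are paper specs, not measured throughput.

```python
# Paper-spec value comparison using the TFLOPS and launch prices quoted in the
# post above. Marketing numbers, not measured throughput.
cards = {
    "RTX 3080":    (29.8, 700),   # (claimed FP32 TFLOPS, launch price in USD)
    "RTX 2080 Ti": (13.4, 1200),
    "Radeon VII":  (13.8, 700),
}

for name, (tflops, price) in cards.items():
    print(f"{name:12s} {tflops / price * 1000:5.1f} GFLOPS per dollar")

# RTX 3080      42.6 GFLOPS per dollar
# RTX 2080 Ti   11.2 GFLOPS per dollar
# Radeon VII    19.7 GFLOPS per dollar
```

Even on paper-spec terms the 3080 comes out roughly 2-4x ahead of the comparison cards; whether real workloads see anything like that is another question.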

> They quote SHADER FLOPS.
> Not FP32. It's similar to their claim for the A100 chip that FP16, enhanced by all that GEMM stuff, performs "the same" as native FP32.
> In reality, native FP32 will be exactly that: native FP32.

Yeah, I agree with Glo here. They used the term "Shader-FLOPS", not "FP32 FLOPS". The INT cores actually do single-precision math (i.e. 32-bit), just not floating point specifically. Going off the SM diagram for A100, they don't list the INT cores as being capable of FP math either, so my guess is that either Nvidia tuned Ampere for graphics so that the pipelines for an SM are 2x 16-wide FP or 16-wide INT + 16-wide FP, or they are just listing Shader-FLOPS as a catch-all term for all the concurrent single-precision math the entire GPU can do, INT and FP included. My money is on the latter.
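
To put rough numbers on those two readings, here's a sketch of peak throughput under each assumption. The 68 SMs, 4 partitions per SM, 16-wide datapaths, and ~1.71 GHz boost clock are the commonly reported RTX 3080 figures, not something stated in this thread.

```python
# Peak "Shader-FLOPS" under the two SM configurations guessed at above,
# assuming an RTX 3080 with 68 SMs, 4 partitions per SM, 16-wide datapaths,
# and a ~1.71 GHz boost clock (commonly reported figures, not from this thread).
SMS = 68
PARTITIONS_PER_SM = 4
LANES_PER_DATAPATH = 16
BOOST_CLOCK_HZ = 1.71e9
FLOPS_PER_LANE_PER_CLOCK = 2   # one fused multiply-add counts as 2 ops

def peak_tflops(fp32_datapaths_per_partition: int) -> float:
    lanes = SMS * PARTITIONS_PER_SM * LANES_PER_DATAPATH * fp32_datapaths_per_partition
    return lanes * FLOPS_PER_LANE_PER_CLOCK * BOOST_CLOCK_HZ / 1e12

print(f"{peak_tflops(2):.1f} TFLOPS")  # ~29.8: both datapaths counted as FP32
print(f"{peak_tflops(1):.1f} TFLOPS")  # ~14.9: one FP32 + one INT32 (Turing-style)
```

Notably, the 29.8 TF figure quoted earlier in the thread only falls out of the both-datapaths case, which is presumably what the Shader-FLOPS number is counting.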

> No, NVIDIA announced exactly that: FP32.

Just like A100 has 16384 ALUs and 40 TFLOPs of native FP32, eh?

> No, NVIDIA announced exactly that: FP32.

That's a nice but double-edged marketing gimmick. Nice because it's a huge number. Double-edged because suddenly their performance per TFLOP just dropped massively. You'll never hear anyone from NVIDIA talking about performance per TFLOP again.

> So it's measuring how much the faster card consumes when locked to the slower card's performance.
> If somebody's never seen a misleading marketing figure, let's just show them this picture.

This is almost exactly what AMD did with their Polaris claims of 2x perf/watt. They took carefully chosen card models from both generations, locked them at 60 fps in one game, and used that to claim 2x perf/W. AMD was rightfully criticized for that and we should do the same for Nvidia here; the real perf/W gain will be nowhere near 90%.
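
A back-of-the-envelope with the fps/W numbers quoted at the top of the thread shows how much the framing matters. The power draw of a frame-capped 3080 isn't public, so the capped figure below is just a hypothetical plug number, the draw that would be needed to reproduce a ~1.9x claim.

```python
# Perf/W computed two ways from the figures quoted at the top of the thread.
fps_3080, watts_3080 = 110, 320   # uncapped
fps_2080, watts_2080 = 60, 240

uncapped_gain = (fps_3080 / watts_3080) / (fps_2080 / watts_2080)
print(f"Uncapped perf/W gain: {uncapped_gain:.2f}x")          # ~1.38x

# Capping both cards at 60 fps pushes the 3080 far down its voltage/frequency
# curve, so its power drops disproportionately. The figure below is hypothetical:
# it is simply the draw the 3080 would need at 60 fps to yield a 1.9x claim.
capped_watts_3080 = 240 / 1.9
capped_gain = (60 / capped_watts_3080) / (fps_2080 / watts_2080)
print(f"Capped-at-60-fps perf/W gain: {capped_gain:.2f}x")    # 1.90x
```

Whether the shipping card lands nearer 1.4x or 1.9x is exactly what independent reviews will have to show.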

> It's 2x FP32 THROUGHPUT.
> It's the same marketing gibberish they used for the A100 chips.
> In reality, when actual native FP32 performance was tested, it was the same as V100.

We'll have to wait for testing to see. As I said previously, that kind of increase in FP32 performance would be really surprising and seems unlikely.

> JC! $700 for a mid-tier card?

3080 isn't mid-tier, it's high end. It actually should be significantly closer to the 3090 than the 2080 was to the 2080 Ti. The 3070 at $500 seems a bit much though in comparison.

> 3080 isn't mid-tier, it's high end. It actually should be significantly closer to the 3090 than the 2080 was to the 2080 Ti. The 3070 at $500 seems a bit much though in comparison.

Ok, high-end and not close to the highest end... whatever terminology is used, it's way overpriced IMO. I guess we will see the end result, but this year is not the year to release with these $$$ numbers attached.

> 3080 isn't mid-tier, it's high end. [...]

Right, which makes its 10 GB frame buffer that much harder to accept. But with, say, 20 GB at $850 it would really make the 3090 look outlandish ¯\_(ツ)_/¯

> Ok, high-end and not close to the highest end... whatever terminology is used, it's way overpriced IMO. I guess we will see the end result, but this year is not the year to release with these $$$ numbers attached.

$700 actually seems quite reasonable to me given the performance. The 3090 looks absurd, though: I believe they are based on the same die, and I don't see it being a whole lot faster than a 3080, so its price is somewhat bewildering. The 10 GB of VRAM on the 3080 is a bit disappointing, but spending $800 more to get a decent-at-best performance bump and more RAM is pretty intense. I'm guessing Nvidia has other models in the works but is waiting for the new Radeon cards to drop to see how they want to price everything.

> $700 actually seems quite reasonable to me given the performance. [...]

I'm hoping they show a good demo of this "RTX IO" thing before the 3080 is available for purchase. This new decompression tech combined with a gen4 NVMe drive may make 10 GB of VRAM more than enough.

> Just like A100 has 16384 ALUs and 40 TFLOPs of native FP32, eh?

A100 was never marketed as having that number of ALUs; it was 8192.

> How many 1080 Ti owners are going to see that buffer downgrade and take a pause on the 3080? I would, but I already sold my 1080 Ti.

I don't see how 1 GB of RAM could make people refuse to upgrade. On lower-end cards it can make a big difference, but the difference between 10 and 11 GB is inconsequential when you are talking about a GPU that is overall significantly faster.

> I don't see how 1 GB of RAM could make people refuse to upgrade. [...]

That's rational.

> I don't see how 1 GB of RAM could make people refuse to upgrade. [...]

I agree. 10 GB vs 11 GB is no different.

> I agree. 10 GB vs 11 GB is no different.

If you need that 1 GB, you'd better buy a 3090 instead of a 3080.

> [...] The 3070 at $500 seems a bit much though in comparison.

Yeah, the 3070 doesn't seem like it will be a value card at all. The 3080 has 50% more shaders and memory bandwidth than the 3070 for 40% more money. Compare that to the 1080/1070, where the 1080 was 33% more shaders for a 55% price increase.
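
For a rough sense of scale, here are shaders per dollar for those pairs. The shader counts and MSRPs are the commonly reported launch figures, not something stated in this thread, and Founders Edition pricing would shift the Pascal numbers a bit.

```python
# Shaders per dollar for the pairs discussed above, using commonly reported
# shader counts and launch MSRPs (assumptions, not figures from this thread).
cards = {
    "RTX 3070": (5888, 500),
    "RTX 3080": (8704, 700),
    "GTX 1070": (1920, 379),   # MSRP; Founders Edition was $449
    "GTX 1080": (2560, 599),   # MSRP; Founders Edition was $699
}

for name, (shaders, price) in cards.items():
    print(f"{name}: {shaders / price:.1f} shaders per dollar")

# RTX 3070: 11.8   RTX 3080: 12.4  -> the bigger Ampere card is the better paper value
# GTX 1070:  5.1   GTX 1080:  4.3  -> with Pascal it was the other way around
```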

> Well, at least the pricing is good. I imagine that Samsung gave Nvidia a really good deal on 8nm wafers, given how they screwed up on 7nm EUV.
> Looking forward to actual reviews and deep dives.

Nvidia are no doubt getting a good deal.

> Well, at least the pricing is good. I imagine that Samsung gave Nvidia a really good deal on 8nm wafers, given how they screwed up on 7nm EUV.
> Looking forward to actual reviews and deep dives.

It was quite disappointing that it was not 7LPP. Ampere could have stretched its legs.