Have we even had the much-promised 960 review yet? I wouldn't hold my breath for AT to review any GPUs day 1... or week 1; it's not a new iPhone, so it's not going to generate enough clicks.

On a side note, why are AnandTech reviews always late? Are they struggling financially?
Before Anand sold, it used to feel much more alive, responsive and meaningful.
Quote: "Have we even had the much-promised 960 review yet? I wouldn't hold my breath for AT to review any GPUs day 1... or week 1; it's not a new iPhone, so it's not going to generate enough clicks."

I'm not sure if it's the same person, but in another thread someone posted that one of the AT editors, who lives in Oregon, was affected by the fires. I think that's a good reason for any delay if they were the person in question.
Quote: "Hmm, the Whitepaper says the 3070 has 96 ROPs, same as 3080."

An attempt to make up for other deficits?
Seeing benchmarks today convinced me my money was better spent on Zen 3 before an RTX 3000-series upgrade, since I game at 1440p. Which is fine with me, as that gives time for AIB partners to release boards with standard PCIe plugs and better cooling solutions for about the same price. Honestly, I'd like to see a 20GB RTX 3080 released. That would be enough VRAM and performance at 1440p for years to come.
It's going to be fun to see when Zen 3 is released next month.
Hardly. Looking just at compute: the RTX 3080 needs to run its INT instructions on cores which are also full-fledged FP32 cores. Per your claim, out of 136 instructions, 100 of them are FP, or roughly 73%.

Therefore, in a typical workload, you would expect an Ampere design with the same theoretical teraflops as a Turing design to underperform by about 27%, because on Turing the INT workload runs on dedicated INT32 pipes and doesn't impact FP throughput. I have no idea where your 5-7% comes from.

This is still a good trade-off for nVidia, because the design allows for a huge uptick in theoretical FLOPs via the increased FP32 shader count; an Ampere design with the same number of FP32 shaders as a Turing design would be a much lower-end part.
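To make that arithmetic concrete, here's a minimal sketch of the claim, using the 100 FP / 36 INT instruction mix quoted above (the mix is the previous poster's figure, not a measurement):

```python
# Minimal throughput sketch for the 100 FP / 36 INT mix quoted above.
fp_ops, int_ops = 100, 36
total_ops = fp_ops + int_ops

# Turing: INT32 runs on dedicated pipes, so FP32 throughput is unaffected.
turing_effective_fp = 1.0

# Ampere (per the argument above): INT ops occupy FP32-capable cores,
# stealing cycles from FP work at equal theoretical TFLOPs.
ampere_effective_fp = fp_ops / total_ops  # ~0.735

shortfall = turing_effective_fp - ampere_effective_fp
print(f"Expected Ampere shortfall at equal TFLOPs: {shortfall:.0%}")  # ~26-27%
```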
Quote: "Zen 3 isn't going to relieve the bottlenecks at low resolution with the new gen of GPUs. It is what it is."

But are they truly a bottleneck? Other than a few exceptions (professional e-sports, maybe), fps very quickly becomes irrelevant above two digits. And the pros play with graphics turned way, way down, so GPU power is completely irrelevant to them.
Quote: "You know this how?"

Easy: because it won't bring enough of an IPC increase to do so. 1080p is hard-bottlenecked at this time; the 10900K has a 10% lead over current Ryzen in gaming at 1080p. Games are getting more complex, and 4XXX isn't a revolution so much as an evolution. It's going to take a revolution for CPUs to get rid of low-resolution bottlenecks, and neither Intel nor AMD is there yet. JMHO.
Quote: "Hmm, the Whitepaper says the 3070 has 96 ROPs, same as 3080."

This is very interesting. It's the first x104 GPU with 96 ROPs; they've increased the ROP count for the first time since Maxwell (Maxwell/GTX 980 was the first with 64 ROPs). But I didn't find anything about a new delta color compression scheme, and without one I don't see the 3070 beating the 2080 Ti while having the same memory bandwidth as the 2070/Super.
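For reference, the bandwidth arithmetic behind that comparison (bus widths and data rates are the published specs for these cards):

```python
# bandwidth (GB/s) = bus width (bits) / 8 * data rate (Gbps per pin)
def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

print(bandwidth_gbs(256, 14))  # RTX 3070 and 2070 Super: 448.0 GB/s
print(bandwidth_gbs(352, 14))  # RTX 2080 Ti: 616.0 GB/s
```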
Quote: "Now that the reviews show total system usage of 523W under load for the 3080, does this mean my 550W EVGA B3 won't be able to handle it? Keep in mind my CPU is a 3600 with an MSI B450 Carbon board, not the power hog they use in the review. I admit I don't fully understand how power supplies work, but I've seen a lot of reviews where the 3950X draws 140+ W, compared to my 65W 3600. Is it as simple as saying my total system draw with a 3080 would be closer to 445W? I think I would be just fine at that wattage (81% load). I really don't want to have to get a new PSU. Worst-case scenario, I throw in a 3080 and it just shuts down the PC (at which point I know I need a bigger PSU)? EDIT: scratch that. I don't believe I have 2x 8-pin connectors anyway, so I guess I would need a new one!"

It depends on whether your power supply delivers that 550W on the rails the card needs. EVGA is pretty good, so it's possible the unit can push 400-450W on the 12V rail (maybe even higher), but you really need to look at the specs of your exact PSU and how it distributes power across the rails. For example, if your CPU/RAM/motherboard need 100W on the 3.3V and 5V rails, the 12V rail may be limited to 400W or 350W, and the total 550W rating is only attainable when nearly all of the load is on the 12V rail.

I actually forget what power supply I have in my current system, but I think it is either 850W or 1000W, as I had originally thought I was going to run SLI. SLI was on its way out in terms of driver/game support, though, so I never bothered.

Quote: "It depends on whether your power supply delivers that 550W on the rails the card needs. [...]"

Good info, thanks. Does this tell me anything? This is my power supply:

Total: 550W @ +40°C

Quote: "Good info, thanks. Does this tell me anything?"

A problem with the 550 B3 is that it appears to have only one PCIe power connector on the PSU side, with a dual-connector cable, and Nvidia's recommendation is to use two separate PCIe cables.
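For what it's worth, a rough budget sketch of the numbers in this exchange (the 60W figure for the rest of the system is an assumption, not a measurement):

```python
# Rough system power budget for the 550W B3 scenario discussed above.
psu_rating_w = 550
gpu_w = 320   # RTX 3080 rated board power
cpu_w = 65    # Ryzen 5 3600 TDP
rest_w = 60   # assumed: motherboard, RAM, drives, fans

total_w = gpu_w + cpu_w + rest_w
print(f"Estimated draw: {total_w}W ({total_w / psu_rating_w:.0%} of rating)")
# ~445W (~81%) -- but transient spikes and per-rail limits can still
# trip a unit's protection well below its headline rating.
```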
Quote: "Food for thought: none of these reviews stated what the ambient temperature of their office or testing area was. My computer's thermals as a whole differ heavily between 68-75°F, and get even wilder above that, all the way through 84°F."

Hardware Unboxed said the room was 21°C.
Looking at the numbers from ComputerBase, I got curious about how Ampere fits in a historical context, especially with all the discussion about whether the 3080 should be considered part of the XX80 class of cards or the XX80 Ti class.

We need to look at the improvement in perf/$ to see how the value proposition changes between generations; this approach should also be fair in that it takes die size out of the equation.
Here's what Techspot got for $/frame @ 1440p and 4K:
And here's what ComputerBase tabulated for the performance gains and cost differences between generations. I've taken the liberty of converting the launch prices to USD instead of Euros. I will definitely add that MSRPs do NOT reflect market or street prices: Moore's Law is Dead mentioned in his last video that Nvidia intentionally took a profit-margin hit on their launch MSRP just to make these types of comparisons look super favorable.
Non-Ti Models | Launch Price (USD) | Performance Gain vs. Predecessor @ 2160p (4K)
GTX 780 | $649 | Baseline
GTX 980 | $549 (-15%) | +26%
GTX 1080 | $599 MSRP (+9%), $699 FE (+27%) | +64%
RTX 2080 | $699 MSRP (+17%), $799 FE (+14%) | +40%
RTX 3080 | $699 MSRP (+0%), FE (-14%) | +65%

Ti Models | Launch Price (USD) | Performance Gain vs. Predecessor @ 2160p (4K)
GTX 780 Ti | $699 | Baseline
GTX 980 Ti | $649 (-7%) | +50%
GTX 1080 Ti | $699 (+8%) | +75%
RTX 2080 Ti | $999 MSRP (+43%), $1199 FE (+72%) | +35%
RTX 3080 Ti / 3090 | $1499 MSRP (+50%), FE (+25%) | +50%(?)
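A small sketch of the perf/$ comparison this table supports, using the MSRP figures above (street prices, as noted, will differ):

```python
def perf_per_dollar_change(perf_gain_pct: float, price_change_pct: float) -> float:
    """Generational change in performance per dollar, as a fraction."""
    return (1 + perf_gain_pct / 100) / (1 + price_change_pct / 100) - 1

# RTX 3080 vs RTX 2080: +65% performance at the same $699 MSRP
print(f"{perf_per_dollar_change(65, 0):+.0%}")   # +65% perf/$

# RTX 2080 vs GTX 1080: +40% performance at +17% MSRP
print(f"{perf_per_dollar_change(40, 17):+.0%}")  # +20% perf/$
```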