Nvidia RTX 2080 Ti, 2080, 2070 information thread. Reviews and prices September 14.


tamz_msc

Diamond Member
Jan 5, 2017
AdoredTV was probably wrong about this. He thought the 2080 Ti and 2080 would be one die (we already know they are not), and the 2060 and 2070 another. Instead, it seems to be the usual arrangement: TU102 for Quadro, Titan and the RTX 2080 Ti; TU104 for the RTX 2080 and RTX 2070; and TU106 for variants of the GTX 2060.
TU104 is also what powers the Quadro RTX 5000, except it's fully enabled on that card.
 

Cableman

Member
Dec 6, 2017
I currently have a GTX 1070 and I'll wait for the 7nm parts next year (or in 2020) to upgrade. I'm waiting for more 4K 144 Hz monitors (at reasonable prices) before considering a GPU upgrade; the 1070 is still plenty for my current needs.
 

vailr

Diamond Member
Oct 9, 1999
Hmm, currently running a Seasonic 520W Fanless 80 Plus Platinum: http://www2.seasonic.com/product/platinum-520/

Wonder whether this will be man enough to run both a 5.1 GHz 7700K (with an NZXT X62) and a 2080 Ti.
No, 520W would not be adequate. There are online PSU requirement estimators for current cards.
At least a 600W PSU is required for a 1080 Ti plus a non-overclocked CPU, and the 2080 Ti will require even more power. So a quality-brand PSU of ~700W would probably be about right.
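
For a very rough sense of sizing, here is a back-of-envelope sketch. The wattage figures are assumptions (a rumored ~285W board power for the 2080 Ti, ~150W for a heavily overclocked 7700K), not confirmed specs:

```python
# Rough PSU sizing sketch -- wattage figures are assumptions (rumored ~285 W
# board power for a 2080 Ti, ~150 W for a 5.1 GHz 7700K), not confirmed specs.

def recommended_psu_watts(gpu_w, cpu_w, other_w=75, headroom=1.3):
    """Sum the major loads, then add ~30% headroom so the PSU runs well
    below its rated limit (better efficiency, less stress on a fanless unit)."""
    peak_draw = gpu_w + cpu_w + other_w   # worst-case system draw
    return peak_draw * headroom

print(f"Suggested PSU rating: ~{recommended_psu_watts(gpu_w=285, cpu_w=150):.0f} W")
# -> ~660 W, which rounds up to the ~700 W quality unit suggested above.
```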

Side note to OP: if you could adjust the font on your oversized cut-and-paste posts, that would be great.
What you can do is copy the text into Notepad, then re-copy it from Notepad and paste it into the forum.
That strips the formatting and normalizes the font size.
 

gdansk

Platinum Member
Feb 8, 2011
How much faster per clock must the new Turing be for RTX 2080 to be faster than the GTX 1080 Ti?

Anyway, my estimates on relative performance (at 3840x2160):
GTX 1080: 100%
GTX 1080 Ti: 135%
RTX 2080: 140%
RTX 2080 Ti: 180%

Let's see how wrong I am on Monday. Well, I'll have to wait for reviews but Nvidia usually comes up with some estimates too.
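
One way to put numbers on the per-clock question, as a sketch only: take shader count × boost clock as a naive throughput proxy (it ignores memory bandwidth and real-world scaling), using the known 1080 Ti figures and the rumored 2080 ones (2944 shaders, ~1800 MHz FE boost), which are assumptions until Monday:

```python
# Naive proxy: performance ~ shader count x boost clock. The RTX 2080 figures
# are the rumored pre-launch specs (assumptions); the 1080 Ti figures are known.

cards = {
    "GTX 1080 Ti": {"shaders": 3584, "boost_mhz": 1582},
    "RTX 2080":    {"shaders": 2944, "boost_mhz": 1800},  # rumored FE boost
}

def throughput(card):
    # Ignores memory bandwidth, caches and scaling limits entirely.
    return card["shaders"] * card["boost_mhz"]

needed = throughput(cards["GTX 1080 Ti"]) / throughput(cards["RTX 2080"])
print(f"Per-clock uplift needed just to match the 1080 Ti: {needed - 1:.1%}")     # ~7%
print(f"Needed to be ~4% faster (140% vs 135% above): {needed * 1.04 - 1:.1%}")   # ~11%
```

So on those assumed specs, roughly a high-single-digit per-clock gain just to tie the 1080 Ti, and low double digits to open the gap estimated above.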
 

tviceman

Diamond Member
Mar 25, 2008
How much faster per clock must the new Turing be for RTX 2080 to be faster than the GTX 1080 Ti?

Anyway, my estimates on relative performance (at 3840x2160):
GTX 1080: 100%
GTX 1080 Ti: 135%
RTX 2080: 140%
RTX 2080 Ti: 180%

Let's see how wrong I am on Monday. Well, I'll have to wait for reviews but Nvidia usually comes up with some estimates too.

I think those guesses are pretty spot on, which may (or may not) explain the rumored TDP bump, which in and of itself may (or may not) be true. I think the 2080 will be slightly faster than the 1080 Ti, the 2070 will be about 10% faster than a 1080, and, going by the current rumors, the 2060 will be half of the 2080 in the way the 960 was half of the 980, so it would be about the speed of what a 1070 is now (rough scaling check below).

Leaving that big a gap between the 2070 and 2060 begs for a third TU104 consumer SKU (an RTX 2060 Ti), but I doubt there would be enough dies to fill that spot given that TU104 is already going into one or two Quadro parts and two RTX parts.
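
A toy check of the half-a-2080 idea, assuming (naively) that performance scales linearly with shader count at similar clocks and reusing the ~140% estimate quoted above; nothing here is confirmed:

```python
# Toy scaling check: assume the rumored 2060 has ~half the 2080's shaders and
# that performance scales linearly with shader count (a big simplification).

rtx_2080_vs_gtx_1080 = 1.40        # estimate quoted above (GTX 1080 = 100%)
rumored_shader_fraction = 0.5      # "half of the 2080", per the rumor

print(f"RTX 2060 estimate: ~{rtx_2080_vs_gtx_1080 * rumored_shader_fraction:.0%} of a GTX 1080")
# -> ~70%, which is roughly GTX 1070 territory at 4K.
```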
 

amenx

Diamond Member
Dec 17, 2004
OP, pretty sure the title is incorrect: "Reviews and prices Monday August 20."

Monday will be the announcement day, when we will learn the actual release dates. Reviews will only come on the release dates.
 

RichUK

Lifer
Feb 14, 2005
Rumours are that nVidia will make their announcement on Monday and AIB cards will be shown on Wednesday.
 

gdansk

Platinum Member
Feb 8, 2011
Rumours are that nVidia will make their announcement on Monday and AIB cards will be shown on Wednesday.
That'll be a predicament if there's a Fermi Edition announced again. Do I pre-order that immediately, or wait for the AIB editions? I'm guessing these will all be out of stock for quite some time, so I may as well wait.
 

maddie

Diamond Member
Jul 18, 2010
With the increased power consumption, demand from the mining market should be subdued even with the increased performance. As a rough rule, you can take about half of the performance increase as the perf/watt increase: for example, 135% relative performance works out to roughly 117% perf/watt. Not enough to inflate prices.

Rather good for gamers.
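
A quick sanity check on that rule of thumb, as a sketch; the power figures are assumptions (roughly 250W rising to a rumored ~285W), not confirmed TDPs:

```python
# Perf/watt = relative performance / relative power. Power figures below are
# assumptions (250 W previous gen vs a rumored ~285 W), not confirmed TDPs.

def perf_per_watt_gain(perf_ratio, power_ratio):
    return perf_ratio / power_ratio

perf = 1.35            # the 135% example above
power = 285 / 250      # ~14% more board power (assumed)

print(f"Relative perf/watt: {perf_per_watt_gain(perf, power):.0%}")
# -> ~118%, in line with the ~117% rule-of-thumb figure above.
```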
 

24601

Golden Member
Jun 10, 2007
With the increased power consumption, demand from the mining market should be subdued even with the increased performance. As a rough rule, you can take about half of the performance increase as the perf/watt increase: for example, 135% relative performance works out to roughly 117% perf/watt. Not enough to inflate prices.

Rather good for gamers.
You should be calculating the mining benefit from the improvements of 14 Gbps GDDR6 over 10 Gbps GDDR5X, i.e. memory bandwidth and memory latency.

All the algorithms that aren't memory bandwidth/latency bottlenecked are FPGA/ASIC algorithms anyway.
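
For reference, the raw numbers behind that point, as a sketch; the 2080/2080 Ti bus widths and data rates are the rumored figures, not confirmed:

```python
# Peak memory bandwidth (GB/s) = bus width (bits) / 8 * data rate (Gbps per pin).
# The RTX 2080 / 2080 Ti entries use the rumored bus widths and data rates.

def bandwidth_gb_s(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

cards = {
    "GTX 1080    (256-bit, 10 Gbps GDDR5X)": (256, 10),
    "GTX 1080 Ti (352-bit, 11 Gbps GDDR5X)": (352, 11),
    "RTX 2080    (256-bit, 14 Gbps GDDR6)":  (256, 14),
    "RTX 2080 Ti (352-bit, 14 Gbps GDDR6)":  (352, 14),
}

for name, (bus, rate) in cards.items():
    print(f"{name}: {bandwidth_gb_s(bus, rate):.0f} GB/s")
# -> 320, 484, 448 and 616 GB/s: roughly 27-40% more raw bandwidth,
#    before counting GDDR6's latency improvements over GDDR5X.
```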
 

gdansk

Platinum Member
Feb 8, 2011
With the increased power consumption, demand from the mining market should be subdued even with the increased performance. As a rough rule, you can take about half of the performance increase as the perf/watt increase: for example, 135% relative performance works out to roughly 117% perf/watt. Not enough to inflate prices.

Rather good for gamers.
Some have said 30W of the TDP increase is for VirtualLink. If you're not using that then perf/watt would be even higher.
 

sao123

Lifer
May 27, 2002
Anyone know the reason behind the simultaneous launch of the xx80 and xx80 Ti?
Generally speaking, the xx80 and xx70 launch first and the Ti models come a year later...

I can't believe they are releasing the big-die consumer TU102 chip at initial launch... this goes against 20 years of past products... is this nothing more than a marketing gimmick?

Is this simply a shift in naming, with something else launching in the usual Ti spot (another TU102 variant, or even a TU100 or TU200) with more CUDA cores etc., closer to the Quadro RTX 6000 minus the double-precision compute ability... which could then be named the 2090/2090 Ti?

Or is this architecture going to be so short-lived that there will really only be one generation of products based on it?

this. does. not. compute.
 

gdansk

Platinum Member
Feb 8, 2011
Anyone know the reason behind the simultaneous launch of the xx80 and xx80 Ti?
Generally speaking, the xx80 and xx70 launch first and the Ti models come a year later...

I can't believe they are releasing the big-die consumer TU102 chip at initial launch... this goes against 20 years of past products... is this nothing more than a marketing gimmick?

Is this simply a shift in naming, with something else launching in the usual Ti spot (another TU102 variant, or even a TU100 or TU200) with more CUDA cores etc., closer to the Quadro RTX 6000 minus the double-precision compute ability... which could then be named the 2090/2090 Ti?

Or is this architecture going to be so short-lived that there will really only be one generation of products based on it?

this. does. not. compute.
Possibly because the RTX 2080 isn't impressive enough? Or maybe they think 7nm chips will launch in a year or so, not giving them enough time to stagger the Ti.
 

24601

Golden Member
Jun 10, 2007
Anyone know the reason behind the simultaneous launch of the xx80 and xx80 Ti?
Generally speaking, the xx80 and xx70 launch first and the Ti models come a year later...

I can't believe they are releasing the big-die consumer TU102 chip at initial launch... this goes against 20 years of past products... is this nothing more than a marketing gimmick?

Is this simply a shift in naming, with something else launching in the usual Ti spot (another TU102 variant, or even a TU100 or TU200) with more CUDA cores etc., closer to the Quadro RTX 6000 minus the double-precision compute ability... which could then be named the 2090/2090 Ti?

Or is this architecture going to be so short-lived that there will really only be one generation of products based on it?

this. does. not. compute.
It's because the launch of Turing suffered the double whammy of being late due to slow GDDR6 ramp as well as the last crypto peak.
 

ozzy702

Golden Member
Nov 1, 2011
Possibly because the RTX 2080 isn't impressive enough? Or maybe they think 7nm chips will launch in a year or so, not giving them enough time to stagger the Ti.

This is likely the case. NVIDIA can't afford to wait to release the 2080 Ti, because by then 7nm will be ready to roll. Releasing the 2080 Ti now gives them $$$, and with the big jump coming at 7nm, many 2080 Ti owners will jump to the new cards. Pretty sad that NVIDIA is competing with themselves at this point, with no competition from AMD likely until 2020+.
 

sze5003

Lifer
Aug 18, 2012
Anyone know the reason behind the simultaneous launch of the xx80 and xx80 Ti?
Generally speaking, the xx80 and xx70 launch first and the Ti models come a year later...

I can't believe they are releasing the big-die consumer TU102 chip at initial launch... this goes against 20 years of past products... is this nothing more than a marketing gimmick?

Is this simply a shift in naming, with something else launching in the usual Ti spot (another TU102 variant, or even a TU100 or TU200) with more CUDA cores etc., closer to the Quadro RTX 6000 minus the double-precision compute ability... which could then be named the 2090/2090 Ti?

Or is this architecture going to be so short-lived that there will really only be one generation of products based on it?

this. does. not. compute.
We will see tomorrow about the performance. But I think it's due to the crypto boom that inflated prices; they knew it wouldn't have been a good time to launch. It's also possible that this generation will be short, seeing how the speculated performance across the models is only about 25% better than the previous generation.

Your theory that they will later launch something better, at what we're used to as actual Ti performance, could be true too.
 

Timmah!

Golden Member
Jul 24, 2010
Anyone know the reason behind the simultaneous launch of the xx80 and xx80 Ti?
Generally speaking, the xx80 and xx70 launch first and the Ti models come a year later...

I can't believe they are releasing the big-die consumer TU102 chip at initial launch... this goes against 20 years of past products... is this nothing more than a marketing gimmick?

Is this simply a shift in naming, with something else launching in the usual Ti spot (another TU102 variant, or even a TU100 or TU200) with more CUDA cores etc., closer to the Quadro RTX 6000 minus the double-precision compute ability... which could then be named the 2090/2090 Ti?

Or is this architecture going to be so short-lived that there will really only be one generation of products based on it?

this. does. not. compute.

I have a hard time believing it's happening either... and only will when I finally see it tomorrow... but 20 years? That's some historical revisionism... the big die was the first to be released only up until the GTX 500 series, which was released in 2011, not 20 years ago.
 

neblogai

Member
Oct 29, 2017
Some have said 30W of the TDP increase is for VirtualLink. If you're not using that then perf/watt would be even higher.

But would that count toward TDP? Those 30W would not be creating any work or heat on the card itself, only passing through. So the cards should have a TDP equal to their own heat output, and just have bigger power inputs, designed to pass through an extra 30W for VirtualLink.
 

maddie

Diamond Member
Jul 18, 2010
Some have said 30W of the TDP increase is for VirtualLink. If you're not using that then perf/watt would be even higher.
Can't agree with that. Probably narrow-minded posters who can't imagine Nvidia using more power than the previous generation; a taboo in their minds.

If the link uses ~30W, then that will be on top of the TDP. TDP is for the card itself.
 

maddie

Diamond Member
Jul 18, 2010
You should be calculating the mining benefit from the improvements of 14 Gbps GDDR6 over 10 Gbps GDDR5X, i.e. memory bandwidth and memory latency.

All the algorithms that aren't memory bandwidth/latency bottlenecked are FPGA/ASIC algorithms anyway.
Very true.

Didn't think of that; however, it seems the ratios are roughly the same for the memory bandwidth improvement, so for a rough estimate it's still relevant.
 

24601

Golden Member
Jun 10, 2007
Very true.

Didn't think of that; however, it seems the ratios are roughly the same for the memory bandwidth improvement, so for a rough estimate it's still relevant.

Not remotely accurate, as the entire reason the 1080 and 1080 Ti did so badly at mining was the unique latency problems of GDDR5X.

And we don't have performance figures for rasterized workloads (i.e. games) yet, so there's no point in speculating about performance at all.

Many things important to IPC changed between GP104/GP102 and TU1xx.
 