Question: When do we get a 1080 Ti replacement? Summer this year (2019)?


jpiniero

Diamond Member
Oct 1, 2010
#26
> AMD just die shrunk Vega 64 all the way from 14nm to 7nm and doubled its memory bandwidth, or something. A while back, doing that would have *destroyed* the old cards; now it's just meh.

They didn't really raise the transistor count much, though, and I imagine most of the additional transistors were used for 1/2-rate DP support.
 

WhiteNoise

Senior member
Jun 22, 2016
#27
The 1080 Ti handles pretty much anything very well, even at 4K, so I'm fine with waiting. I'll be honest: I have the extra money to buy a new video card, but I hate spending this much every year or two, so for once I don't mind the wait. When I can't play the newest releases well, that's when I'll be looking for a 1080 Ti replacement. I just hope I'll be spending under $1k to get it.
 

n0x1ous

Platinum Member
Sep 9, 2010
#28
No good news forthcoming, amigo. Pony up or stay put for the foreseeable future.
 

maddie

Platinum Member
Jul 18, 2010
#30
> I really don’t care what people want to say about companies, but let’s be real about how much harder the silicon engineers’ job is getting, can we?
>
> They’ve been fighting the laws of physics for us for a long time now, and it is very obviously getting much harder.
>
> Intel had one very delayed process and is now in the middle of a *hugely* delayed one.
>
> AMD just die shrunk Vega 64 all the way from 14nm to 7nm and doubled its memory bandwidth, or something. A while back, doing that would have *destroyed* the old cards; now it's just meh.

I would say that we're always fighting the laws of physics. We (humanity) are always at the cutting edge. Do you think supersonic flight, or the first steam engines, or even making bronze was easy in its day? Now individuals can replicate these things. Whatever breakthroughs are coming will not be seen until they happen. It's always this way, unless you believe that we're at the end of discovery.
 

njdevilsfan87

Golden Member
Apr 19, 2007
#31
> Things are slowing down as they approach the limits.

I tend to agree with this. You have to wonder why we are starting to see things like Tensor and RT cores, as opposed to just building FP32/FP64 monsters. Meanwhile, Intel can't seem to progress further in IPC, and AMD has finally caught up; but instead of surpassing Intel in that regard, they're just piling on the cores.

I think Nvidia knows that for general-purpose compute, where FP32/FP64 cores are used, we are getting close to the limits. Die shrinks alone probably offer another 70-100% performance from where we are today. That's really not that significant, but it at least surpasses 4K 90Hz VR.

Lots of apps that benefit from GPU parallelism benefit even more from reduced precision or task-specific cores, hence the Tensor and RT cores. Optimizing for those will probably offer another 100-1000% in tasks that can actually utilize those cores as the process shrinks. Let's hope DLSS or other machine-learning super-resolution methods work well, because if they do, we'll actually gain more than if Nvidia had just piled on as many FP32 cores as possible.
 
Jul 12, 2006
#32
> First of all, all 1080 Tis come with at least two fans, unless you're getting the Founders Edition.
>
> Second of all, by that time the GTX 10-series would have been two generations old, making it cheaper to buy on the market.
>
> For example, look at the GTX 780, an enthusiast card from two generations ago. It can still provide high-end gaming but might not get the highest framerate. The 780 is still very powerful, but newer cards are usually even more capable while drawing less power.
>
> Also, some aftermarket 1080 Ti coolers sport 3 fans.

Is this another AI-created post? What is with the forums these days?

lol, none of these detached comments refer to the OP in any way. :D
 

Ajay

Diamond Member
Jan 8, 2001
#33
> The limits are way farther away than you think. This pricing is Nvidia trying to be like Apple, the company that Jensen Huang admires.
>
> Quite interesting that both are seeing their continuously-increasing-margin business model collapse at the same time.

I wouldn’t say collapse, but both companies' top lines took a significant hit. They are both dealing with saturated consumer markets and have pushed pricing to consumers' pain limit.
 

Head1985

Golden Member
Jul 8, 2014
#34
> AMD just die shrunk Vega 64 all the way from 14nm to 7nm and doubled its memory bandwidth, or something. A while back, doing that would have *destroyed* the old cards; now it's just meh.

Vega 7 is more like the 2900 XT vs. the 3870.
 

Ajay

Diamond Member
Jan 8, 2001
#35
> They didn't really raise the transistor count much, though, and I imagine most of the additional transistors were used for 1/2-rate DP support.

Yeah, to some extent. I think Vega 20 was really a pipe cleaner for AMD on TSMC's 7FF process; they didn't want to take the risk of a truly large-die ASIC.
 

Qwertilot

Golden Member
Nov 28, 2013
#36
Yes, they didn't change too much in Vega 20 (2 billion extra transistors), but they didn't gain a whole lot of power reduction, extra clock speed, or anything else either. Or real cost reductions.


> I would say that we're always fighting the laws of physics. We (humanity) are always at the cutting edge. Do you think supersonic flight, or the first steam engines, or even making bronze was easy in its day? Now individuals can replicate these things. Whatever breakthroughs are coming will not be seen until they happen. It's always this way, unless you believe that we're at the end of discovery.

Of course we're not done with discovery :) The thing is that discovery normally happens at unpredictable moments and takes a long time to bring to commercially viable production. What they've been doing with silicon chips for the past few decades really has been utterly mind-boggling.

The regular massive gains every year or so are getting close to running out, though. There will of course be much faster computers a few decades from now, one way or another.
 

Ajay

Diamond Member
Jan 8, 2001
#37
> Yes, they didn't change too much in Vega 20 (2 billion extra transistors), but they didn't gain a whole lot of power reduction, extra clock speed, or anything else either. Or real cost reductions.

Well, price-wise, Vega 20 is probably expensive to make: a new, more expensive process, lower yields (early in the manufacturing maturity cycle), and lots of HBM.
 

Innokentij

Senior member
Jan 14, 2014
#38
> Well, what's your take? For the 2 or 3 of you who might suggest that the 2080 Ti is worth considering, I'll remind you that it is not. With that out of the way: Nvidia has to replace the 1080 Ti at some point. It's been nearly 2 years with no sign of a replacement. They know they need to offer something at a similar price with much better performance, or people just won't ditch their 1080 Tis. So when do you expect that to happen? GTX 1180 Ti this summer, maybe?

It’s called the 2080 Ti whether you like it or not, so it’s already out. Time to face reality.
 
Feb 2, 2009
#39
Machine learning and ray tracing are the new areas that all three GPU manufacturers will pursue from now on.
Both are supported in DX12, and all three (AMD, Intel and NV) will invest a substantial number of transistors in those functions in their upcoming GPUs.

The TU102 (RTX 2080 Ti) die is 754mm² at 12nm; even ported to 7nm, it would be more than 400mm².
So next year an RTX 3080 (close to RTX 2080 Ti performance +10%) will not be priced below what the RTX 2080 is priced at today ($699 MSRP).

So if you want more than 30-40% higher performance over your GTX 1080 Ti at the same $699 MSRP you paid back in 2017, you may have to wait even longer than the summer of 2020.
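
A rough sanity check on that die-size claim. This is a back-of-envelope sketch only: the ~1.8x logic density gain from 12nm to 7nm and the 80/20 logic-vs-I/O split are assumed illustrative numbers, not published figures, and real I/O and analog blocks shrink far less than logic.

```python
# Back-of-envelope die-area scaling for TU102 (754 mm^2 at 12nm).
# Assumptions (illustrative): logic/SRAM gains ~1.8x density at 7nm,
# while I/O, analog and memory PHYs barely shrink at all.

def scaled_area(area_mm2, logic_fraction=0.8, density_gain=1.8):
    """Estimate the 7nm area: logic scales by density_gain, the rest doesn't."""
    logic = area_mm2 * logic_fraction / density_gain  # shrinkable portion
    fixed = area_mm2 * (1 - logic_fraction)           # non-shrinking portion
    return logic + fixed

print(round(scaled_area(754)))  # -> 486 (mm^2), well above 400 mm^2
```

Under these assumptions a straight port lands near 486mm², consistent with the "more than 400mm²" estimate above.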
 
Feb 2, 2009
#40
> Yes, they didn't change too much in Vega 20 (2 billion extra transistors), but they didn't gain a whole lot of power reduction, extra clock speed, or anything else either. Or real cost reductions.

Actually, Vega 20 only has 730M more transistors than Vega 10, and at the same clocks Vega 20 draws ~50% less power than Vega 10.
 
Feb 2, 2009
#41
> Nvidia drew their line in the sand with RTX. They couldn't afford to abandon it or cross-compete with themselves without destroying their stock price even further. What I do believe is possible, if the RTX implementation continues to be uninspiring and not worth the performance hit, is a 'soft' reboot on the next series of RTX cards: branding remains the same, but tensor cores potentially reduced, in combination with higher clocks to maintain 1:1 in that area, while focusing all the advantages of the new process and die area on improved shader performance. This would give a substantial increase in practical performance that would inspire upgrades. I don't believe even a 300W 7nm RTX2 flagship could really do justice to ray tracing in any respectable manner. Not even if you had an entire extra 300W card full of tensor cores.
>
> Too ambitious by far, I believe. But a soft reboot that remains niche in use, without further murdering standard perf/$ metrics, could help them save face. It's only pure luck for them that AMD has nothing out the door that really exposes them, or it would be far worse for Nvidia. As I've noted before, their own 10xx lineup owners have nothing to buy unless they spend a dump truck full of gold bars on a 2080 Ti.

If I'm not terribly wrong, tensor cores don't do anything for ray tracing. If a 7nm RTX card can have more RT cores, then yes, it will be faster at ray tracing than 12nm Turing.
 

Qwertilot

Golden Member
Nov 28, 2013
#42
The tensor cores are entirely intrinsic to NV's planned ray-tracing pipeline; basically, they let them 'repair' a scene rendered with relatively few rays much better than anything else. Only one demo has used them that way so far, though.

> Actually, Vega 20 only has 730M more transistors than Vega 10, and at the same clocks Vega 20 draws ~50% less power than Vega 10.

That first bit seems right, but the second is surely very unlikely? As shipped, the power draw of Vega 10 and Vega 20 is identical, and the clocks on Vega 20 are about 15% higher. That really doesn't suggest a halving of power draw at iso-clocks.

I know it was on those slides, but those are, I'm pretty sure, just copying over the theoretical gains possible from the die shrink. It'll take more effort than they put into Vega 20 to get even those :(
 
Oct 27, 2006
#43
> If I'm not terribly wrong, tensor cores don't do anything for ray tracing. If a 7nm RTX card can have more RT cores, then yes, it will be faster at ray tracing than 12nm Turing.

Right, though it looks like even with 4x the RT performance of the 2080 Ti, you would still have nowhere near enough to run 4K/60 with more than a token effects presence à la BF V (which already takes a whopping penalty even at 1440p). It's enough to rule out even the viability of Ultra settings at G-Sync/FreeSync 1440p/100Hz+.

On the flip side, about 50% of an RTX 2080 Ti is Tensor+RT. If the die were the same size but just doubled the traditional arch, it would have been a massive leap upwards.

I'm wondering whether RT will just crap out early, before coming back again later on, like some of the early VR hype did back in the day. It just doesn't seem viable with process tech slowing down so much and prices being too painful for the mass market in the $100-$250 range; on that kind of timeline, it might take what, 8-10 years before a $300 card could do even 4K/30 ray tracing in a useful manner?

The final nail in the coffin, I believe, will be that the PS5 and Xbox Scarlett on RTG Navi (almost certainly nearing final spec-out over the next 6-10 months, so that manufacturing and supply contracts can be drawn up and committed to) have virtually zero chance of having anything more than the most token RT support, and more likely none. Game development is incredibly expensive, and almost all the flagship-level AAA stuff has to be developed to fit and optimize onto consoles. The PC market is in much better shape than it was during the 7th gen, but it's still not the driving force behind titles like Assassin's Creed, COD, Madden, Fortnite, Far Cry, etc. So devs dedicating much time to optimizing for something only a tiny fraction of even upper-range PC gamers can touch (and will likely turn off because performance isn't good enough)... it's a bridge too far, I think.

I think Nvidia was very bold here, but this is looking like a failure.
 

Qwertilot

Golden Member
Nov 28, 2013
#44
They need some much more convincing software than we've seen thus far, that's for sure.

I'm still not sure about a fully 'doubled up' conventional card. Wouldn't that double the power draw during conventional games too? That'd be a bit mad :)

It's possible this is, in part, a reaction to the end of the 'free' power reductions from process shrinks.
 
Oct 27, 2006
#45
I can't say for sure. I haven't seen anything showing RTX cards using notably more or less power with RTX features on or off, so it seems the entire package is responsible for the power and thermals to one degree or another.

History strongly shows that frequency increases eventually demand exponentially more power: e.g., some card may use only 105W @ 1.6GHz, but the same chip pushed to 2GHz takes 240W. So even if a traditional non-RTX die with the same total mm² and roughly the transistor count of the 2080 Ti would in fact use more power, you could likely bring it to a sweet spot by managing clocks and power carefully.
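
A minimal sketch of why that happens, using the classic dynamic-power relation P ∝ C·V²·f with voltage assumed to rise roughly linearly with frequency past the efficiency sweet spot. All constants here (baseline voltage, volts-per-GHz slope) are illustrative assumptions tuned to roughly reproduce the 105W/240W example above, not measurements of any real card.

```python
# Illustrative dynamic-power model: P = k * V^2 * f.
# Voltage is assumed to climb linearly with frequency once the chip
# is pushed past its sweet spot, which makes power grow superlinearly.

def dynamic_power(freq_ghz, base_freq=1.6, base_volt=0.90,
                  volt_per_ghz=0.80, base_power=105.0):
    """Estimated power (W) relative to a 105 W baseline at 1.6 GHz."""
    volt = base_volt + volt_per_ghz * (freq_ghz - base_freq)
    # Same chip, so capacitance is constant: scale by (V/V0)^2 * (f/f0).
    return base_power * (volt / base_volt) ** 2 * (freq_ghz / base_freq)

print(round(dynamic_power(1.6)))  # -> 105 (baseline)
print(round(dynamic_power(2.0)))  # -> 241, i.e. ~2.3x power for +25% clocks
```

The point of the sketch: a 25% clock bump more than doubles power once voltage has to scale with it, which is why backing a wide die down to its sweet spot can beat a narrow die clocked to the wall.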

Of course, it's all hypothetical, to be sure. I'm not an electrical engineer, to say the least, let alone one with experience in high-end semiconductor manufacturing. This is all educated guesswork :) About all I can guarantee is that having RTX take up so much die space IS a compromise to one degree or another; how much is the question, and whether or not it will pay off.

I realize my perspective is quite dismal on this so far. I regret that level of cynicism: when I first heard of this dramatic ray-tracing idea, I was pretty excited. Yet seeing how things are going, the costs in technical and financial terms have deeply disappointed me. I feel like it's going to go down as an incredibly bold failure, sort of like Nvidia's Operation Market Garden. The only caveat is that AMD RTG is also pretty stumbly of late, and without even the interesting, crazy RTX features.
 

Veradun

Senior member
Jul 29, 2016
#46
I think Nvidia put themselves in a corner with RTX and 750mm² chips. The economics of 7nm (note that Nvidia will reportedly use Samsung's process, which is not yet ready for HVM) are really bad for chips like this.

So they would need large chips (around 450-500mm²) to compete with their own previous (now current) lineup, and that doesn't bode well for pricing.
 

NTMBK

Diamond Member
Nov 14, 2011
#47
You're probably stuck waiting until 7nm EUV is ready... so 2020, I'd guess.
 

moonbogg

Diamond Member
Jan 8, 2011
#48
> You're probably stuck waiting until 7nm EUV is ready... so 2020, I'd guess.

I can't imagine this being the case. It's insane if true. So many 10-series people have nothing to upgrade to; it's going to be a long time if nothing happens until 2020. I'm a die-hard GPU enthusiast, and I completely disregard the entire 20-series lineup as if it were never even released. I know I'm not significant alone, but I don't think I'm quite alone here. Nvidia needs to sell something or they are in for quite a dry spell, especially if they come out with more hilarious prices next year.
LOL, I can't wait to find out just how hard I'm capable of laughing if they release the next Ti card for something like $800-$1,000. I'll just hang it up and move on.
 
Qwertilot

Golden Member
Nov 28, 2013
#49
If things really aren't selling, then next year maybe they'll shovel everything down a tier cost-wise or something: the xxTi down from Titan pricing to normal xxTi pricing, etc.

The next xxTi card will likely launch at those sorts of prices, though: with the Titan off at pro/compute prices, the Ti cards are being priced in the old Titan bracket.
 

moonbogg

Diamond Member
Jan 8, 2011
#50
> If things really aren't selling, then next year maybe they'll shovel everything down a tier cost-wise or something: the xxTi down from Titan pricing to normal xxTi pricing, etc.
>
> The next xxTi card will likely launch at those sorts of prices, though: with the Titan off at pro/compute prices, the Ti cards are being priced in the old Titan bracket.

Well, then they'll price themselves into obsolescence and go out of business, just like 3dfx. AMD APUs and Intel's new innovations will meet market demand easily. Good riddance, I say.
 

