
Nvidia RTX 2080 Ti, 2080, 2070 information thread. Reviews and prices September 14.

Status
Not open for further replies.
Fermi? Yes [March 2010]
Tesla? Yes [June 2008]

Fail at reading, fail at comprehension.
You fell for the two biggest Nvidia marketing gimmicks ever.

Fermi...
March 26, 2010, GF100, GTX480 ... 480:60:48
November 9, 2010, GF110, GTX580 ... 512:64:48

Despite the gimmicky name change from 480 to 580... this was the same architecture... and clearly the biggest GPU in that line was not first.

Tesla...
November 8, 2006, G80, 8800 GTX ... 128:32:24
April 1, 2008, G92, 9800 GTX ... 128:64:16
June 17, 2008, GT200-300-A2, 240:80:32

Oh look... bigger chips launched later.


For any architecture with the tick-tock gimmick, it could never be said that the biggest die launched first...
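The launch data in this post is enough to check the claim mechanically. A quick sketch (using only the dates and the shaders:TMUs:ROPs triples quoted above):

```python
# Launch data from the post above: (chip, launch date, shader count),
# where the shader count is the first number of each X:Y:Z triple.
fermi = [("GF100", "2010-03-26", 480), ("GF110", "2010-11-09", 512)]
tesla = [("G80", "2006-11-08", 128), ("G92", "2008-04-01", 128),
         ("GT200", "2008-06-17", 240)]

def biggest_first(chips):
    """True if the chip with the most shaders was also the first to launch."""
    first_launch = min(chips, key=lambda c: c[1])   # ISO dates sort lexically
    most_shaders = max(chips, key=lambda c: c[2])
    return first_launch[0] == most_shaders[0]

print(biggest_first(fermi))  # False: GF110 (512) launched after GF100 (480)
print(biggest_first(tesla))  # False: GT200 (240) launched after G80 (128)
```

By shader count alone, neither family led with its biggest configuration; whether that settles the "big die first" question is exactly what the replies below dispute.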
 
Fail at reading, fail at comprehension,
You fell for the 2 biggest nvidia marketing gimmicks ever. [...]
580 was just a respin of 480. 480 was the big chip, just not fully enabled.

GT200 is not part of the original G80 series. It was its own new series, in which the big chip launched first. Nvidia always launched the flagship big die first, until Kepler.
 
Fail at reading, fail at comprehension,
You fell for the 2 biggest nvidia marketing gimmicks ever. [...]

GF100 had all those shaders on the chip; they just weren't all enabled, for yield reasons. GF100 is 529 mm² and GF110 is 520 mm², so a smaller chip with tweaked transistors came later. Basing your argument on "number of shaders/TMUs/ROPs" is a very flawed and unreliable metric to use.

G80 (484 mm²) - G92 (324 mm²) included a die shrink and a tweak to the architecture to produce a smaller die that clocked higher.

Oh look... the bigger CHIP came first.

580 was just a respin of 480. 480 was the big chip, just not fully enabled.

GT200 is not part of the original G80 series. It was its own new series, in which the big chip launched first. Nvidia always launched the flagship big die first, until Kepler.

GT200 is still part of the Tesla family that G80 started.

Enough OT though.
 
Fail at reading, fail at comprehension,
You fell for the 2 biggest nvidia marketing gimmicks ever.

Fermi...
March 26, 2010, GF100, GTX480 ... 480:60:48
November 9, 2010, GF110, GTX580 ... 512:64:48

Despite the gimmicky name change from 480 to 580... this was the same architecture... and clearly the biggest GPU in that line was not first.

GTX 480 - 529 mm²
GTX 580 - 520 mm²

You said "Big die first has never happened since Tesla 1.0 generation", big die would obviously refer to die size for any reasonable person.

Tesla...
November 8, 2006, G80, 8800 GTX ... 128:32:24
April 1, 2008, G92, 9800 GTX ... 128:64:16
June 17, 2008, GT200-300-A2, 240:80:32

Oh look... bigger chips launched later.

For any architecture with the tick-tock gimmick, it could never be said that the biggest die launched first...

G80 and GT200 weren't made on the same node. G80 was on 90nm and GT200 was on 65nm. Arguing that G80 somehow wasn't a "big die" GPU, because Nvidia managed to squeeze more cores onto another die made on a smaller node, is just plain silly, and frankly ignorant.
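One way to make the node point concrete is shader density rather than raw counts. A rough sketch, where G80's 484 mm² comes from this thread but GT200's ~576 mm² die size is not stated above and is an assumption based on commonly reported figures:

```python
# Shaders per mm^2 across nodes. G80's 484 mm^2 is from the post above;
# GT200's 576 mm^2 is an assumed, commonly reported figure (not in-thread).
chips = {
    "G80 (90nm)":   (128, 484),
    "GT200 (65nm)": (240, 576),
}
for name, (shaders, area) in chips.items():
    print(f"{name}: {shaders / area:.3f} shaders/mm^2")
# GT200 packs roughly 1.6x the shaders per mm^2 -- about what a
# 90nm -> 65nm shrink buys you. Both dies are "big die" class.
```

On these numbers, G80's lower shader count reflects its older node, not a smaller-die strategy.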
 
We witness here the effect of the Titan having moved into its own segment. The void in that price segment is now filled by the Ti.
This seems to be the case. The real question is: will it deliver Titan-like performance at that price, compared with the previous-generation Ti?

I think they have too much stock of the old generation and they want to push their ray-tracing stuff, so now 1080 Ti prices will drop and they can get rid of that stock.
 
If real-time ray tracing gains a lot of traction in the future, will AMD be able to create their own specialized silicon to handle that? Will Nvidia's patents be a significant hurdle for AMD to design around? I have no idea how the patent laws work lol.
 
GTX 480 - 529 mm²
GTX 580 - 520 mm²

You said "Big die first has never happened since Tesla 1.0 generation", big die would obviously refer to die size for any reasonable person.



G80 and GT200 weren't made on the same node. G80 was on 90nm and GT200 was on 65nm. Arguing that G80 somehow wasn't a "big die" GPU, because Nvidia managed to squeeze more cores onto another die made on a smaller node, is just plain silly, and frankly ignorant.

It's neither silly nor ignorant. It PROVES that a Ti release today is a marketing gimmick. A year from now, when a new Turing chip is released on 7nm with more cores enabled, everyone is going to complain: why did I buy the gimmick Ti and not wait for this more fully enabled version...

Like it or not, there are people who buy only once per architecture, and will only buy the biggest, most mature, most fully enabled chip in that architecture.
This is nothing more than a change in naming to try to lure those people into buying now, and they will regret it.
 
Honestly, if this 2080 Ti is indeed 700+ mm², then I won't even complain about its $1,200 price. While I will agree that mask and R&D costs have increased per node (much of that absorbed by TSMC), the general trend is that consumers have been getting hit with higher prices per mm².

Want proof? Look at Nvidia's gross corporate margins over the last five years. They are simply making more money for every $1 sold, and that is almost all from charging more for smaller dies.
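To put a number on the per-mm² trend: a back-of-envelope comparison, where the 2080 Ti figures ($1,200, ~750 mm²) are the rumors discussed in this thread, and the 1080 Ti figures ($699 launch price, 471 mm² GP102 die) are assumptions drawn from Pascal's 2017 launch:

```python
# Rough launch-price-per-mm^2 comparison. 2080 Ti numbers are the rumors
# from this thread; 1080 Ti numbers are assumed from Pascal's launch
# ($699 MSRP, 471 mm^2 GP102).
cards = {
    "GTX 1080 Ti": (699, 471),
    "RTX 2080 Ti": (1200, 750),
}
for name, (price_usd, area_mm2) in cards.items():
    print(f"{name}: ${price_usd / area_mm2:.2f}/mm^2")
```

On these numbers the per-mm² price rises only modestly (~$1.48 to ~$1.60), which is the poster's point: most of the sticker shock is die size.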
 
If you have the top card for 1 year, you have nothing to complain about.

The only thing the Ti needs to prove to me is that it's worth its price tag. It needs to be quite a lot faster than what we usually get.

Hopefully, but I sincerely doubt that :/
At the same time, I have a hard time believing that they will cost this much.
 
Why not charge record prices for the most marginal generational increase in 7 years?

No competition.

They have to get existing customers to upgrade, true. But many will pay the high prices for RTX 2000 series. For those that don't, we could have 7nm RTX 3000 series in H2 2019 offering more attractive performance-per-dollar that will get the more frugal customers to finally upgrade (and will get the price inelastic RTX 2000 customers to upgrade yet again).
 
This open-air reference design is quite surprising. Surely they have to make a blower for OEMs and their no-airflow cases?
 
This open-air reference design is quite surprising. Surely they have to make a blower for OEMs and their no-airflow cases?

There will be blowers available for cases that need it.

[Image: ASUS GeForce RTX 2080 TURBO, a blower-style card]


I guess they want better press for the FE Edition so open air it is.
 
$1000 and $1200 for AIB versions of 2080 Ti already leaked? Good grief.

I have been predicting all along that this release would come at very high prices, with poor price/performance. People paying double for GPUs during the mining craze showed everyone in the supply chain that gamers will pay through the nose for more performance.

That was even before the RT hardware and huge die sizes were revealed. But once those were revealed, it was a lock.

We are essentially getting the 2080 for Ti prices, and the 2080Ti for Titan prices.

Basically every tier is moving up to the next tier of performance, but price is moving up a tier with it, so effectively price/performance is not budging.

If you want performance like the 2080 and don't care about ray tracing, you are probably much better off grabbing a deal on a 1080 Ti.
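The tier-shift argument above can be illustrated with made-up performance indices (hypothetical placeholders, not benchmarks; only the prices echo this thread):

```python
# Hypothetical illustration of the tier shift: each Turing card lands at
# the previous tier's performance AND the previous tier's price, so
# perf-per-dollar stays flat. Perf indices here are invented placeholders.
pascal = {"GTX 1080 Ti": (100, 699), "Titan Xp": (120, 1200)}
turing = {"RTX 2080": (100, 699), "RTX 2080 Ti": (120, 1200)}
for gen_name, gen in (("Pascal", pascal), ("Turing", turing)):
    for card, (perf, price) in gen.items():
        print(f"{gen_name} {card}: {1000 * perf / price:.1f} perf/$1000")
```

If the real benchmarks land anywhere near this pattern, each dollar buys the same performance as last generation, just with a bigger number on the box.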
 
Meh, I'm already over it. Keeping the card I got and buying a new amp instead. I'm not a big gamer anyway... $1,200 for a graphics card is too rich for my blood.
Too many other things to spend money on.
 
I was hoping to upgrade from my GTX 1080, since it doesn't quite push 1440p @ 144 Hz as well as I'd like, but at these prices it's increasingly looking like I'll just wait until 7nm. Mining is dead, so I don't see these GPUs selling well at these inflated prices, especially since most people are still gaming on 1080p monitors, where even at 144 Hz a GTX 1070 will do just fine.
 
I'd be more interested in getting G-Sync monitors at a good price before I shelled out $1,200 for a damn video card...
 