Nvidia RTX 2080 Ti, 2080, 2070 information thread. Reviews and prices September 14.


sze5003

Lifer
Aug 18, 2012
It's actually very simple. The people spending the money on the new Ti want the best card they can get. At this point in time there's no doubt the new Ti will be the fastest card you can buy for $1200. This card isn't for people who would never spend that on a video card, and it isn't for a person who counts $/FPS. What's so hard to understand?

Even if it is the best card you can get (sure, because there is no competition), those people probably already have a more than capable GPU, like me and many others who like to buy the newest techy things. I always buy new stuff when it comes out, but not if it doesn't offer me any benefits.

I'm probably going to buy one, but I want to make sure it's the best and fastest via actual results rather than what I'm told by Nvidia and others, or hearsay. It's common sense, but I guess not everybody has that.

I guess the simple answer is that people who preordered immediately don't care and are ignorant. They don't know it's the fastest and greatest any more than we do.
 

Innokentij

Senior member
Jan 14, 2014
Even if it is the best card you can get (sure, because there is no competition), those people probably already have a more than capable GPU, like me and many others who like to buy the newest techy things. I always buy new stuff when it comes out, but not if it doesn't offer me any benefits.

I'm probably going to buy one, but I want to make sure it's the best and fastest via actual results rather than what I'm told by Nvidia and others, or hearsay. It's common sense, but I guess not everybody has that.

I guess the simple answer is that people who preordered immediately don't care and are ignorant. They don't know it's the fastest and greatest any more than we do.

But we do; die size doesn't lie. If you want the latest and greatest and don't want to wait for round 2 of cards, you pre-order. DooKey hit the nail on the head. You, on the other hand...
 

sze5003

Lifer
Aug 18, 2012
But we do; die size doesn't lie. If you want the latest and greatest and don't want to wait for round 2 of cards, you pre-order. DooKey hit the nail on the head. You, on the other hand...
I like to spend my money logically; sure, die sizes don't lie, but give me freaking results. Would it have killed them to present this data at the event? Had they done this, I would have likely pre-ordered too.
 

PeterScott

Platinum Member
Jul 7, 2017
I like to spend my money logically; sure, die sizes don't lie, but give me freaking results. Would it have killed them to present this data at the event? Had they done this, I would have likely pre-ordered too.

Then you aren't a typical Titan buyer, and a $1200 early adopter card is not for you, so you wait.
 

Brahmzy

Senior member
Jul 27, 2004
Duh? WTF do you think happens when you buy technology?
It’s all obsolete before you get it. Obvious statements are obvious.
 

cmdrdredd

Lifer
Dec 12, 2001
Even if it is the best card you can get (sure, because there is no competition), those people probably already have a more than capable GPU, like me and many others who like to buy the newest techy things. I always buy new stuff when it comes out, but not if it doesn't offer me any benefits.

I'm probably going to buy one, but I want to make sure it's the best and fastest via actual results rather than what I'm told by Nvidia and others, or hearsay. It's common sense, but I guess not everybody has that.

I guess the simple answer is that people who preordered immediately don't care and are ignorant. They don't know it's the fastest and greatest any more than we do.

No, it will be the fastest card. No doubt whatsoever... what we do not know is by what margin, and whether the price justifies that difference. You were right to wait. Soon you'll be able to grab Asus, EVGA and other cards with better fan systems (even though this reference design is pretty good). I've always liked how fans on many cards these days can turn off at idle, and other features like that.
 

Innokentij

Senior member
Jan 14, 2014
I like to spend my money logically; sure, die sizes don't lie, but give me freaking results. Would it have killed them to present this data at the event? Had they done this, I would have likely pre-ordered too.

Benchmarks will be out September 14th; pre-order then and you get the next batch. Or you could have pre-ordered day 1 and just cancelled if you didn't like what you saw when the benchmarks came out, and gotten it day 1 with the rest of us.
 

sze5003

Lifer
Aug 18, 2012
Benchmarks will be out September 14th; pre-order then and you get the next batch. Or you could have pre-ordered day 1 and just cancelled if you didn't like what you saw when the benchmarks came out, and gotten it day 1 with the rest of us.
Yeah, that's my plan; I definitely want the next Ti. I even began looking into 4K monitors, but I may just stick with my 1440p G-Sync. You are correct that there is no penalty for preordering, except I dunno how Newegg handles cancellations. Also, at the time I had no idea Nvidia would put the squeeze on AIB partners with the FE clocks. I usually like getting the aftermarket models.
No, it will be the fastest card. No doubt whatsoever... what we do not know is by what margin, and whether the price justifies that difference. You were right to wait. Soon you'll be able to grab Asus, EVGA and other cards with better fan systems (even though this reference design is pretty good). I've always liked how fans on many cards these days can turn off at idle, and other features like that.
Yeah, I want aftermarket in the end. I had a 1070 EVGA FTW and it was great; sold it to someone on these forums early last year. Then I bought the Gigabyte Aorus OC 1080 Ti and had a lot of stability issues with it. Had some trouble returning it to Newegg, as they didn't take returns. They had me contact Gigabyte, and by the time Gigabyte got back to me and said that since I was within the return period I should just get another one, Newegg had sold out, so they refunded me instead. Finally ended up with the Asus Strix OC model, which has been great.
 

Innokentij

Senior member
Jan 14, 2014
Also, a thing I think we're seeing for the first time since the OG Titan is that we get a real Titan in the 2080 Ti, $50 cheaper than last gen. I suspect there will be no Titan this generation. If I am right, this is a steal. Disclaimer: this is wild speculation on my part; if you read into it and get triggered, please unhook your keyboard and don't reply.
 

PeterScott

Platinum Member
Jul 7, 2017
Also, a thing I think we're seeing for the first time since the OG Titan is that we get a real Titan in the 2080 Ti, $50 cheaper than last gen. I suspect there will be no Titan this generation. If I am right, this is a steal. Disclaimer: this is wild speculation on my part; if you read into it and get triggered, please unhook your keyboard and don't reply.

Many of us said the same thing. Titan is usually an early-adopter card for the biggest chip of the run. But there will be no bigger chip, and it is already here with early-adopter pricing. So a Titan really makes little sense this generation.
 

amenx

Diamond Member
Dec 17, 2004
The problem here is that he wasn't bashing their product. He stated something pretty straightforward and sound. To go and terminate his sponsorship is one thing... to be petty and ask for your goofy gear back is a whole other level. This is what happens when you have too many fratboi chads on your business/marketing team who think they can treat anyone however they like because their fee-fees got hurt. This is not how you run a business in today's age, where these locker-room-level stunts make it to the front page and cast a big negative shadow on you as a company.

I'll come right out and say it... I was in the midst of getting an application processed for one of their programs, and I'm sort of glad I didn't get it. I don't like the feeling of being censored or restricted, especially when the opinions/ideas are grounded and reasonable. As a result of observing this fiasco and others, I'm going to do everything I possibly can to have zero contracts with hardware vendors. I have, after all, bought all of my hardware on my own dime up until now, w/ zero support.

I'm currently having a discussion with other informed individuals on a different platform, and the consensus is the following:

They achieve this functionality via a hybrid overlay solution / hybrid pipeline.
Tensor cores and ray-trace cores are packed into the SM. The ray-trace cores have been placed where the prior double-precision floating-point logic was. They generate a BVH data structure and use it to guide traditional rasterization, and also use it to feed the ray-trace cores in parallel. The ray-trace cores calculate a series of intersections and produce a quite noisy image. This noisy image is 'overlaid' on the traditional rasterizer pipeline output, and then the tensor cores are used to do 'AI / meme-learning-based denoising': https://research.nvidia.com/sites/default/files/publications/dnn_denoise_author.pdf
[Image: SIGGRAPH 2017 AI denoiser figure from the linked paper]

The penalty hit comes from extending the graphics pipeline with a new stage: the tensor-core denoising pass. The ray-trace core calculations happen in parallel w/ the rasterizer pipeline, and the result is quite noisy and ugly. The real magic happens by fusing this with the rasterized image in the tensor cores. When not used to denoise the ray-tracing output, the tensor cores are repurposed to support DLSS.

There are still issues w/ pixel flicker and noise in the image. There are also issues with ghosting and 'hold-over' shadows when they don't do per-frame ray-trace rendering. They'll iron this out over time. The Star Wars demo had a 45 ms per-frame render time using the ray-trace cores, which is somewhere around 22 FPS. So you either decide you want higher-quality ray-trace results and lower FPS, or higher FPS and lower-quality / non-per-frame ray-trace results. I could imagine a slider for adjusting this.
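
To put that 45 ms figure in context, here's a trivial frame-time-to-FPS sketch (the 45 ms number is the only figure taken from the demo above; the rest is plain arithmetic):

Code:
# Frame time (ms) to frames per second -- generic arithmetic, nothing card-specific.
def fps_from_frame_time(frame_time_ms):
    return 1000.0 / frame_time_ms

print(fps_from_frame_time(45.0))  # ~22 FPS, the Star Wars demo figure quoted above
print(fps_from_frame_time(16.7))  # a ~60 FPS target needs roughly 16.7 ms per frame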

The reason there is a lack of details and benchmarks is that the performance is going to be all over the place, and the details of how this works are quite complicated. I consider this to be a beta-level dev board for gaming applications integrated into a 1080 Ti. The speedups will come from GDDR6, the transition from 14nm to 12nm, and the architectural changes they made to the caching structure to allow for a hybrid ray-trace/rasterizer pipeline. When you turn RTX off, the traditional GPU pipeline probably gains access to a larger cache space that would otherwise have been dedicated to the ray-trace pipeline. You will get a performance hit in FPS when you turn ray tracing on because the graphics pipeline has new stages for the 'AI denoising' and upsampling of the low-res ray-tracing results. You cannot use ray tracing w/o denoising it w/ the tensor cores. While the ray-trace op runs in parallel w/ the rasterizer pipeline, the post-processing denoising does not.
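
For anyone who wants the shape of that hybrid pipeline spelled out, here's a rough sketch in Python (none of these function names are a real Nvidia API; the stubs are made up purely to illustrate the ordering described above, with rasterization and RT intersection able to overlap and tensor-core denoising tacked on as an extra serial stage):

Code:
# Hypothetical per-frame flow for the hybrid raster + ray-trace pipeline described above.
# The stub functions are invented for illustration only -- not a real API.
def build_or_refit_bvh(scene):   return "bvh"
def rasterize(scene):            return "raster_image"
def trace_rays(scene, bvh):      return "noisy_rt_image"      # RT cores, overlaps with rasterization
def tensor_denoise(noisy):       return "denoised_rt_image"   # serial post-process on tensor cores
def composite(raster, rt):       return (raster, rt)

def render_frame(scene, rtx_enabled):
    bvh = build_or_refit_bvh(scene)               # BVH guides both paths
    raster_image = rasterize(scene)               # traditional pipeline
    if not rtx_enabled:
        return raster_image                       # RT/tensor stages skipped entirely
    noisy_rt = trace_rays(scene, bvh)             # can run alongside rasterization
    denoised_rt = tensor_denoise(noisy_rt)        # the added stage that costs FPS
    return composite(raster_image, denoised_rt)   # overlay the denoised RT result

print(render_frame("scene", rtx_enabled=True))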

I'm torn on whether or not to purchase one of these cards and dedicate resources to evaluating its functionality. I feel there will be some time before it's supported in Vulkan, and I don't feel CUDA 10 will be released immediately. I also feel there are a good number of things they're going to disable compared to the Quadro variants.

On pricing, it's simple... A 2080 costs as much as an entry-level Quadro now. Nvidia was pissed because people were using 1080 Ti FEs in data centers and rendering farms, so they changed the EULA to combat it. No one cared, so they got rid of the reference blower design to prohibit usage in a server environment and jacked the price up to Quadro levels. An entry-level Quadro RTX now costs $2,300. And the 2080 now hilariously costs the same as a Quadro P4000 entry-level card ($800). They couldn't manhandle a market that was rejecting their prior price premiums, so now they're jawboning it forcefully. This has the handprint of an over-aggressive business dev group all over it, and Jensen and the technical staff had better rein these jokers in before they severely harm Nvidia's brand and future success.

For the gamer, one must decide if it's worth investing in something that Nvidia will obsolete in 8 months with 7nm anyway. This is a first try at an architecture, and it's no doubt going to see huge revisions of the SM over time. Few games will support this, and it's very immature in its capabilities. For professional renderers, this is a godsend feature (as they aren't under real-time compute constraints except for convenience in preview). In final render, they can let this thing run for as long as they want to produce a more refined image, and it is much faster than CPUs. The problem comes down to the marketing, however. AMD can produce 400 megarays in its current architecture and has demoed this in its ProRender real-time ray tracing:
It looks just like the thing Nvidia is touting, and the idea that Nvidia was somehow able to do 10 gigarays vs 400 megarays in a tiny region of an SM seems to be a patent farce. What is more likely is that Nvidia is ridiculously quoting the upsampled results after they do denoising in the tensor cores. If this is the case, what balls they have on them.
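
For a sense of scale on those two figures, here's a quick rays-per-pixel-per-frame calculation (my own arithmetic; the only inputs taken from above are the 10 gigaray and 400 megaray numbers, while the resolutions and the 60 FPS target are assumptions):

Code:
# Rays available per pixel per frame for a given rays/second budget.
def rays_per_pixel(rays_per_second, width, height, fps):
    return rays_per_second / (width * height * fps)

for label, budget in [("10 Grays/s (Nvidia's quoted figure)", 10e9),
                      ("400 Mrays/s (AMD ProRender figure)", 400e6)]:
    print(label,
          "1080p60: %.1f rays/px," % rays_per_pixel(budget, 1920, 1080, 60),
          "4K60: %.1f rays/px" % rays_per_pixel(budget, 3840, 2160, 60))
# Roughly 80 vs 3.2 rays/pixel at 1080p60, and 20 vs 0.8 at 4K60 -- low enough either
# way that heavy denoising is mandatory, which is why the quoted numbers deserve scrutiny.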

Overall, I'm disgusted with the lack of decency Nvidia has been exhibiting as of late.
You have the Partner Program fiasco.
You have this: https://www.hardocp.com/news/2018/0...ship_for_stance_against_preordering_hardware/
You have this: https://www.techspot.com/news/72545-nvidia-geforce-eula-change-prevents-data-centers-using.html (I know this is why they nixed blowers on the GeForce cards.)
You have the ridiculous pricing.
You have this ridiculous blackout regarding details.
And God knows what else is lurking under these lofty specs.

If it turns out that the ray-trace cores only do megarays just like AMD's GPUs, and their gigaray quote is from the tensor-core upsample, they will have officially lost all decency in my book. So I'm waiting it out. If this gigaray nonsense is a farce, there's no reason to go w/ them over AMD, and with AMD opening up their software stack and having the same compatibility with Vulkan, it's AMD I will invest resources with. Lastly, you can already do ray tracing on current Nvidia GPUs; it's just slower. For dev purposes, I'm going to focus on doing just that with Pascal. For gaming, I use Maxwell and have no performance issues. I'll probably upgrade my gaming rig in 2020, when this idiocy comes back down to earth.
Holy Jesus, what a read... I don't feel like upgrading my 1070 any longer. I was considering either an RTX or a marked-down 1080 Ti. I may hold off for another year or so, until the dust settles, and see what AMD brings to the table and what Nvidia can or cannot do to improve things.
 

crisium

Platinum Member
Aug 19, 2001
Except we don't; we compare what's on the market currently and competing for your money today. What do I care how much a 1080 cost two and a half years ago? It would be sad if a new-generation card couldn't beat its predecessor's performance for the price even if you took it back in a time machine to that card's launch. That's about as low a bar as you could possibly present.

The 2080 isn't offering a perf/$ increase in the present; its predecessors did.

FWIW, once benchmarks are available I will compare the 2080 in PP$ against both the old-gen launch prices and the official MSRP price cuts.

So we'll look at:
$550 980 compared to both $650 780 (launch) and $500 780 (cut), and $700 780 Ti
$700 1080 compared to $550 980 (launch) and $500 980 (cut), and $650 980 Ti
$800 2080 compared to $700 1080 (launch) and $500 1080 (cut), and $700 1080 Ti

I expect to see the least improvement in performance-per-dollar with the 2080 in aggregate benchmarks. The 1080 was pretty disappointing there already at launch, tbh, but I think it could be dethroned next month (note: the 1080 didn't regress in PP$ compared to any of those three predecessor prices). Ideally my prediction is wrong, of course, as consumers will benefit if the $800 2080 is a good buy and improves PP$. But I will be quite pleasantly surprised if the 2080 is more than 60% faster than the 1080.
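
A minimal skeleton of that comparison (the prices are the ones listed above; the performance entries are deliberately left empty, to be filled in with aggregate benchmark scores once reviews land):

Code:
# Perf-per-dollar skeleton. Prices are the launch/cut prices listed above;
# perf values are placeholders to be filled in from aggregate benchmarks.
prices = {
    "780 (launch)": 650, "780 (cut)": 500, "780 Ti": 700,
    "980 (launch)": 550, "980 (cut)": 500, "980 Ti": 650,
    "1080 (launch)": 700, "1080 (cut)": 500, "1080 Ti": 700,
    "2080 (launch)": 800,
}
perf_index = {name: None for name in prices}  # e.g. relative aggregate FPS, TBD

for name, price in prices.items():
    perf = perf_index[name]
    ppd = perf / price if perf is not None else None
    print(name, "perf/$ =", "TBD" if ppd is None else round(ppd, 3))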
 

PeterScott

Platinum Member
Jul 7, 2017
I expect to see the least improvement in performance-per-dollar with the 2080 in aggregate benchmarks. The 1080 was pretty disappointing there already at launch, tbh, but I think it could be dethroned next month (note: the 1080 didn't regress in PP$ compared to any of those three predecessor prices). Ideally my prediction is wrong, of course, as consumers will benefit if the $800 2080 is a good buy and improves PP$. But I will be quite pleasantly surprised if the 2080 is more than 60% faster than the 1080.

That would be optimistic, since Nvidia's own slides only show about 50% if you exclude DLSS.

IMO this release won't regress perf/$ on older titles, but it won't significantly advance it either.

It's a bit of a hiccup on perf/$ because of the large amount of die space devoted to forward-looking features.

But this should be a one time blip.

I have vague memories of a similar hiccup when the first shader-oriented cards were introduced. Performance/$ on older titles barely moved for that one release. By the next release, progress resumed and shader usage was heavier.

I can definitely see people with a 1080 Ti sitting out this release, or people scooping up a 1080 Ti if some great deals materialize.

Really, outside of the 2080 Ti early-adopter card, I would wait for reviews before purchasing anything right now, even a good deal on a 1080 Ti, without seeing what the new cards can do.

IMO the long lead between the announcement and the review embargo lifting (Sept 14?) is terrible; it introduces paralysis. There is not enough info to order a new card or pick up a good deal on an older card.
 

DooKey

Golden Member
Nov 9, 2005
Even if it is the best card you can get (sure, because there is no competition), those people probably already have a more than capable GPU, like me and many others who like to buy the newest techy things. I always buy new stuff when it comes out, but not if it doesn't offer me any benefits.

I'm probably going to buy one, but I want to make sure it's the best and fastest via actual results rather than what I'm told by Nvidia and others, or hearsay. It's common sense, but I guess not everybody has that.

I guess the simple answer is that people who preordered immediately don't care and are ignorant. They don't know it's the fastest and greatest any more than we do.

I KNOW it's the fastest card you can get for $1200. The specs don't lie badly enough to make anyone with a brain believe this is going to be slower than a 1080 Ti.
 

Malogeek

Golden Member
Mar 5, 2017
The fastest card (in normal rasterized games) is really an aside. You're buying into the RT and Tensor cores; that's where the $$$ are. They're almost half the die space. So in the end, you're buying into the promise of those cores being useful for something in games going forward, games you'll actually play.

Or you just don't care and have the disposable income to get whatever the current flagship is, in which case the opinion is irrelevant here. It can't be argued.
 

sze5003

Lifer
Aug 18, 2012
That's right, and you are also buying into the fact that RTX will be something to be fleshed out later. By that time we will need the next card. I'm excited for RTX, as it will make games more enjoyable for sure, but it's not something simple and it needs time to be implemented to its full potential.

I don't think anyone disagrees that the 2080 Ti is faster/better than the 1080 Ti. I'm just curious by how much, and what trying to run RTX enabled at something like 1440p will look like. Which is why I'm waiting for reviews. That will settle whether I keep my 1440p display or go up to 4K.
 

Kenmitch

Diamond Member
Oct 10, 1999

That was good.

Tom's did update the article today. The article originally sounded like a drunken rage.

Editor's Note (8/25): I've made a few changes to the original copy of this story to clarify and clearly express my view that reading independent reviews of any new product (especially a pricey GPU) is generally a good idea. I've also added the disclaimer above to make it obvious to everyone that this is an opinion piece (one of a pair of articles taking different sides on a hot-button issue), not official buying advice from the entire team at Tom's Hardware.
 

alcoholbob

Diamond Member
May 24, 2005


No context perf numbers of GTX 2080...

[Attached image: GTX 2080 performance chart]


I can't imagine it was benched at Ultra settings, though; that would imply it's 35% faster than a 1080 Ti...
 

Qwertilot

Golden Member
Nov 28, 2013
Probably was, but using DLSS.

If you look at the 1080 vs 2080 gaps out there, turning that on moves it from ~1080 Ti performance to much faster.

The question is then how often games will support DLSS and how well it works in a graphical sense when used.
 

piesquared

Golden Member
Oct 16, 2006
Given the die size of the thing, it damn well better be faster than the Ti, without any special work needed by developers and not under special conditions, or it is an epic fail. By the looks of it, there was just a little renaming trickery: the 2080 is the replacement for the 1080 Ti, and the 2080 Ti the replacement for the Titan or whatever. We'll see, but that could very well be their strategy to make it appear there was a big increase in performance at the high end.
 

TheF34RChannel

Senior member
May 18, 2017
Also, a thing I think we're seeing for the first time since the OG Titan is that we get a real Titan in the 2080 Ti, $50 cheaper than last gen. I suspect there will be no Titan this generation. If I am right, this is a steal. Disclaimer: this is wild speculation on my part; if you read into it and get triggered, please unhook your keyboard and don't reply.

They may churn out a Turing Titan for prosumers. Or wait for 7nm; I can see them doing that as well.

IMO the long lead between the announcement and the review embargo lifting (Sept 14?) is terrible; it introduces paralysis. There is not enough info to order a new card or pick up a good deal on an older card.

Don't you just hate paper launches? I certainly do. As you said, the limbo it creates isn't helping anyone. They should have been available on launch day.

Side note: I've never been a Titan buyer. Too expensive for my taste and, in the past, too short-lived with a Ti variant right behind it. However, there doesn't seem to be anything coming between the 2080 and the Ti before 7nm.

I'm more of a Ti guy (without a Ti at the moment, ha ha). That would put me in the 2080 market; however, I find the gap between it and the Ti too large, or rather, the former specced too low compared to the big daddy. There may not be enough there to entice me (come reviews).

I am particularly interested in DLSS rather than RT, as I think the latter will be much better executed on the next generation or the one after that.
 

PeterScott

Platinum Member
Jul 7, 2017
Don't you just hate paper launches? I certainly do. As you said, the limbo it creates isn't helping anyone. They should have been available on launch day.

Side note: I've never been a Titan buyer. Too expensive for my taste and, in the past, too short-lived with a Ti variant right behind it. However, there doesn't seem to be anything coming between the 2080 and the Ti before 7nm.

I'm more of a Ti guy (without a Ti at the moment, ha ha). That would put me in the 2080 market; however, I find the gap between it and the Ti too large, or rather, the former specced too low compared to the big daddy. There may not be enough there to entice me (come reviews).

I am particularly interested in DLSS rather than RT, as I think the latter will be much better executed on the next generation or the one after that.

I am OK with a one-month delay until sales, as long as they put cards in the hands of reviewers early and lift the review embargo quickly. Then there would be enough info to decide between ordering one of the new ones or scooping up a deal on the last gen.

DLSS is also the thing I'm most interested in seeing detailed reviews on, with image-quality comparisons to other AA methods. DLSS is negligible developer work to implement, so it should get widespread usage very quickly if it is good. And if it is really good, it could be game-changing for AA users.
 

majord

Senior member
Jul 26, 2015
The problem here is that he wasn't bashing their product. He stated something pretty straightforward and sound. To go and terminate his sponsorship is one thing... to be petty and ask for your goofy gear back is a whole other level. This is what happens when you have too many fratboi chads on your business/marketing team who think they can treat anyone however they like because their fee-fees got hurt. This is not how you run a business in today's age, where these locker-room-level stunts make it to the front page and cast a big negative shadow on you as a company.

I'll come right out and say it... I was in the midst of getting an application processed for one of their programs, and I'm sort of glad I didn't get it. I don't like the feeling of being censored or restricted, especially when the opinions/ideas are grounded and reasonable. As a result of observing this fiasco and others, I'm going to do everything I possibly can to have zero contracts with hardware vendors. I have, after all, bought all of my hardware on my own dime up until now, w/ zero support.

I'm currently having a discussion with other informed individuals on a different platform, and the consensus is the following:

They achieve this functionality via a hybrid overlay solution / hybrid pipeline.
Tensor cores and ray-trace cores are packed into the SM. The ray-trace cores have been placed where the prior double-precision floating-point logic was. They generate a BVH data structure and use it to guide traditional rasterization, and also use it to feed the ray-trace cores in parallel. The ray-trace cores calculate a series of intersections and produce a quite noisy image. This noisy image is 'overlaid' on the traditional rasterizer pipeline output, and then the tensor cores are used to do 'AI / meme-learning-based denoising': https://research.nvidia.com/sites/default/files/publications/dnn_denoise_author.pdf
[Image: SIGGRAPH 2017 AI denoiser figure from the linked paper]

The penalty hit comes from extending the graphics pipeline with a new stage: the tensor-core denoising pass. The ray-trace core calculations happen in parallel w/ the rasterizer pipeline, and the result is quite noisy and ugly. The real magic happens by fusing this with the rasterized image in the tensor cores. When not used to denoise the ray-tracing output, the tensor cores are repurposed to support DLSS.

There are still issues w/ pixel flicker and noise in the image. There are also issues with ghosting and 'hold-over' shadows when they don't do per-frame ray-trace rendering. They'll iron this out over time. The Star Wars demo had a 45 ms per-frame render time using the ray-trace cores, which is somewhere around 22 FPS. So you either decide you want higher-quality ray-trace results and lower FPS, or higher FPS and lower-quality / non-per-frame ray-trace results. I could imagine a slider for adjusting this.

The reason there is a lack of details and benchmarks is that the performance is going to be all over the place, and the details of how this works are quite complicated. I consider this to be a beta-level dev board for gaming applications integrated into a 1080 Ti. The speedups will come from GDDR6, the transition from 14nm to 12nm, and the architectural changes they made to the caching structure to allow for a hybrid ray-trace/rasterizer pipeline. When you turn RTX off, the traditional GPU pipeline probably gains access to a larger cache space that would otherwise have been dedicated to the ray-trace pipeline. You will get a performance hit in FPS when you turn ray tracing on because the graphics pipeline has new stages for the 'AI denoising' and upsampling of the low-res ray-tracing results. You cannot use ray tracing w/o denoising it w/ the tensor cores. While the ray-trace op runs in parallel w/ the rasterizer pipeline, the post-processing denoising does not.

I'm torn on whether or not to purchase one of these cards and dedicate resources to evaluating its functionality. I feel there will be some time before it's supported in Vulkan, and I don't feel CUDA 10 will be released immediately. I also feel there are a good number of things they're going to disable compared to the Quadro variants.

On pricing, it's simple... A 2080 costs as much as an entry-level Quadro now. Nvidia was pissed because people were using 1080 Ti FEs in data centers and rendering farms, so they changed the EULA to combat it. No one cared, so they got rid of the reference blower design to prohibit usage in a server environment and jacked the price up to Quadro levels. An entry-level Quadro RTX now costs $2,300. And the 2080 now hilariously costs the same as a Quadro P4000 entry-level card ($800). They couldn't manhandle a market that was rejecting their prior price premiums, so now they're jawboning it forcefully. This has the handprint of an over-aggressive business dev group all over it, and Jensen and the technical staff had better rein these jokers in before they severely harm Nvidia's brand and future success.

For the gamer, one must decide if it's worth investing in something that Nvidia will obsolete in 8 months with 7nm anyway. This is a first try at an architecture, and it's no doubt going to see huge revisions of the SM over time. Few games will support this, and it's very immature in its capabilities. For professional renderers, this is a godsend feature (as they aren't under real-time compute constraints except for convenience in preview). In final render, they can let this thing run for as long as they want to produce a more refined image, and it is much faster than CPUs. The problem comes down to the marketing, however. AMD can produce 400 megarays in its current architecture and has demoed this in its ProRender real-time ray tracing:
It looks just like the thing Nvidia is touting, and the idea that Nvidia was somehow able to do 10 gigarays vs 400 megarays in a tiny region of an SM seems to be a patent farce. What is more likely is that Nvidia is ridiculously quoting the upsampled results after they do denoising in the tensor cores. If this is the case, what balls they have on them.

Overall, I'm disgusted with the lack of decency Nvidia has been exhibiting as of late.
You have the Partner Program fiasco.
You have this: https://www.hardocp.com/news/2018/0...ship_for_stance_against_preordering_hardware/
You have this: https://www.techspot.com/news/72545-nvidia-geforce-eula-change-prevents-data-centers-using.html (I know this is why they nixed blowers on the GeForce cards.)
You have the ridiculous pricing.
You have this ridiculous blackout regarding details.
And God knows what else is lurking under these lofty specs.

If it turns out that the ray-trace cores only do megarays just like AMD's GPUs, and their gigaray quote is from the tensor-core upsample, they will have officially lost all decency in my book. So I'm waiting it out. If this gigaray nonsense is a farce, there's no reason to go w/ them over AMD, and with AMD opening up their software stack and having the same compatibility with Vulkan, it's AMD I will invest resources with. Lastly, you can already do ray tracing on current Nvidia GPUs; it's just slower. For dev purposes, I'm going to focus on doing just that with Pascal. For gaming, I use Maxwell and have no performance issues. I'll probably upgrade my gaming rig in 2020, when this idiocy comes back down to earth.


Good read, thanks.

Apologies if it's already been discussed / taken into account, but the other thing I guess to consider is that these GPUs are power-limited as always, and as such, using the RT/tensor cores for this takes power budget away from the traditional CUDA cores doing rasterization. tl;dr (I think) you'd be looking at lower clock speeds when operating in this manner, which would imply you could never theoretically achieve the same FPS.
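
To make that power-budget point concrete, here's a toy model (entirely my own assumptions: a 260 W board power limit picked as a plausible figure, and the rough rule of thumb that dynamic power scales with clock roughly as f^3, so this is a ballpark illustration rather than a measurement):

Code:
# Toy model: with a fixed board power limit, watts spent on RT/tensor work are
# unavailable to the SMs, which then have to clock lower. Assumes power ~ f^3
# (frequency times voltage squared, with voltage roughly tracking frequency) --
# a rule of thumb only, and the 260 W limit is an assumed figure, not a spec.
BOARD_POWER_W = 260.0

def sm_clock_scale(rt_tensor_watts):
    remaining = BOARD_POWER_W - rt_tensor_watts
    return (remaining / BOARD_POWER_W) ** (1.0 / 3.0)

for watts in (0, 30, 60, 90):
    print("RT/tensor load %3d W -> SM clocks ~%.0f%% of max" % (watts, 100 * sm_clock_scale(watts)))
# e.g. diverting 60 W would imply clocks around 92% of peak under this toy model.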
 