Nvidia RTX 2080 Ti / 2080 (2070 review is now live!) information thread: reviews and prices


n0x1ous

Platinum Member
Sep 9, 2010
2,572
248
106
Long time member of the forums here. I have stalked them way before I started participating.

I have actively avoided buying 4K TVs for my house and instead have 1080p ones. They require less hardware to stream, use less bandwidth, and look indistinguishable at my viewing distance.

I have (had) an overclocked 2K display that I bought from Korea because it was an awesome deal for the price, and I didn't want to spend huge amounts on a top-end, barely-4K-capable video card every year.

Unless I can double my income to at least 200k a year, I fail to see how spending more than this is worth it. I would rather turn down the resolution and put an extra 5k a year into investments, take a holiday, etc.

Maybe you earn 250k plus a year and the price difference is not a big deal. Maybe you don't like holidays or have kids. Maybe you earn minimum wage and live with your parents so you can spend money on PC parts. I don't really care what category you fit in, but please don't tell me I should get out of this forum just because I refuse to spend silly money on a hobby.

Wasn’t directed at you. You can make value choices and still be interested in the tech. You aren’t telling everyone who does value the top-end stuff that they're idiots duped by marketing, like the person I was responding to.
Cheers
 
  • Like
Reactions: Elfear

TestKing123

Senior member
Sep 9, 2007
204
15
81
[H]'s replacement 2080 Ti (Samsung RAM) failed. That's two out of three cards: https://www.hardocp.com/article/2018/11/21/rtx_2080_ti_fe_escapes_testing_by_dying_after_8_hours/


Ray tracing is fake as well. It's an approximation of light projected onto a device with discrete pixels. Real life doesn't work on pixels. What's your point exactly?

My point is that you don't have a clue what you're talking about. You're using an example of a projection method used for mirror effects 20 years ago as if it's comparable to real time ray tracing. The fact that you don't know the difference (or why it's no longer used since games began using real time dynamic lighting) is just as laughable.


And where'd the bridge go with RTX "Ultra"? https://youtu.be/jaUP4LucmZM?t=929

What does this have to do with you not understanding the difference between that Unreal 1 screenshot and real time ray tracing?

Unreal 1 isn't open source. Oldunreal has exclusive rights to work on unofficial patches but they're not allowed to distribute the code.

You know you can download any version of Unreal Engine to work with, right? Including versions going back to Unreal/Unreal Tournament 1? If you did, you'd understand exactly why that comparison you did and the remark you made were pure ignorance.

Better yet, why don't we see fully mirrored projections IN ANY UNREAL ENGINE GAME, like the one in Unreal 1, since.... Unreal 1? You think it MIGHT have something to do with not being able to actually reproject dynamic lighting in real time? Which was MY point all along: it's easy to reproject a camera onto a surface like a wall, floor or ceiling, since that's all it is, a projection with ZERO calculation for real time lighting, because real time lighting didn't exist at the time. Which is why you never see true "mirrors" in games since real time dynamic lighting and shadows became common; any "mirror" you see is some clever variation of a projection or portal placement.

None of those projection methods can be used for, say, reflections off dynamic surfaces, curved surfaces, etc., in the same way ray tracing allows.
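To make that concrete, here's everything the old trick actually computes; a minimal sketch (names are hypothetical, it's just the linear algebra), with no light transport anywhere:

```python
import numpy as np

def mirror_matrix(normal, point_on_plane):
    """4x4 transform that reflects world space across a mirror plane.
    This is the whole 'mirror': re-render the scene with the camera
    pushed through this matrix and paste the result onto the mirror
    polygon. No lighting is recomputed anywhere."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    d = -np.dot(n, point_on_plane)        # plane equation: n.p + d = 0
    m = np.eye(4)
    m[:3, :3] -= 2.0 * np.outer(n, n)     # flip the component along n
    m[:3, 3] = -2.0 * d * n               # account for the plane offset
    return m

# Mirror on the floor plane y = 0: a camera at height 3 maps to height -3.
M = mirror_matrix([0.0, 1.0, 0.0], [0.0, 0.0, 0.0])
print(M @ np.array([2.0, 3.0, 5.0, 1.0]))   # -> [ 2. -3.  5.  1.]
```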

No, I'm saying RTX is the most garbage implementation of ray tracing possible, and looks worse than approximated reflections done 20 years ago on a $99 Voodoo3. A noisy ugly slideshow running on a $1200 graphics card.

Your ignorance is glaring; see my previous comment.

Tell you what, why don't you download Unreal Engine 4 and give us a demo using that 20 year old projection technology on one of the built-in demo levels. The code base for that reprojection is still there if you want to use it. I'd sure like to see you do it yourself on any of the demo levels with real time dynamic lighting. Something that big companies, even Epic themselves, couldn't do.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,997
126
My point is that you don't have a clue what you're talking about. You're using an example of a projection method used for mirror effects 20 years ago as if it's comparable to real time ray tracing.
It's comparable because it too reflects pixels. Reflection is nothing more than a copy of certain pixels rendered in the right place.

The fact that you don't know the difference (or why it's no longer used since games began using real time dynamic lighting) is just as laughable.
You mean like Doom 3?

[image: Doom 3 mirror screenshot]

Doom 3 had a real-time unified lighting and shadowing system. Are you claiming otherwise?

Better yet, why don't we see fully mirrored projections IN ANY UNREAL ENGINE GAME, like the one in Unreal 1, since.... Unreal 1? You think it MIGHT have something to do with not being able to actually reproject dynamic lighting in real time? Which was MY point all along: it's easy to reproject a camera onto a surface like a wall, floor or ceiling, since that's all it is, a projection with ZERO calculation for real time lighting, because real time lighting didn't exist at the time.
Duke Nukem Forever (2011): https://www.youtube.com/watch?v=E5O0BnVzmEc
Life is Strange (2015) https://www.playstationtrophies.org/game/life-is-strange/trophy/104088-Pinholed.html

Both are newer Unreal engine games. Are you claiming they have no dynamic lighting?

Which is why you never see true "mirrors" in games since real time dynamic lighting and shadows became common; any "mirror" you see is some clever variation of a projection or portal placement.
I never said it was "true mirrors"; I said it was a really good approximation that runs fast and looks good.

You're the one who repeatedly claimed games with dynamic lights don't have reflections like Unreal 1. The three examples above proved you wrong.

Much better than the ugly noisy mud puddles that $1200 Turding - Space Invaders Edition(tm) produces @ 75% performance loss.

And the unplayable DX12 stuttering which has never been resolved in numerous titles, and never will be. A fact repeatedly ignored by people who pimp RTX.

Your ignorance is glaring; see my previous comment.
Cut the personal insults or you'll be reported for trolling.
 
Last edited:

TestKing123

Senior member
Sep 9, 2007
204
15
81
It's comparable because it too reflects pixels. Reflection is nothing more than a copy of certain pixels rendered in the right place.


You mean like Doom 3?


Doom 3 had a real-time unified lighting and shadowing system. Are you claiming otherwise?


Duke Nukem Forever (2011): https://www.youtube.com/watch?v=E5O0BnVzmEc
Life is Strange (2015) https://www.playstationtrophies.org/game/life-is-strange/trophy/104088-Pinholed.html

Both are newer Unreal engine games. Are you claiming they have no dynamic lighting?


I never said it was "true mirrors"; I said it was a really good approximation that runs fast and looks good.

You're the one who repeatedly claimed games with dynamic lights don't have reflections like Unreal 1. The three examples above proved you wrong.

Much better than the ugly noisy mud puddles that $1200 Turding - Space Invaders Edition(tm) produces @ 75% performance loss.

And the unplayable DX12 stuttering which has never been resolved in numerous titles, and never will be. A fact repeatedly ignored by people who pimp RTX.


Cut the personal insults or you'll be reported for trolling.


Sorry, but you have to be completely daft to think the mirror tricks employed in those game screenshots are comparable to any form of real time ray tracing. The fact that you keep stating "it too reflects pixels" when NO REFLECTION CALCULATION is going on reinforces this baffling ignorance and makes me wonder if you're the one actually trolling here.

The DNF screenshot reinforces exactly what I said before. That is not a real time reflection, it's a projection. In fact, there isn't even any dynamic lighting in that scene; everything is baked, which makes reprojection for mirrors all the easier. Let me give you a quick lesson on mirrors and Unreal Engine, taken from the documentation itself.

"First you need to make your material, which should look like this:

Now if you try and put this material on an object and toss it in your world, it won't do reflections properly. This is because in video games it would be very costly to try and do reflections by ray-tracing. So instead we use what are called "Reflection Capture" elements. A "Reflection Capture" element is something that captures an image of the screen so that you can use it in your reflections, and allows the game to run much smoother while still using reflections in real time. Unreal Engine 4 has a very simple Reflection Capture system."

Do you understand that? Let me repeat: do you understand that? You need to take a static image of the surroundings to use as a basis for the reflection element, since the engine can't actually do it in real time (ray tracing).
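To underline the distinction, here's a toy sketch of what a capture lookup amounts to (hypothetical names, and a deliberately crude six-entry stand-in for a cubemap): the reflection vector indexes a static, pre-rendered image, so nothing in the live scene is ever traced:

```python
import numpy as np

def reflect(view_dir, normal):
    """Mirror-reflection direction: R = D - 2(D.N)N."""
    d = np.asarray(view_dir, dtype=float)
    n = np.asarray(normal, dtype=float)
    return d - 2.0 * np.dot(d, n) * n

def sample_capture(view_dir, normal, captured_env):
    """Look the reflection direction up in a pre-rendered environment.
    Dynamic objects never show up, because the capture was taken once,
    not traced against the scene every frame."""
    r = reflect(view_dir, normal)
    axis = int(np.argmax(np.abs(r)))                  # dominant axis
    face = ('x', 'y', 'z')[axis] + ('+' if r[axis] > 0 else '-')
    return captured_env[face]

# A six-pixel stand-in for a reflection capture.
env = {'x+': 'wall', 'x-': 'wall', 'y+': 'skylight',
       'y-': 'floor', 'z+': 'window', 'z-': 'door'}
print(sample_capture([0, -1, 0], [0, 1, 0], env))     # -> 'skylight'
```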

Better yet, I'm done doing your research. Cite me one reference where actual reflections are calculated in real time on any iteration of Unreal Engine.

Also, I like your Doom 3 reference. Why? Because that cinematic jump-scare scene required a complex hidden-room build to do all the effects required for it. That's why that is the only section in Doom 3 that has mirrors. In fact, this is a common tactic used in various recent games to mimic a "reflection". Here's an explanation:

"Game developers have used all kinds of wild tricks over the years to simulate stuff like this. In games with mirrors, developers have created entire rooms behind the mirrors, in which a double is rendered and moves with the same inputs that our character does, just flipped, so that it looks like a reflection, but really it’s a mindless double, like some horrifying Twilight Zone horror movie nightmare.


With ray tracing, a lot of this goes away. You tell that table in the room how much light to absorb and reflect. You make the glass a reflective surface. When you move up to the window, you might be able to see your character if there’s enough backlighting."

https://www.technobuffalo.com/2018/09/06/why-ray-tracing-in-games-is-huge/
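The hidden-room trick in that quote is even simpler than it sounds: the "reflection" is a second actor whose transform is flipped across the mirror plane every frame. A sketch with hypothetical names, assuming the mirror sits on the plane x = mirror_x:

```python
def update_mirror_double(player_pos, player_facing, mirror_x):
    """Doom 3-style trick: the 'reflection' is a real actor in a duplicate
    room behind the glass, mirrored across the plane x = mirror_x. It only
    works because the level was hand-built with that second room."""
    px, py, pz = player_pos
    fx, fy, fz = player_facing
    double_pos = (2 * mirror_x - px, py, pz)   # flip position across the plane
    double_facing = (-fx, fy, fz)              # flip facing along the normal
    return double_pos, double_facing

# Player at x=3 facing the mirror at x=5: the double stands at x=7, facing back.
print(update_mirror_double((3, 0, 0), (1, 0, 0), 5))
# -> ((7, 0, 0), (-1, 0, 0))
```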

Listen, it's one thing to try and dismiss RTX, but using these examples is not the way to do it, because they're fake effects and very limited in their application. None of these projection methods can reflect light off a vase, for example. So yeah, when you bring up "20 year old tech" that "can do it" when it clearly can't, I'm gonna call you out on it.

In fact, go ahead and make a thread on an Unreal Engine, Unity, or any other engine forum and state that these mirror effects are just as good as real time ray tracing and that old Pentium hardware has been doing them for 20 years. Let's see the responses you get.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
Ray tracing: the technology nobody asked for. For the price nobody wants to pay.
I think most graphics people have been wanting real time ray tracing forever. You might not have asked for it directly, but I bet you've wanted Hollywood animated-movie visuals in your games - which requires ray tracing.

Try not to get your hate of Nvidia's latest cards mixed up with what is the next big step forward for real time graphics.
 
  • Like
Reactions: godihatework
Mar 11, 2004
23,356
5,782
146
Sorry, but you have to be completely daft to think the mirror tricks employed in those game screenshots are comparable to any form of real time ray tracing. The fact that you keep stating "it too reflects pixels" when NO REFLECTION CALCULATION is going on reinforces this baffling ignorance and makes me wonder if you're the one actually trolling here.

The DNF screenshot reinforces exactly what I said before. That is not a real time reflection, it's a projection. In fact, there isn't even any dynamic lighting in that scene; everything is baked, which makes reprojection for mirrors all the easier. Let me give you a quick lesson on mirrors and Unreal Engine, taken from the documentation itself.

"First you need to make your material, which should look like this:

Now if you try and put this material on an object and toss it in your world, it won't do reflections properly. This is because in video games it would be very costly to try and do reflections by ray-tracing. So instead we use what are called "Reflection Capture" elements. A "Reflection Capture" element is something that captures an image of the screen so that you can use it in your reflections, and allows the game to run much smoother while still using reflections in real time. Unreal Engine 4 has a very simple Reflection Capture system."

Do you understand that? Let me repeat: do you understand that? You need to take a static image of the surroundings to use as a basis for the reflection element, since the engine can't actually do it in real time (ray tracing).

Better yet, I'm done doing your research. Cite me one reference where actual reflections are calculated in real time on any iteration of Unreal Engine.

Also, I like your Doom 3 reference. Why? Because that cinematic jump-scare scene required a complex hidden-room build to do all the effects required for it. That's why that is the only section in Doom 3 that has mirrors. In fact, this is a common tactic used in various recent games to mimic a "reflection". Here's an explanation:

"Game developers have used all kinds of wild tricks over the years to simulate stuff like this. In games with mirrors, developers have created entire rooms behind the mirrors, in which a double is rendered and moves with the same inputs that our character does, just flipped, so that it looks like a reflection, but really it’s a mindless double, like some horrifying Twilight Zone horror movie nightmare.


With ray tracing, a lot of this goes away. You tell that table in the room how much light to absorb and reflect. You make the glass a reflective surface. When you move up to the window, you might be able to see your character if there’s enough backlighting."

https://www.technobuffalo.com/2018/09/06/why-ray-tracing-in-games-is-huge/

Listen, it's one thing to try and dismiss RTX, but using these examples is not the way to do it, because they're fake effects and very limited in their application. None of these projection methods can reflect light off a vase, for example. So yeah, when you bring up "20 year old tech" that "can do it" when it clearly can't, I'm gonna call you out on it.

In fact, go ahead and make a thread on an Unreal Engine, Unity, or any other engine forum and state that these mirror effects are just as good as real time ray tracing and that old Pentium hardware has been doing them for 20 years. Let's see the responses you get.

And yet again you're clearly, intentionally refusing to understand his point, which is that, with tricks, developers have been able to offer good enough approximations of a lot of effects without killing performance or requiring specialized hardware.

He never said any of those things, and he already explicitly told you exactly that, so your whole diatribe was pointless. He said the tricks they used were a good enough approximation, that they've been able to do it for years already, and that it didn't kill performance. So far, ray tracing hasn't delivered basically any of the magical graphics effects people keep promising it can in games. On top of that, just to enable this half-baked version, it requires exceptionally expensive video cards, and even then the performance is atrocious (so if you want to actually play the games, you're going to be disabling this feature, doubly so since every review has said that unless you basically stop to smell the roses, you'll rarely if ever even notice the ray-tracing stuff).

I'm going to make a prediction: ray tracing in contemporary games on consumer hardware is going to be a flop. There won't be enough real support for it, and the cost of the hardware to do it well will be too high for too little benefit, so gamers will shun it. By the time the support and hardware are there (I give it a decade), I think game streaming will have become the de facto gaming setup. And I think that is where Nvidia should have been pushing this anyway: push it on the developer side, and try to sell their Tesla rendering boxes to AAA developers/publishers as a way to entice gamers to game-streaming services. The big publishers have the money for the hardware, and the developers would be able to build engines specifically for that (so designed from the ground up for ray-tracing support). I actually think that would be better for all involved. Considering Nvidia already wants them to do "deep learning" inferencing that way, they should have just pushed the ray-tracing rendering there as well.

I mean, if I were Nvidia, I'd go to Blizzard and say, "Hey, you know those pre-rendered cutscenes for WoW? How about WoW 2 looks like those? Since you're already so focused on server-side control, it's a natural fit, and this way the monthly subscription pays for the hardware too, so you can sell it to gamers as free major hardware upgrades without them having to spend any more than the normal subscription."

I find it odd how some people are really trying to claim that ray-tracing will actually speed up development, and even make games less buggy. Lighting is typically not a major source of bugs in games (at least not ones that anyone I know complains about, unless it's completely screwed up, like in parts of, I think, Aliens: Colonial Marines), and if anything, this supposed hyper-realism will actually shine a spotlight on the glaring deficiencies in other aspects of the games. And since the framerates will be low, you'll basically be forced to stop and smell the roses, further highlighting the visual issues.

Personally, I think art direction is the single biggest factor in a game's visuals, and that's not only not going to change with ray-tracing, it's actually going to become a bigger issue, since ray-tracing will highlight the art assets even more. And from what I've gathered, that's already the biggest problem with game development time: it takes a long time to create all the assets. That was an area Nintendo singled out as a major issue for them; because of the assets needed for Full HD and higher, development times increased. That's why they started lagging behind in hardware specs: they knew they couldn't compete at that pace. It's also basically why the Wii U failed; because game development was stretched out, they weren't able to time releases, leading to long periods with no big-name titles.

Making ray-tracing-quality assets is going to take even longer. And it's going to be made worse because the games aren't made with ray-tracing in mind, so we're going to end up with a hodgepodge of some really high-quality assets and some glaringly poor ones (an example is the Gran Turismo games, where the first PS3 entry had about 100 cars made to proper PS3 asset level, but most of the cars in the game weren't - I think they used a bunch almost straight from GT4 - so you'd end up with glaring inconsistency in quality). That's already somewhat of an issue, but I think ray-tracing is going to make the disparity even bigger, and highlight it even more. I remember Turn 10 (the Forza developer) talking about how long it takes them to make 1080p assets (which was why the first Forza on Xbox One had so few cars and tracks). 4K is another step, and ray-tracing is yet another.
 
Mar 11, 2004
23,356
5,782
146
I think most graphics people have been wanting real time ray tracing forever. You might not have asked for it directly, but I bet you've wanted Hollywood animated-movie visuals in your games - which requires ray tracing.

Try not to get your hate of Nvidia's latest cards mixed up with what is the next big step forward for real time graphics.

You're right in that a lot of people have been wishing for it.

We'll see. I think a lot of the ray-tracing quality gets lost in those movies because of the art direction (I'd even say that for some of them, the attempt at realistic ray-traced visuals actually hurts the art direction), so personally I don't think we really need ray-tracing, as art direction matters most of all. I also don't think we're at a place where it's actually feasible; it's just that Nvidia saw an opportunity to spin a pro feature to gamers as a reason to buy their large, expensive GPUs.

Even in most of the "best case scenario" images that people tout for ray-tracing, I just don't get it. In my experience it's still a very "ooh, shiny!" bling visual thing, which doesn't do much for me, as it makes things look abnormally shiny; it's kind of like the weird plastic-shiny skin we saw in the mid-2000s (and Nintendo has also adopted a weird, overly shiny look in the Mario games that I find off-putting).

I do find it funny how we went through a really drab, washed-out "gritty realism" look, and now it's like they're trying to do the total opposite and blind us with reflections and highlights (HDR).

I'm sure ray-tracing will bring real improvement, but it's going to take a decade before it really becomes useful, and who knows how long before they stop trying to shove in as much shiny reflective lighting as possible. It's like they want games to have a Star Wars prequel moment, where it's all about flash over substance.
 

TestKing123

Senior member
Sep 9, 2007
204
15
81
And yet again you're clearly, intentionally refusing to understand his point, which is that, with tricks, developers have been able to offer good enough approximations of a lot of effects without killing performance or requiring specialized hardware.

And you are intentionally refusing to understand my point: those "tricks", like the mirror projection from Unreal 1, aren't used anymore because they don't work as well as they did back when things were simpler. Thus the rendering technology needs to evolve; otherwise you should be perfectly happy with 1998 graphics while the rest of the world moves on.

Straight mirror projections aren't impressive anymore. What is impressive is accurate real time reflections off curved surfaces; partial transparency and translucency through a window pane, as you would see in real life; light bouncing off multiple surfaces, scattering and interacting with each other. NOTHING in those screenshots is remotely this complex or achievable with the simple 20 year old technologies involved. THAT is my point.

He never said any of those things, and he already explicitly told you exactly that, so your whole diatribe was pointless. He said the tricks they used were a good enough approximation, that they've been able to do it for years already, and that it didn't kill performance. So far, ray tracing hasn't delivered basically any of the magical graphics effects people keep promising it can in games. On top of that, just to enable this half-baked version, it requires exceptionally expensive video cards, and even then the performance is atrocious (so if you want to actually play the games, you're going to be disabling this feature, doubly so since every review has said that unless you basically stop to smell the roses, you'll rarely if ever even notice the ray-tracing stuff).

Don't like it? Turn it off. The 2080 Ti is a powerful graphics card that's 35-40% faster than a 1080 Ti at 4K. The technology has to start somewhere, and like it or not, that's where gaming graphics are heading; no amount of "approximation" is "good enough" unless you enjoy stagnant, 20 year old mirror projections that have no place in games today.
 

Pandamonia

Senior member
Jun 13, 2013
433
49
91
I think most graphics people have been wanting real time ray tracing forever. You might not have asked for it directly, but I bet you've wanted Hollywood animated-movie visuals in your games - which requires ray tracing.

Try not to get your hate of Nvidia's latest cards mixed up with what is the next big step forward for real time graphics.
It's not the next big thing in graphics, and it will never become possible with current transistor-based silicon processors.

Currently it cuts frames by 66% and doesn't even look that good.

Moore's law has basically stopped delivering die shrinks, and we are now at the point of 5 year cycles for a real node shrink. GPUs are becoming massive, and the costs to match.

So say a £1500 GPU produces 33% of the required performance on the current node, and each new node gives 25% more performance at the same power. Hitting full performance would take about £4500 of current-node silicon, which becomes roughly £3600 after one shrink and £2880 after two. Those two shrinks also give 1.25 x 1.25 = 1.56x the ray-traced frame rate, so after two node cycles at 5 years each, you've gone from 33% to only about 52% of current normal fps.

The beauty of GPU silicon is that it scales very well and performance is pretty linear, so unless a miracle new architecture comes out, explain to me how this is going anywhere. No technology has ever been sustained in the PC space by elite hardware alone; if it's not mainstream, it dies.
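Here's that back-of-the-envelope run consistently; every input is an assumption taken from the post above, not a measurement (and the £4500 above is the rounded version of £4,545):

```python
def rt_cost_projection(base_price=1500.0, rt_fraction=0.33,
                       gain_per_node=1.25, nodes=2):
    """Price of 'full-speed' ray tracing if each node shrink delivers 25%
    more performance per pound. Pure extrapolation from assumed inputs."""
    cost_now = base_price / rt_fraction      # silicon needed for full frame rate
    for n in range(nodes + 1):
        cost = cost_now / gain_per_node ** n
        fps = rt_fraction * gain_per_node ** n
        print(f"{n} shrink(s): ~GBP {cost:,.0f} for RT at {fps:.0%} of normal fps")

rt_cost_projection()
# 0 shrink(s): ~GBP 4,545 for RT at 33% of normal fps
# 1 shrink(s): ~GBP 3,636 for RT at 41% of normal fps
# 2 shrink(s): ~GBP 2,909 for RT at 52% of normal fps
```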
 

alcoholbob

Diamond Member
May 24, 2005
6,338
404
126
The longer a node lasts, the cheaper the transistor cost should get as research costs get amortized. I suspect eventually GPUs will have stacked dies and go the same way as HBM, so you can start having die sizes > 1000 mm2 once a node gets mature enough, then you can make bigger and bigger chips. TSMC was working on stacked wafers a while ago.
 
  • Like
Reactions: ozzy702

Pandamonia

Senior member
Jun 13, 2013
433
49
91
The longer a node lasts, the cheaper the transistor cost should get as research costs get amortized. I suspect eventually GPUs will have stacked dies and go the same way as HBM, so you can start having die sizes > 1000 mm2 once a node gets mature enough, then you can make bigger and bigger chips. TSMC was working on stacked wafers a while ago.
Maybe, but look at the heat problem and power use.

This isn't memory we are talking about here.
 

Mopetar

Diamond Member
Jan 31, 2011
8,155
6,871
136
I suspect eventually GPUs will have stacked dies and go the same way as HBM, so you can start having die sizes > 1000 mm2 once a node gets mature enough, then you can make bigger and bigger chips. TSMC was working on stacked wafers a while ago.

That's a long way off, and GPUs and memory are fundamentally different products. The heat from a stacked GPU would be a nightmare to dissipate. Also, games typically get better results from fewer SPs running at higher clock rates; there's not a lot of added benefit for the gaming market over making a simpler design and increasing the clock speeds.
 

maddie

Diamond Member
Jul 18, 2010
5,002
5,187
136
That's a long way off, and GPUs and memory are fundamentally different products. The heat from a stacked GPU would be a nightmare to dissipate. Also, games typically get better results from fewer SPs running at higher clock rates; there's not a lot of added benefit for the gaming market over making a simpler design and increasing the clock speeds.
Where is this reasoning coming from?

AFAIK, for a given power budget, the highest performance comes from a lower clocked wider GPU than a higher clocked narrow one. The obvious limitation is that the designs must be within the same design family. Comparing across designs for narrow vs wide is futile, as too many other variables exist.
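A toy dynamic-power model makes the point; assuming P ~ width x f^3 (since P ~ C·V²·f and voltage roughly tracks frequency), widening at a fixed power budget only costs the cube root in clocks:

```python
def throughput_at_fixed_power(width_ratio):
    """Toy model: power ~ width * f^3, throughput ~ width * f.
    Hold power constant while widening, and solve for the clock."""
    clock_ratio = width_ratio ** (-1.0 / 3.0)   # keeps width * f^3 constant
    return width_ratio * clock_ratio            # net throughput change

for w in (1, 2, 4):
    print(f"{w}x wider -> {throughput_at_fixed_power(w):.2f}x throughput")
# 1x wider -> 1.00x, 2x wider -> 1.59x, 4x wider -> 2.52x
```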
 

Mopetar

Diamond Member
Jan 31, 2011
8,155
6,871
136
Where is this reasoning coming from?

In a general sense, it's just Amdahl's law.

You could see this in real life with AMD cards when they started getting huge shader counts around the time of Fury. Quite often, games couldn't utilize all of them at once, which meant the card's full performance went unused.

If you could trade % clock increase for % shader count increase, you'd almost always take the higher clocks. The only time you might not would be if you're running at higher resolutions.
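As a toy model of that trade (the frame-time split is made up, not profiled): give the frame a serial slice that only clock speed can shrink, and the clock bump wins:

```python
def frame_ms(serial_ms, parallel_ms, shaders, clock):
    """Amdahl-flavored toy: the serial slice scales only with clock,
    the parallel slice with shaders * clock. Illustrative numbers."""
    return serial_ms / clock + parallel_ms / (shaders * clock)

print(frame_ms(4.0, 12.0, shaders=1, clock=1.0))   # 16.0 ms baseline
print(frame_ms(4.0, 12.0, shaders=4, clock=0.5))   # 14.0 ms: 4x wide, half clock
print(frame_ms(4.0, 12.0, shaders=1, clock=2.0))   # 8.0 ms: just double the clock
```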
 

maddie

Diamond Member
Jul 18, 2010
5,002
5,187
136
In a general sense, it's just Amdahl's law.

You could see this in real life with AMD cards when they started getting huge shader counts around the time of Fury. Quite often, games couldn't utilize all of them at once, which meant the card's full performance went unused.

If you could trade % clock increase for % shader count increase, you'd almost always take the higher clocks. The only time you might not would be if you're running at higher resolutions.
I would disagree.

Amdahl's law basically shows the influence of serial code percentage as a limit on performance improvement from parallel computation. A mere 1920x1080 screen has 2,073,600 pixels that have to be processed, and all of them can be processed in parallel. We are not even close to that limit.

The use of AMD GPUs as an example is not useful, as there are too many other factors affecting throughput (memory performance, vectorization, ROPs, etc.) to draw any useful information. Shader efficiency does drop, but why is the question, and I believe the use of Amdahl's law to explain it in this situation is flawed.
 
  • Like
Reactions: Headfoot

ozzy702

Golden Member
Nov 1, 2011
1,151
530
136
Maybe, but look at the heat problem and power use.

This isn't memory we are talking about here.

Lower clocks mitigate a lot of the heat and power use. A stacked 7nm part could easily be 3x the 2080 Ti with similar heat and power use, given the right technology and design. Yes, it would be absolutely expensive, and we're probably 3 years off from seeing it, but it's very possible, and it would be enough to push real-time ray tracing appropriately, since at that point more of the die could be devoted to RT cores.
 
Last edited:

Mopetar

Diamond Member
Jan 31, 2011
8,155
6,871
136
Shader efficiency does drop, but why is the question, and I believe the use of Amdahl's law to explain it in this situation is flawed.

Amdahl's law isn't the exact explanation, but it does touch on the issue. GPUs work because a lot of the work they need to do is embarrassingly parallel, but not everything in games is that way.

Even if it is theoretically possible to implement massive parallelism, there's no indication that it will be easy. History has shown that game developers aren't particularly good at utilizing vastly more resources. The problem is unlikely to get solved if the baseline is fewer resources at higher clock speeds, which is what developers will target.

Lower clocks mitigate a lot of the heat and power use. A stacked 7nm part could easily be 3x the 2080 Ti with similar heat and power use, given the right technology and design. Yes, it would be absolutely expensive, and we're probably 3 years off from seeing it, but it's very possible, and it would be enough to push real-time ray tracing appropriately, since at that point more of the die could be devoted to RT cores.

3 years is way too optimistic. It's so optimistic that I'd even give you favorable odds on a wager, because it has so little chance of occurring. You'd not just need to implement the designs and technology necessary to support a triple-layer GPU; you'd also need a vastly different cooling system to extract the heat.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
Amdahl's law isn't the exact explanation, but it does touch on the issue. GPUs work because a lot of the work they need to do is embarrassingly parallel, but not everything in games is that way.

Even if it is theoretically possible to implement massive parallelism, there's no indication that it will be easy. History has shown that game developers aren't particularly good at utilizing vastly more resources. The problem is unlikely to get solved if the baseline is fewer resources at higher clock speeds, which is what developers will target.
IMO Amdahl's law is not properly applied here. Amdahl's law would apply to the entire app: input, physics, game logic, networking, rendering, etc. That's where you find the serial pieces which, so to speak, "weigh down the average" across parallel vs serial tasks. Rendering is the big task that is largely parallelizable (so-called embarrassingly parallel), which brings that average back up. It doesn't make sense to apply Amdahl's law to just a piece of a larger program when the whole theory is that the serial pieces increasingly weigh down the whole. TL;DR: pointing Amdahl's law at just the rendering is the wrong scope. Besides, it's a general principle; there is a world of depth beneath that abstraction.

More specific to the topic at hand, I wonder how latency-sensitive the BVH fixed-function hardware is... is SLI or a multi-die fabric a possibility for future cards? Could Nvidia or AMD do something like Rome's I/O die and tie in a "co-processor"-style BVH die? That would let them decouple development and take a step back from the colossal 750mm² die.
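For the scope point, Amdahl's law over the whole frame looks like this (the 70% render share is an assumed, illustrative number):

```python
def whole_frame_speedup(render_share, render_speedup):
    """Amdahl's law at the whole-app scope described above: speed up only
    the rendering slice and see what the whole frame actually gains."""
    return 1.0 / ((1.0 - render_share) + render_share / render_speedup)

for s in (2, 10, 1e9):
    print(f"render {s:g}x faster -> whole frame {whole_frame_speedup(0.7, s):.2f}x")
# 2x -> 1.54x, 10x -> 2.70x, infinitely fast -> caps at 3.33x
```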
 

ozzy702

Golden Member
Nov 1, 2011
1,151
530
136
Amdahl's law isn't the exact explanation, but it does touch on the issue. GPUs work because a lot of the work they need to do is embarrassingly parallel, but not everything in games is that way.

Even if it is theoretically possible to implement massive parallelism, there's no indication that it will be easy. History has shown that game developers aren't particularly good at utilizing vastly more resources. The problem is unlikely to get solved if the baseline is fewer resources at higher clock speeds, which is what developers will target.

3 years is way too optimistic. It's so optimistic that I'd even give you favorable odds on a wager, because it has so little chance of occurring. You'd not just need to implement the designs and technology necessary to support a triple-layer GPU; you'd also need a vastly different cooling system to extract the heat.

It wouldn't need to be triple-layer. A dual-layer, large 7nm GPU would be 3x the 2080 Ti. They'd probably need a really wide bus on the fastest GDDR6 available by then, but it could be done.

NVIDIA and AMD both know that die shrinks are essentially a thing of the past. Both companies are looking for ways to throw more transistors into "one" GPU, even if that means multiple GPUs operating as one or stacked GPUs, etc.

I'd honestly be surprised if we don't see some kind of implementation within the next 3 years, even if that only takes place on $3000+ Quadro cards.
 

Mopetar

Diamond Member
Jan 31, 2011
8,155
6,871
136
IMO Amdahl's law is not properly applied here. . .

Again, it's not an exact, perfect explanation, but the idea behind it fits, and it seems to track observed reality. Developers face constraints that make it harder to utilize four times as many shaders at half speed than to double the speed of what currently exists. Look at how SLI/Crossfire support has dropped off over the years and tell me there are no hurdles beyond simply being able to produce a much wider chip.

There was a lot of speculation that Navi was supposed to use the same type of design we're seeing with Rome, and AMD was asked about that. I recall one answer where a developer indicated that although the approach worked fine for compute, it didn't work out as well for gaming. There wasn't much technical explanation of why that was the case, but I imagine the problems were similar to why SLI/Crossfire scaling is often far from perfect (even accounting for some penalties), or flat-out broken in some cases.

It really comes down to what is economically feasible. Even a colossal die works financially if you can sell it at a high enough price, and I suspect a colossal die comes out a lot less expensive than a stacked one. Maybe AMD eventually solves the problem and we do see a lot of MCM-style GPUs in the future, but the industry will transition to that before doing 3D designs for a GPU. Memory can get away with stacking since it doesn't put off nearly as much heat; imagine stacking 4 layers of a GPU, which is basically taking a GPU and trying to dissipate the same amount of heat through one quarter of the surface area. The MCM approach actually works in the other direction, since you can spread things out more.
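The footprint arithmetic behind that is short; same heat, smaller top surface (the wattage and die size here are made-up, illustrative figures):

```python
def top_surface_flux(total_watts, total_die_mm2, layers):
    """Fold the same silicon into layers: the heat is unchanged, but it
    must now leave through a footprint that is 1/layers the size."""
    footprint_mm2 = total_die_mm2 / layers
    return total_watts / footprint_mm2

for layers in (1, 2, 4):
    print(f"{layers} layer(s): {top_surface_flux(250, 750, layers):.2f} W/mm^2")
# 1 layer(s): 0.33, 2 layer(s): 0.67, 4 layer(s): 1.33 -> 4x the flux to extract
```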
 

psolord

Platinum Member
Sep 16, 2009
2,095
1,235
136
What kind of power is needed in order for DXR to render something like this in real time?