The main reasons for inflated VRAM requirements


BD2003

Lifer
Oct 9, 1999
16,815
1
76
I haven't tried VR so I can't really say much. However, I believe optics are required to create a virtual image that is focused farther away than the 2" the screen actually is. Our eyes cannot accommodate an object that close. Being nearsighted, I have increased accommodation for close objects, without corrective lenses that is.



The size and position of the virtual image have to be correct for a 1080p image, which most people will say means no closer than approximately twice the height of the screen. For 4K you can be much closer, and hence have a much wider field of view. I'm not afraid that 1080p will be too little for a first-generation product, though I might be wrong.



No doubt 150fps is going to be extremely hard to accomplish, and even though Nvidia is touting this new VR SLI tech, they haven't shown SLI to work properly without microstutter. I have almost no hope they can get two GPUs to sync tightly enough for VR.



I'm seeing a lot of evidence suggesting the days of games not using more than 4 cores properly are over. Some recent benchmarks show 6- and 8-core CPUs coming out ahead for once. So it's going to take some high-end hardware and some great coding to pull this off. I'd say it's still about two years away, though. We can't even drive one 4K display properly today.


It's VR, so you're not viewing a virtual screen. You're looking at things that feel like they're as far away as they would be in real life, not on a flat plane at a set distance. But you can still clearly see the grid of pixels right in front of your face. It's almost hilarious: between the low pixel density and the warping, UIs and text need to feel like they're plastered on the side of a five-story building you're standing in front of, and you need to whip your head around to read it all, because otherwise it's completely illegible. Trust me, the 1080p screen in the DK2 feels like you're playing in 240p. It's SO blurry. When the cutscenes play in Alien: Isolation, there is a virtual screen, and it's so huge you can only see about a quarter of it at once; any smaller or further away and it would feel super low resolution.
 

TechFan1

Member
Sep 7, 2013
97
3
71
Not only do you need 150fps for VR, but the latency from your head moving to the image responding to your head movement has to be under 20ms or something like that.
 

BD2003

Lifer
Oct 9, 1999
16,815
1
76
Not only do you need 150fps for VR, but the latency from your head moving to the image responding to your head movement has to be under 20ms or something like that.

Yup, so no buffering or rendering ahead. These requirements are extreme, and they are non-negotiable.
 

BD2003

Lifer
Oct 9, 1999
16,815
1
76
So this is 5 years away?

From mainstream VR and GPUs getting up to the level that AAA games are today? Prob.

For the enthusiasts ready to drop 2K on a rig it'll be feasible next year, but expect a serious downgrade in perceived resolution and image quality, even compared to 1080p. You cannot compromise on frame rate or responsiveness if you don't want to vomit. That 150fps isn't a suggestion, it's a requirement, and as I understand it the consumer version will probably be more in the 80-90Hz range (so effectively 160-180fps).

Supposedly they'll have tricks that can ease the burden somewhat, but it's still going to require a sustained level of very high performance.
 

Bateluer

Lifer
Jun 23, 2001
27,730
8
0
From mainstream VR and GPUs getting up to the level that AAA games are today? Prob.

For the enthusiasts ready to drop 2K on a rig it'll be feasible next year, but expect a serious downgrade in perceived resolution and image quality, even compared to 1080p. You cannot compromise on frame rate or responsiveness if you don't want to vomit. That 150fps isn't a suggestion, it's a requirement, and as I understand it the consumer version will probably be more in the 80-90Hz range (so effectively 160-180fps).

Supposedly they'll have tricks that can ease the burden somewhat, but it's still going to require a sustained level of very high performance.

Since I'm just reading this thread now, was a link posted to the 150fps requirement for VR/Oculus? Seems like a death sentence for the 8th-gen consoles if it's true.
 

BD2003

Lifer
Oct 9, 1999
16,815
1
76
Since I'm just reading this thread now, was a link posted to the 150fps requirement for VR/Oculus? Seems like a death sentence for the 8th-gen consoles if it's true.


You can hear Carmack talk about it just about any time he discusses VR. It's a confluence of things: you need a low-persistence, responsive display for it to look and feel natural, so the refresh rate needs to be higher to avoid flicker and nausea. They consider 75Hz the low bar where it works for most people, and 90Hz the point where it works for almost everyone. Each eye needs a separate frame, so that's two frames per refresh cycle, which is 150 to 180fps. Each frame doesn't necessarily have to be full resolution, so it's not as impossible as it sounds; it's mostly a heavier burden on the CPU. Next-gen consoles could certainly pull it off with simplified graphics.
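A quick back-of-the-envelope version of that arithmetic, as a minimal sketch in Python (the 75/90Hz figures are the ones quoted above; nothing here is from Oculus):

```python
# Back-of-the-envelope: how many distinct eye views per second a VR engine
# has to produce at the refresh rates mentioned above.
EYES = 2  # each eye gets its own view every refresh

for refresh_hz in (75, 90):
    views_per_second = refresh_hz * EYES
    frame_budget_ms = 1000.0 / refresh_hz  # both eye views must fit in this window
    print(f"{refresh_hz} Hz -> {views_per_second} eye-views/s, "
          f"{frame_budget_ms:.1f} ms budget per refresh")
# 75 Hz -> 150 eye-views/s, 13.3 ms budget per refresh
# 90 Hz -> 180 eye-views/s, 11.1 ms budget per refresh
```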
 

sxr7171

Diamond Member
Jun 21, 2002
5,079
40
91
From mainstream VR and GPUs getting up to the level that AAA games are today? Prob.

For the enthusiasts ready to drop 2K on a rig it'll be feasible next year, but expect a serious downgrade in perceived resolution and image quality, even compared to 1080p. You cannot compromise on frame rate or responsiveness if you don't want to vomit. That 150fps isn't a suggestion, it's a requirement, and as I understand it the consumer version will probably be more in the 80-90Hz range (so effectively 160-180fps).

Supposedly they'll have tricks that can ease the burden somewhat, but it's still going to require a sustained level of very high performance.

I'm going to guess at some of the tricks.

One we already know pretty well, and that's motion blur. Playing Destiny at 30fps feels smooth thanks to a good implementation of motion blur. On PC, however, I usually disable motion blur.

Second, I think eye tracking will have to be used. Even in real life, only light that hits the fovea of your retina is seen clearly; peripheral vision is not actually sharp. So a system would need a high-resolution panel, maybe 4K or 8K, but the rendering engine must render at full resolution only where the eye is actually focused and fall off to lower resolution toward the periphery. And of course this has to happen very fast, in the range of that 20ms number.

I can also imagine one other trick that is similar to the JVC e-shift technology they use in projectors.

There's no doubt tricks will have to be used, and many are already on the market, such as DLP projectors using a color wheel, though I'm not sure that particular trick will be useful for VR. The point is that technology leveraging the eye/brain's persistence of vision has been around for quite a while.
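For illustration only, here is a toy sketch of the eye-tracking idea described above (often called foveated rendering). The resolutions, fovea radius, and the `render_fn` stand-in are all made-up assumptions; a real implementation would work inside the renderer rather than compositing finished images:

```python
import numpy as np

def foveated_composite(render_fn, width, height, gaze_x, gaze_y,
                       fovea_radius=200, periphery_scale=4):
    """Toy foveated rendering: full detail only near the gaze point.

    render_fn(w, h) is a stand-in for the engine: it returns an (h, w, 3)
    image of the scene at that resolution. Assumes width and height are
    divisible by periphery_scale.
    """
    # Cheap pass: render the whole view at reduced resolution, then upscale.
    low = render_fn(width // periphery_scale, height // periphery_scale)
    periphery = np.repeat(np.repeat(low, periphery_scale, axis=0),
                          periphery_scale, axis=1)

    # Expensive pass: full resolution, but only the patch around the gaze
    # point is kept (a real engine would render just that region).
    full = render_fn(width, height)
    x0, x1 = max(0, gaze_x - fovea_radius), min(width, gaze_x + fovea_radius)
    y0, y1 = max(0, gaze_y - fovea_radius), min(height, gaze_y + fovea_radius)

    out = periphery.copy()
    out[y0:y1, x0:x1] = full[y0:y1, x0:x1]
    return out

# Toy usage with a gradient "scene" standing in for the renderer:
scene = lambda w, h: np.dstack([np.tile(np.linspace(0, 255, w), (h, 1))] * 3)
image = foveated_composite(scene, 1280, 720, gaze_x=640, gaze_y=360)
```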
 

BD2003

Lifer
Oct 9, 1999
16,815
1
76
I'm going to guess at some of the tricks.

One we already know pretty well, and that's motion blur. Playing Destiny at 30fps feels smooth thanks to a good implementation of motion blur. On PC, however, I usually disable motion blur.

Second, I think eye tracking will have to be used. Even in real life, only light that hits the fovea of your retina is seen clearly; peripheral vision is not actually sharp. So a system would need a high-resolution panel, maybe 4K or 8K, but the rendering engine must render at full resolution only where the eye is actually focused and fall off to lower resolution toward the periphery. And of course this has to happen very fast, in the range of that 20ms number.

I can also imagine one other trick that is similar to the JVC e-shift technology they use in projectors.

There's no doubt tricks will have to be used, and many are already on the market, such as DLP projectors using a color wheel, though I'm not sure that particular trick will be useful for VR. The point is that technology leveraging the eye/brain's persistence of vision has been around for quite a while.


Motion blur doesn't really help, because it's just blurring the frames you already have. They need to actually create new frames, or else it doesn't look right and responds too slowly. They can't interpolate between frames either, since that also requires looking ahead. The main trick they use is to warp frames based on head movement, and while it works in a pinch, Carmack refers to it as "frame rate insurance" because it's only going to cover up very slight dips in performance. I've seen some demos use it, and while it works well, you need a high base level of performance for it to be useful. Even a single dropped frame is really apparent and completely breaks the VR spell; when you see that little jitter, you're instantly reminded that you're in a game.

Eye tracking will come eventually, but they admit that's a long way off. Something like G-Sync would help a lot, but it's basically incompatible with low-persistence displays.
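Purely as a sketch of that "warp frames based on head movement" idea, here is a toy yaw-only version. The FOV number and the sign convention are assumptions, and the real technique reprojects in 3D rather than sliding pixels:

```python
import numpy as np

def rotational_timewarp(frame, yaw_at_render_deg, yaw_at_display_deg,
                        horizontal_fov_deg=100.0):
    """Shift an already-rendered frame to account for head rotation that
    happened after rendering but before scan-out (yaw only, small angles).

    frame: (H, W, 3) image. Convention: positive yaw delta = head turned
    right, so the image content slides left to stay world-locked.
    """
    h, w, _ = frame.shape
    pixels_per_degree = w / horizontal_fov_deg
    delta_deg = yaw_at_display_deg - yaw_at_render_deg
    shift_px = int(round(-delta_deg * pixels_per_degree))
    shift_px = max(-w, min(w, shift_px))  # clamp so the slicing stays valid

    warped = np.zeros_like(frame)
    if shift_px >= 0:
        warped[:, shift_px:] = frame[:, :w - shift_px]
    else:
        warped[:, :w + shift_px] = frame[:, -shift_px:]
    # The uncovered edge stays black; real systems render with extra margin
    # so the exposed strip is never visible.
    return warped

# Example: the head turned 1 degree to the right between render and display.
frame = np.zeros((1080, 960, 3), dtype=np.uint8)
corrected = rotational_timewarp(frame, yaw_at_render_deg=10.0,
                                yaw_at_display_deg=11.0)
```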
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
There seems to be a misunderstanding of how the Oculus Rift works. It does not need 150fps; it needs the 75-90fps they mentioned. They don't display a full 1080p frame to each eye; they split the screen in half vertically and show half to each eye. So 1080p@75 is the current requirement. For the consumer model it might very well be 1440p@75 or even 2160p@75, but we don't really know yet what will ultimately be accepted for an initial version 1. It does not require 150Hz. That isn't saying it won't be smoother, it's just that most people will be happy with the motion at 90Hz. In reality we are going to need maybe 1000Hz before it feels like reality, but we will get there when we get there.
 

BD2003

Lifer
Oct 9, 1999
16,815
1
76
There seems to be a misunderstanding of how the Oculus Rift works. It does not need 150fps; it needs the 75-90fps they mentioned. They don't display a full 1080p frame to each eye; they split the screen in half vertically and show half to each eye. So 1080p@75 is the current requirement. For the consumer model it might very well be 1440p@75 or even 2160p@75, but we don't really know yet what will ultimately be accepted for an initial version 1. It does not require 150Hz. That isn't saying it won't be smoother, it's just that most people will be happy with the motion at 90Hz. In reality we are going to need maybe 1000Hz before it feels like reality, but we will get there when we get there.


I'm talking from the CPU's perspective: each eye is a separate view and has to be calculated separately. It's just like it was with stereo 3D; you can't just cut the resolution in half and expect the same frame rate. All the polygon setup, shadowing, etc. gets doubled, plus you also have to account for the warping required to suit the optics. It requires a lot more than displaying 1080p@75Hz on a monitor. The performance profile is a lot closer to 960x1080@150 than to 1920x1080@75.

As far as resolution goes, since they're warping the image it's not a direct 1:1 pixel match anymore, but the initial target is 1 megapixel per eye, so basically the same number of rendered pixels as 1080p before the warp. The result is still really blurry, though, because of the field of view, the artifacts of the optics, the warping and the halved effective resolution per eye. Any sort of aliasing, spatial or temporal, is also way more annoying because it breaks the stereoscopy. I think a lot of people are going to be surprised by how harsh the image is, how immersion-breaking even the slightest artifacts are, and how much GPU power you need to throw at it to make it look acceptable because of that. Normal mapping and transparencies aren't as effective at faking geometry anymore either; what you need is lots of triangles and lots of tessellation.

I'm just saying, if you think specs are getting inflated now, VR takes it to a whole new level.
 

kasakka

Senior member
Mar 16, 2013
334
1
81
Having tried the original Oculus Rift development kit, I'd say its resolution was way too low - actually a good simulation of what the world looks like to someone who's nearsighted and not wearing corrective glasses: blurry. That said, it was still very immersive, and I hope that in a few years we will have good enough displays for VR to work great.

Even on the original I wasn't really bothered by lag, but I think the newer model will be a noticeable improvement.

This is getting rather off-topic, but I think the important steps for VR will be enough resolution, low lag, as little blur as possible (strobe backlight?) and, above all, a character control method suited to the goggles. The last one I still haven't seen.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
Yes, you have to render each eye from a different location, but at half the resolution, so the amount of work ends up the same.

Let's do the math, shall we, assuming 1080p and 90Hz, and counting on the basis of pixels rendered.

Without 3D and two separate views, we have to render 1920x1080 pixels 90 times a second, which comes out to 186,624,000, or 186.6 million pixels a second.

With the Oculus, each eye gets 960x1080 pixels. We have to render that twice, again at 90Hz, which is 960x1080x90x2 = 186,624,000 pixels per second.

Yes, there might be a small amount of additional CPU setup for each perspective, but since the original GeForce we have had hardware transform, so most of the view perspective adjustment is done in hardware and the overhead is really quite small. What takes time on modern GPUs is the per-pixel post processing, so the pixel count is what matters most. Simply put, it's a non-issue: the pixel count is identical, and that is what dominates frame time in the current prototypes.
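Just to reproduce the arithmetic in the post above as a quick sanity check (it deliberately ignores the per-eye geometry and draw-call overhead that gets debated in the following posts):

```python
# Pixels shaded per second: flat 1080p vs. the Rift-style vertical split.
def pixels_per_second(width, height, refresh_hz, views=1):
    return width * height * refresh_hz * views

flat_1080p = pixels_per_second(1920, 1080, 90)          # one full-width view
rift_split = pixels_per_second(960, 1080, 90, views=2)  # half width per eye

print(flat_1080p)  # 186624000
print(rift_split)  # 186624000 -- identical fill-rate load
```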
 

BD2003

Lifer
Oct 9, 1999
16,815
1
76
Yes, you have to render each eye from a different location, but at half the resolution, so the amount of work ends up the same.

Let's do the math, shall we, assuming 1080p and 90Hz, and counting on the basis of pixels rendered.

Without 3D and two separate views, we have to render 1920x1080 pixels 90 times a second, which comes out to 186,624,000, or 186.6 million pixels a second.

With the Oculus, each eye gets 960x1080 pixels. We have to render that twice, again at 90Hz, which is 960x1080x90x2 = 186,624,000 pixels per second.

Yes, there might be a small amount of additional CPU setup for each perspective, but since the original GeForce we have had hardware transform, so most of the view perspective adjustment is done in hardware and the overhead is really quite small. What takes time on modern GPUs is the per-pixel post processing, so the pixel count is what matters most. Simply put, it's a non-issue: the pixel count is identical, and that is what dominates frame time in the current prototypes.


You're being way too reductionist about it. Pixel count is just one (albeit very important) factor. My experience with years of stereo 3D games has shown me that it's by no means a non-issue. Just because I could run a game at 1080p@60 was no guarantee I could push 1280x720@60 in 3D. There was significantly more CPU usage, and you're effectively doubling the amount of geometry versus a single 2D frame. It was certainly not a non-issue; that overhead became a bottleneck in some games - the really nice-looking ones, of course.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
You're being way too reductionist about it. Pixel count is just one (albeit very important) factor. My experience with years of stereo 3D games has shown me that it's by no means a non-issue. Just because I could run a game at 1080p@60 was no guarantee I could push 1280x720@60 in 3D. There was significantly more CPU usage, and you're effectively doubling the amount of geometry versus a single 2D frame. It was certainly not a non-issue; that overhead became a bottleneck in some games - the really nice-looking ones, of course.


I think BC is right. Think about it this way:
You render one frame that consists of two viewports. All game-engine calculations done on a frame-to-frame basis are done once for both viewports. So it really doesn't matter all that much on the CPU front. More driver overhead, probably, and that's about it.

On the GPU side, you need more geometry processing because of the different angles in each viewport, which requires rendering each polygon from two slightly different perspectives - not 100% sure about that. EDIT: actually BC explained that a bit.
 

TechFan1

Member
Sep 7, 2013
97
3
71
If they lock the fps to 30 on PC, that will make for a lot of angry PC gamers. I can't see that actually being true; it could cost them huge amounts of sales from PC gamers.

Hopefully the VRAM usage gets better when games are DX12-based and make use of things like volume tiled resources.
 

BD2003

Lifer
Oct 9, 1999
16,815
1
76
I think BC is right. Think about it this way:
You render one frame that consists of two viewports. All game-engine calculations done on a frame-to-frame basis are done once for both viewports. So it really doesn't matter all that much on the CPU front. More driver overhead, probably, and that's about it.

On the GPU side, you need more geometry processing because of the different angles in each viewport, which requires rendering each polygon from two slightly different perspectives - not 100% sure about that. EDIT: actually BC explained that a bit.

There's no need for theory here; I just tested it with my Rift. With the same settings, in standard 1080p the busiest thread is pushing 45% CPU usage; in Rift mode it's 75%. And this is just in a hallway with nothing going on, and I'm using a 4.5GHz 4790K, basically the fastest single-threaded CPU out there.

Also checked with vsync off, both at 1080p:
In normal mode it's 220fps, with GPU usage at 99%, so I'm completely GPU-bound.
In Rift mode it's pushing 81fps with GPU usage at only 80%, so I'm CPU-bound. I'm guessing that with anything less than a 4GHz Haswell it probably wouldn't be able to sustain the 75fps this game needs.

It's not an insignificant spike; it's well over 50%.
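Putting those measurements into frame-time terms, a rough sketch (the 75Hz figure is the DK2 refresh rate mentioned earlier; the 81fps is the measurement above):

```python
# Rough frame-time view of the measurements above.
rift_fps = 81        # measured in Rift mode, CPU-bound
target_hz = 75       # DK2 refresh rate the game has to sustain

frame_time_ms = 1000.0 / rift_fps   # ~12.3 ms per frame as measured
budget_ms = 1000.0 / target_hz      # ~13.3 ms allowed per refresh

print(f"measured {frame_time_ms:.1f} ms vs budget {budget_ms:.1f} ms "
      f"-> {budget_ms - frame_time_ms:.1f} ms of headroom")
# measured 12.3 ms vs budget 13.3 ms -> 1.0 ms of headroom
```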
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
Is the driver running two instances on the same "driver thread" when displaying on the Oculus? That would explain the increased single-threaded CPU bottleneck.

There are many ways to fix this.

1. Don't run two drivers in parallel. Make a dedicated driver that solves the issue at the root. Software dedicated to the situation seems like it would be the most efficient approach.

2. Mantle. Mantle proves that the CPU bottleneck in a normal game scenario is mostly the API's fault and can be severely reduced by a more efficient API.

3. If you have to run the makeshift double-driver solution, do so using two separate driver threads.

Does running the Oculus make the GPU use more, the same, or less VRAM?
 

BD2003

Lifer
Oct 9, 1999
16,815
1
76
Is the driver running two instances on the same "driver thread" when displaying on the Oculus? That would explain the increased single-threaded CPU bottleneck.

There are many ways to fix this.

1. Don't run two drivers in parallel. Make a dedicated driver that solves the issue at the root. Software dedicated to the situation seems like it would be the most efficient approach.

2. Mantle. Mantle proves that the CPU bottleneck in a normal game scenario is mostly the API's fault and can be severely reduced by a more efficient API.

3. If you have to run the makeshift double-driver solution, do so using two separate driver threads.

Does running the Oculus make the GPU use more, the same, or less VRAM?


As far as the video driver is concerned, it doesn't seem to know it's in "VR mode" or anything like that. Anyone with Alien: Isolation can put it into VR mode by changing a text string in a cfg file. Whatever is happening is on the game's end.

The GPU certainly seems to be doing a lot more work per frame as well: 80% usage at 80fps versus 99% at 220fps.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Pertaining to the discussion we had earlier about AC Unity running at 900p on both the PS4 and Xbox One, a Ubisoft developer apparently chimed in on why that is, in detail. It's unofficial, however (from a Giantbomb podcast), so take this with a grain of salt:

"I'm happy to enlighten you guys because way too much bullshit about 1080p making a difference is being thrown around. If the game is as pretty and fun as ours will be, who cares? Getting this game to 900p was a BITCH.

The game is so huge in terms of rendering that it took months to get it to 720p at 30fps. The game was 9fps 9 months ago. We only achieved 900p at 30fps weeks ago. The PS4 couldn't handle 1080p 30fps for our game, whatever people, or Sony and Microsoft say. Yes, we have a deal with Microsoft, and yes we don't want people fighting over it, but with all the recent concessions from Microsoft, backing out of CPU reservations not once, but twice, you're talking about a 1 or 2 fps difference between the two consoles. So yes, locking the framerate is a conscious decision to keep people bullshiting, but that doesn't seem to have worked in the end.

Even if Ubi has deals, the dev team members are proud, and want the best performance out of every console out there. What's hard is not getting the game to render at this point, it's making everything else in the game work at the same level of performance we designed from the start for the graphics. By the amount of content and NPCs in the game, from someone who witnessed optimization for lots of Ubisoft games in the past, this is crazily optimized for such a young generation of consoles.

This really is about to define a next gen like no other game before. Mordor has next gen system and gameplay, but not graphics like Unity does. The proof comes in that game being cross gen. Our producer (Vincent) saying we're bound with AI by the CPU is right, but not entirely. Consider this, they started this game so early for next gen, MS and Sony wanted to push graphics first, so that's what we did.

I believe 50% of the CPU is dedicated to helping the rendering by processing pre-packaged information, and in our case, much like Unreal 4, baked global illumination lighting. The result is amazing graphically, the depth of field and lighting effects are beyond anything you've seen on the market, and even may surpass Infamous and others. Because of this I think the build is a full 50gigs, filling the bluray to the edge, and nearly half of that is lighting data."

So if all this is true, not only is the number of A.I. entities a hindrance, but so is all of the baked lighting, which the CPU is presumably helping to decompress. But don't the consoles have special hardware for decompression?

At any rate, I still fail to see the impact this would have on resolution, which is exclusively GPU-bound. I think the PS4 version has higher settings enabled than the Xbox One even though it runs at the same resolution, but Ubisoft just doesn't want to come out and say it, as that would effectively be admitting the PS4 version is better.

It makes more sense for them to sacrifice resolution and increase graphical fidelity in other ways, such as higher quality shadows, better antialiasing and better ambient occlusion, rather than go for a higher resolution and scale back those other settings.

The truth won't be known until comparisons are made after the game releases, though.
 

BD2003

Lifer
Oct 9, 1999
16,815
1
76
Pertaining to the discussion we had earlier about AC Unity running at 900p on both the PS4 and Xbox One, a Ubisoft developer apparently chimed in on why that is, in detail. It's unofficial, however (from a Giantbomb podcast), so take this with a grain of salt:

So if all this is true, not only is the number of A.I. entities a hindrance, but so is all of the baked lighting, which the CPU is presumably helping to decompress. But don't the consoles have special hardware for decompression?

At any rate, I still fail to see the impact this would have on resolution, which is exclusively GPU-bound. I think the PS4 version has higher settings enabled than the Xbox One even though it runs at the same resolution, but Ubisoft just doesn't want to come out and say it, as that would effectively be admitting the PS4 version is better.

It makes more sense for them to sacrifice resolution and increase graphical fidelity in other ways, such as higher quality shadows, better antialiasing and better ambient occlusion, rather than go for a higher resolution and scale back those other settings.

The truth won't be known until comparisons are made after the game releases, though.


Could be. It could also be that the increased resolution pushes the LOD for textures and such a bit further back, and since they're running so close to the edge with the CPU, that just increases the chance it'll bottleneck. Or it could be that they're only optimizing to the extent they have to in order to hit the predefined 900p target; the squeaky wheel gets the grease.

This really isn't that unusual. I bet the majority of games could have looked much better on the PS3 versus the 360 if developers had taken the time to make it so. But usually they just targeted the 360, got the PS3 version good enough, and it was pretty close or outright identical in the end. Just because the power is there doesn't mean they take the time to properly use it. The only time I'd be absolutely certain something was fishy is if the Xbox One version looked or performed better.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Could be. It could also be that the increased resolution pushes the LOD for textures and such a bit further back, and since they're running so close to the edge with the CPU, that just increases the chance it'll bottleneck. Or it could be that they're only optimizing to the extent they have to in order to hit the predefined 900p target; the squeaky wheel gets the grease.

This really isn't that unusual. I bet the majority of games could have looked much better on the PS3 versus the 360 if developers had taken the time to make it so. But usually they just targeted the 360, got the PS3 version good enough, and it was pretty close or outright identical in the end. Just because the power is there doesn't mean they take the time to properly use it. The only time I'd be absolutely certain something was fishy is if the Xbox One version looked or performed better.

The Xbox 360 had a faster GPU and more available GPU memory; for most games it simply was the more powerful system.
The Cell could have been used to do unique things that would have been difficult on the 360, I suppose, since compute shaders didn't exist yet on the 360, but it's not like the 360 didn't have a fair amount of CPU power with its three cores.
 

BD2003

Lifer
Oct 9, 1999
16,815
1
76
The Xbox 360 had a faster GPU and more available GPU memory; for most games it simply was the more powerful system.

The Cell could have been used to do unique things that would have been difficult on the 360, I suppose, since compute shaders didn't exist yet on the 360, but it's not like the 360 didn't have a fair amount of CPU power with its three cores.


Well, that's because the games were designed with the 360 first in mind. I don't think it's too controversial to say that the PS3 exclusives looked a fair bit better than the 360 exclusives. If things had turned out differently and the PS3 had been considered the more important platform last gen, you'd likely have seen 360 ports as iffy as the PS3 ports were.

It's a little different this time around since the consoles are more alike than different, but as we saw with lots of ports last gen, often they were exactly the same. It probably required a lot more work to get the PS3 version up to the performance of the 360 version, so it may very well be the case that the Xbox One version gets a lot more careful optimization while the PS4 version runs better with less effort. Just because the hardware is more powerful doesn't guarantee it will always be the better version, just that it will rarely if ever be the worse version. People seem to have forgotten that.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
The PS3 had more theoretical performance on offer, but due to its odd architecture it took a lot more effort to unlock that potential. With most games being straight ports, the PS3 ended up being hard to extract performance from, and ultimately many developers, rather than spending considerable time playing to the PS3's strengths (which no other system would benefit from), just let the PS3 port have worse graphics and resolution.

To some extent I think we are seeing a reverse of that with the XB1. Its fancy high-speed cache is faster than the PS4's memory, but properly utilising it requires specialist programming. With the base hardware somewhat slower and this specialist feature needed to make up the difference, the end result is that cross-platform developers are going for the easier option of reducing graphics.