Can We Do Toy Story Yet?

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
So back in 2010 John Lasseter said Pixar's render farm had the power to render Toy Story 1 in real time. Is a quad-SLI Titan X rig equal to that 2010 render farm? Could we finally do Toy Story in real time on such a desktop?

I feel like the "can it do Toy Story" metric was maybe passed without anyone noticing. Did we finally hit that holy grail of PC gaming?

Thank you in advance.
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,250
136
Doubt it's achievable for mere mortals for some time to come.

Interesting topic.
 

Ban Bot

Senior member
Jun 1, 2010
796
1
76
There is a difference between replicating the technique in real-time and replicating the result. Offline rendering uses gobs of memory and often compositing, which are not easily replicated and automated in real-time rendering.

That said, taking TS1-level geometry and using modern rendering techniques, I believe we surpassed the "end result" a long time ago. Rendering is all about "hacks", and the quality and precision of many modern techniques give much better image quality than what we had with TS1. If my memory and Google skills are correct, the movie was rendered at 1536 × 922 and 24 Hz. The edge quality and filtering are still hard to match in real time, but overall picture quality is better with modern rendering. I thought the stylized look of AMD's DX11 Leo demo was pretty impressive. This issue goes both ways: there are effects and approaches in modern real-time rendering that would have been very difficult (computationally expensive) to do in 1995, as techniques to render them efficiently had not yet been developed.
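As a rough sanity check on raw throughput (a back-of-envelope sketch; the resolution and refresh figures are the from-memory numbers above, and the fill rate is an assumed ballpark for a Titan X-class card, not a measured spec):

```python
# Raw pixel throughput TS1's output requires vs. what a modern GPU
# can rasterize. The fill rate is an assumed ballpark, not a spec.
width, height, fps = 1536, 922, 24
ts1_pixels_per_s = width * height * fps
print(f"TS1 output: {ts1_pixels_per_s / 1e6:.1f} Mpixels/s")   # ~34.0

assumed_fill_rate = 90e9  # ~90 Gpixels/s, Titan X-class guess
print(f"Headroom: ~{assumed_fill_rate / ts1_pixels_per_s:.0f}x")
```

Raw fill rate clearly isn't the bottleneck; the hard part is matching the per-pixel shading, filtering, and edge quality.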

Source: friends in the industry, and years at Beyond3D, where this question is asked every couple of years.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
If we attempted a good port of Toy Story on NVAPI, we could probably get it running at maybe 20 frames per second?
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
This is from May 2011:

The rendering provided an interesting test of Moore's Law. Well, not the real Moore's Law, but the one that says that computing power doubles every 18 months. In 15 years, we'd get 10 doublings, which would make modern computers 1000x faster. Our original Toy Story frames were averaging four hours, which is 240 minutes, so we might naively expect that we could render frames in just 15 seconds. We didn't really achieve that: our average render times were probably on the order of 2-4 minutes per frame (the original productions weren't instrumented to keep accurate statistics on render time, and we never bothered to really reinstrument them to do so).
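Checking the arithmetic in that quote (a quick sketch; all inputs are the quote's own numbers):

```python
# Sanity check on the Moore's-law arithmetic quoted above.
doublings = 15 * 12 / 18            # 15 years at 18 months/doubling = 10
speedup = 2 ** doublings            # 1024x, the "1000x faster"
original_min = 240                  # 4 hours per frame originally
print(f"Naive expectation: {original_min * 60 / speedup:.0f} s/frame")  # ~14 s

# Observed 2-4 min/frame implies the actual speedup achieved:
for observed_min in (2, 4):
    print(f"{observed_min} min/frame -> {original_min / observed_min:.0f}x actual")
```

So the real-world speedup (60-120x) fell roughly an order of magnitude short of the naive 1000x estimate.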

So even today, no. But I am sure they made that movie with the highest quality output they could (ray tracing?). If they didn't try to render the exact same thing in the same way, though, and instead just made something that looks 99% as good using today's techniques, I bet it could be achieved.
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
There is a difference between replicating the technique in real-time and replicating the result.

I can see that. Plus I know today we wouldn't do Toy Story as-is; we would have better textures, higher res, etc.

I guess I am asking about the end result and whatever way you can get to it. It seems like some games come near that quality in terms of raw pixels, but nowhere near the AA level.
 

taserbro

Senior member
Jun 3, 2010
216
0
76
I remember seeing a video of a SIGGRAPH presentation where the Pixar technical people had an indoor house scene from Up (which looks significantly better than Toy Story 1) rendered at close to real time at production quality using two GTX 580s (don't quote me on that, my memory of the specific rig they used is blurry) and some experimental render settings.

Strictly using traditional ray-tracing renderers, I think we are probably still a ways off, but with experimental rendering methods, I think we've had the ability to render something that looks like Toy Story 1 in real time for a very long time now.
 

Despoiler

Golden Member
Nov 10, 2007
1,968
773
136
Incorrect, it can be used for anything related to Nvidia GPUs ...

Do you even know what rendering is? It doesn't do rendering. It's a command/control API that allows finer-grained control of Nvidia graphics card features than rendering-capable APIs like DX allow.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
Do you even know what rendering is? It doesn't do rendering. It's a command/control API that allows finer-grained control of Nvidia graphics card features than rendering-capable APIs like DX allow.

Yes, I do know what rendering is, but do you know what NVAPI is?

Your comments about NVAPI aren't even half right for the most part ...
 

Despoiler

Golden Member
Nov 10, 2007
1,968
773
136
Yes, I do know what rendering is, but do you know what NVAPI is?

Your comments about NVAPI aren't even half right for the most part ...

It's right on their dev page....

https://developer.nvidia.com/nvapi

And before you say the "Frame Rendering: Ability to control Video and DX rendering not available in DX runtime. Requires NDA Edition for full control of this feature." bullet point is rendering: it's controlling the rendering, not doing the rendering. The rendering is done by DX.
 

Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
Hasn't the problem with GPU rendering always been one of programming, compatibility, and precision? Do even modern GPUs, really made for real-time rendering, possess the capabilities to actually replace CPUs?
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
It's right on their dev page....

https://developer.nvidia.com/nvapi

And before you say the bullet point is rendering: it's controlling the rendering, not doing the rendering. The rendering is done by DX.

I didn't say it was rendering, but your preemptive guess was a miss ...

BTW their dev page doesn't cover the whole story ...

NVAPI CAN be used for rendering, as you'd know if you've seen any of their blogs or presentations, so you're still incorrect as to what NVAPI actually is ...
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
Hasn't the problem with GPU rendering always been one of programming, compatibility, and precision? Do even modern GPUs, really made for real-time rendering, possess the capabilities to actually replace CPUs?

The problem with GPU rendering is primarily memory ...

The reason render farms are still CPU-based is that they need tons of memory to store the scene before they can do any meaningful rendering, and PCIe won't cut it since it's too slow for streaming large amounts of data, so server boards it is ...
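To put rough numbers on that (a sketch; the scene size, VRAM, and link speed are illustrative assumptions, not measurements of any particular farm):

```python
# Why streaming a film-sized scene over PCIe each frame doesn't work.
# All figures are illustrative assumptions.
scene_gb = 100            # assumed working set for a production scene
vram_gb = 12              # Titan X-class card
pcie_gb_per_s = 15.75     # PCIe 3.0 x16 theoretical peak
frame_budget_s = 1 / 24   # real-time budget per frame

overflow_gb = scene_gb - vram_gb
transfer_s = overflow_gb / pcie_gb_per_s
print(f"Streaming {overflow_gb} GB takes ~{transfer_s:.1f} s per frame, "
      f"~{transfer_s / frame_budget_s:.0f}x over the "
      f"{frame_budget_s * 1000:.1f} ms budget")
```

Hence the big shared-memory CPU boxes: hundreds of gigabytes of RAM sitting right next to the cores, with no bus in the way.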
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
I think Unreal Engine has a lot of demos that are pushing CGI-level quality.

Unreal Engine 4 Kite Demo on a single Titan X.
https://www.youtube.com/watch?v=w6EMc6eu3c8

 

Subyman

Moderator, VC&G Forum
Mar 18, 2005
7,876
32
86
Looking at the screens, I would be extremely surprised if a consumer GPU setup could render it in real time. The problem is the rendering method, not the resulting image itself. The same fidelity as Toy Story could probably be achieved in near real time with different rendering methods, but I'm sure they used ray tracing. The reflections and shadows on the characters are extremely sharp, so you can be sure they used a substantial number of passes.

Anyway, it's academic. The rendering method used CPUs at the time and outdated methods. You'd probably have to port the entire setup to even get it to run on modern hardware.
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
You'd probably have to port the entire setup to even get it to run on modern hardware.

They did have to do that for the 3D version:

"To get it running with our current operating system was no small feat," says Mr. Lasseter...The process of rendering the films -- or translating computer data into images -- was vastly accelerated by current technology. Where the original "Toy Story" required an hour per frame to create, Mr. Lasseter said, rendering the new 3-D version took less than 1/24th of a second per frame.

http://www.wsj.com/articles/SB125201712352284765

That is the quote that originally got me thinking. The article is from 2009, and Pixar's render farm then could do it in better than real time. Plus I watched Toy Story 2, and the car scenes (i.e. the ones with a LOT of movement) look to be near the quality of many modern games:

[Toy Story 2 screenshot]


I guess I didn't expect such a difference between CPU and GPU rendering. That Kite Demo looks pretty close.
 

therealnickdanger

Senior member
Oct 26, 2005
987
2
0
The original Toy Story? No problem! 1536x922 resolution, simple models, and very limited lighting, shadow, and texture detail. It wasn't until 2008-ish that they recreated both Toy Story 1 and 2 at a higher rendered resolution for 3-D and Blu-ray releases to bring them closer to Toy Story 3's quality. Expect to see a 4K re-re-release of Pixar films starting next year.

http://www.tested.com/art/movies/449542-finding-nemo-3d-interview/
http://www.avsforum.com/forum/150-b...4-toy-story-1-toy-story-2-comparison-pix.html
http://imgur.com/gallery/EZido

 

Subyman

Moderator, VC&G Forum
Mar 18, 2005
7,876
32
86
They did have to do that for the 3D version:



http://www.wsj.com/articles/SB125201712352284765

That is the quote that originally got me thinking. The article is from 2009, and Pixar's render farm then could do it in better than real time. Plus I watched Toy Story 2, and the car scenes (i.e. the ones with a LOT of movement) look to be near the quality of many modern games:

[Toy Story 2 screenshot]


I guess I didn't expect such a difference between CPU and GPU rendering. That Kite Demo looks pretty close.

Thanks for the info, that is neat. I bet it was quite an undertaking to get that updated.
 

gorobei

Diamond Member
Jan 7, 2007
3,957
1,443
136
short answer: yes and no.

at the 1536x922 res with '90s-era shaders @ 24fps, a modern gpu could handle the basic render and lighting fairly easily in realtime. the hardware is entirely up to the task. renderman really wasn't that advanced back then.

the problem is that film/tv high-end 3dCG is done in passes that are composited in 2d. color, specular, shadow, and reflection passes are rendered separately and each layer is custom blended in the compositor. 3d game render pipelines aren't really designed to work that way yet. since all the compositing layer blends/gradients/mattes data is effectively per frame, it can't be used realtime as it would be a huge amount of file data to be loaded for each cut. any scene with fast cuts back and forth for reaction shots would need massive vram to store all the data and ssd arrays to feed that data to the gpu.
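to put a rough number on that file data (a quick sketch; the pass count and bit depth are illustrative guesses, not pixar's actual pipeline):

```python
# rough compositing data-rate estimate; pass count and bit depth
# are illustrative guesses, not Pixar's actual numbers
width, height, fps = 1536, 922, 24
passes = 8               # color/specular/shadow/reflection etc. (assumed)
bytes_per_px = 4 * 4     # RGBA at 32-bit float per channel (assumed)

per_frame_mb = width * height * passes * bytes_per_px / 2**20
print(f"~{per_frame_mb:.0f} MB of layer data per frame")            # ~173 MB
print(f"~{per_frame_mb * fps / 1024:.1f} GB/s sustained at 24fps")  # ~4.1 GB/s
```

that's ssd-array territory just to feed the compositor, before the gpu does any rendering of its own.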

also, pre-rendered animation typically has separate custom lighting setups for each character, prop, and background in a single shot. you could have motivated/rim/fill/highlight/efx lighting on multiple characters on screen at the same time. loading that data plus the compositing data would kill any possibility of realtime rendering.

so a multi-gpu setup could probably render the layer assets in realtime, but the actual compositing of the layers for a frame would push it outside the 24fps range.
 

reallyscrued

Platinum Member
Jul 28, 2004
2,618
5
81
If you make sacrifices in geometry and lighting, we can get real close with even the weakest of graphics architectures.

[Super Mario 3D World screenshot]