
What is the maximum number of CPUs that can be used for CGI rendering?

jondeker

Junior Member
For companies like Pixar, Weta, and ILM, what is the theoretical CPU limit?

Can they get up to one CPU per frame? One CPU per pixel?
 
At a minimum, the maximum number of usable CPU cores would be one core per pixel per frame.

So at 24 frames per second, for a 90-minute movie rendered at 4K resolution (4096×2160), the maximum number of CPU cores would be no less than 4096 × 2160 × 90 × 60 × 24 = 1,146,617,856,000.

That's about 1.1 trillion CPU cores...
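The arithmetic above can be checked with a few lines of Python, using the same assumptions stated in the post (DCI 4K at 4096×2160, 24 fps, 90-minute runtime):

```python
# Upper bound on usable cores at one core per pixel per frame,
# using the assumptions from the post: DCI 4K, 24 fps, 90 minutes.
width, height = 4096, 2160
fps = 24
minutes = 90

pixels_per_frame = width * height   # 8,847,360 pixels
frames = minutes * 60 * fps         # 129,600 frames
cores = pixels_per_frame * frames

print(f"{cores:,}")  # 1,146,617,856,000
```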
 

And then if it were operating with one core per emitted ray... *head explodes*
 
It depends on the scene. Ray tracing is more task-parallel than data-parallel: rather than running the same function over x pixels, a ray tracer runs many functions per pixel. For a complex scene with many secondary rays, you could send out 128 × 128 ≈ 16,000 shadow rays that could all run on independent cores. You can do the same for subsurface scattering, global illumination, refractions, reflections, and supersampling, which works out to roughly 300,000 threads per pixel. The blending of samples will not be perfectly parallel, but that is on the order of 700 billion threads per frame.

This is why ray tracing is considered the holy grail of graphics but the enemy of real-time rendering.
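A toy sketch of why per-pixel ray tracing parallelises so well: every pixel's rays are independent, so pixels can be shaded concurrently with no shared state. The image size and the shading function here are stand-ins, not a real renderer:

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 8, 8  # tiny image so the sketch runs instantly

def shade(pixel):
    # A real shader would fire a camera ray here, plus secondary
    # shadow/reflection/refraction rays; we fake a radiance value.
    x, y = pixel
    return (x, y, (x * 31 + y * 17) % 256)

pixels = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)]
with ThreadPoolExecutor() as pool:
    frame = list(pool.map(shade, pixels))

print(len(frame))  # one result per pixel
```

Note that CPython threads won't actually speed up pure-Python math because of the GIL; production renderers use native threads or separate processes, but the independence of the per-pixel work is the same.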
 
Theoretically you may be able to break it down to one thread per pass per pixel per frame, but unfortunately there are limitations in the software that would prevent the maximised use of such an awesome render farm (does the Apple store sell a 1.1-trillion-core render farm? 🙂).

I know DreamWorks has an internal render farm running approximately 5,000 CPU cores, and it still took over six months of render time to complete the final frames for Shrek 3.

I use Blender, an open-source 3D program, and it has a maximum thread count of 64. There are plugins and scripts to break the frames apart so you can render them on separate machines, but you may only be able to break those frames into halves or quarters depending on the software. And once the frames are broken up for rendering on separate cores, they need to be put back together, so splitting them down to individual pixels may actually increase render times, since the pixels would need to be reassembled to form the original frame.
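The split-render-stitch workflow described above can be sketched in a few lines (splitting into horizontal bands rather than true quarters for brevity; `render_tile` is a placeholder, not Blender's API):

```python
# Sketch of the tile workflow: split a frame into bands, "render"
# each band independently (e.g. on separate machines), then stitch
# the results back together. The stitch step is the reassembly
# overhead the post warns about.
def split(frame, n_bands):
    h = len(frame) // n_bands
    return [frame[i * h:(i + 1) * h] for i in range(n_bands)]

def render_tile(band):
    # Placeholder for actually rendering these rows.
    return [[value + 1 for value in row] for row in band]

def stitch(bands):
    return [row for band in bands for row in band]

frame = [[0] * 4 for _ in range(4)]  # blank 4x4 "frame"
rendered = stitch(render_tile(b) for b in split(frame, 2))
```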
 