short answer: yes and no.
at the ~1.5K x 0.9K res and 90's shaders @ 24fps, a modern gpu could handle the basic render and lighting fairly easily in realtime. the hardware is entirely up to the task. renderman really wasn't that advanced back then.
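rough back-of-envelope on the raw shading work (frame size and the ops-per-pixel number are just assumptions to show the scale):

```python
# back-of-envelope: raw shading work for a 90's-style film-res frame at 24fps.
# numbers are illustrative assumptions, not measurements of any real show.
width, height, fps = 1536, 922, 24            # ~1.5K x 0.9K frame
pixels_per_second = width * height * fps      # ~34 million shaded pixels/sec

# even at a generous 10,000 shader ops per pixel, that's ~0.34 TFLOP/s of work,
# a small fraction of what a current consumer gpu can do in a second.
ops_per_pixel = 10_000
print(f"{pixels_per_second / 1e6:.0f} Mpix/s, "
      f"~{pixels_per_second * ops_per_pixel / 1e12:.2f} TFLOP/s of shading work")
```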
the problem is that high-end film/tv 3d CG is done in passes that are composited in 2d. color, specular, shadow, and reflection passes are rendered separately and each layer is custom blended in the compositor. 3d game render pipelines aren't really designed to work that way yet. since all the compositing blends/gradients/mattes are functionally per-frame data, they can't be used in realtime, it would be a huge amount of file data to load for each cut. any scene with fast cuts back and forth for reaction shots would need massive vram to hold all that data and ssd arrays to feed it to the gpu.
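to make the pass/comp idea concrete, here's a minimal sketch of what the compositor does for one frame. the pass names, blend order, and mattes are made up; real comps use way more layers and per-shot custom ops:

```python
import numpy as np

H, W = 922, 1536  # one film-res frame

# separately rendered passes, as they'd come out of the renderer
color      = np.random.rand(H, W, 3).astype(np.float32)
specular   = np.random.rand(H, W, 3).astype(np.float32)
shadow     = np.random.rand(H, W, 1).astype(np.float32)   # scalar shadow matte
reflection = np.random.rand(H, W, 3).astype(np.float32)

# per-frame, per-shot artistic data: custom mattes/gradients the compositor
# uses to blend each layer. these are hand-tuned and change with every cut.
spec_gain  = np.random.rand(H, W, 1).astype(np.float32)
refl_matte = np.random.rand(H, W, 1).astype(np.float32)

# one possible comp graph (hypothetical weights and order):
beauty = color * shadow             # darken the color pass by the shadow matte
beauty += specular * spec_gain      # add graded specular
beauty += reflection * refl_matte   # add matted reflections
beauty = np.clip(beauty, 0.0, 1.0)  # final composited frame
```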
also, pre-rendered animation typically has separate custom lighting setups for each character, prop, and background in a single shot. you could have motivated/rim/fill/highlight/efx lighting on multiple characters on screen at the same time. loading all that data plus the compositing data would kill any chance of realtime rendering.
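a toy illustration of how those per-shot rigs stack up, with every on-screen asset carrying its own key/rim/fill/efx lights that only exist for that one shot (asset names and numbers are invented):

```python
from dataclasses import dataclass, field

@dataclass
class Light:
    kind: str          # "key", "rim", "fill", "efx", ...
    color: tuple       # rgb
    intensity: float

@dataclass
class ShotLightRig:
    asset: str                              # the character/prop/background this rig lights
    lights: list = field(default_factory=list)

# one shot's lighting: each on-screen asset gets its own hand-built rig
shot_042 = [
    ShotLightRig("hero_character", [Light("key", (1.0, 0.95, 0.9), 1.2),
                                    Light("rim", (0.6, 0.7, 1.0), 0.8)]),
    ShotLightRig("sidekick",       [Light("key", (1.0, 1.0, 1.0), 1.0),
                                    Light("fill", (0.4, 0.4, 0.5), 0.3)]),
    ShotLightRig("background_set", [Light("efx", (0.2, 0.9, 0.3), 2.0)]),
]

total_lights = sum(len(rig.lights) for rig in shot_042)
print(f"{len(shot_042)} rigs, {total_lights} lights for this one shot alone")
```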
so a multi-gpu setup could probably render the layer assets in realtime, but the actual compositing of those layers for each frame would push it outside the 24fps range.
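rough numbers on why the per-frame comp data is the killer (pass count, bit depth, and resolution are all assumptions, just to show the scale):

```python
# illustrative budget for streaming per-frame compositing data at 24 fps.
W, H = 1536, 922
passes = 8                   # color, spec, shadow, reflection, mattes, gradients...
channels = 3
bytes_per_channel = 2        # 16-bit half float

frame_bytes = W * H * channels * bytes_per_channel * passes   # ~68 MB per frame
per_second = frame_bytes * 24                                  # ~1.6 GB/s sustained
frame_budget_ms = 1000 / 24                                    # ~41.7 ms per frame

# and that's for one shot; fast reaction-shot cuts mean several shots' worth
# has to be resident in vram or streamed just-in-time from fast storage.
print(f"{frame_bytes / 1e6:.0f} MB/frame, {per_second / 1e9:.1f} GB/s, "
      f"{frame_budget_ms:.1f} ms frame budget")
```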