I don't think there has necessarily been an "advancement" in the graphics of animated movies. After all, it's not like the animators have an API to work with, right? I mean, if they had wanted to put in a special effect like heat bending light back in Toy Story, they would have been able to do so.
They do have APIs, several of them custom-written just for professional graphics. Mental Ray programmers are some of the highest-paid people in graphics work. Others work with APIs like Euphoria, adapting them to the current project. Every film you see at the theater probably had at least 2 programmers on it, and often as many as 10 or more. The reason we do not see effects like heat distortion or bump mapping or anything else is often that they go against the look the director wants to achieve. The other issue is the time it takes to render out scenes. Here are some examples from my current workload:
Scene 1: 4096 x 2340, 35mm film aspect 1.75, total polygon count 9,721,222, total texture count 173, total texture size 3,221,428,931, render time (4 cores): 9 min 21 s
Scene 2: 4096 x 2340, 35mm film aspect 1.75, total polygon count 7,274,219, total texture count 181, total texture size 2,003,163,295, render time (4 cores): 14 min 04 s
Those times are per frame. I need 24 frames for one second of film, so Scene 1 takes roughly 224 minutes of rendering for each second of film (over 3.5 hours), or more than 9 days for one minute of film. Massive render farms speed that up, but even the largest render farms average around 30 seconds per frame, so each time the film is rendered out for preview it can take around 30 days of render time. That is why we generally use previews without ray tracing or particles until they are really needed.
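If you want to sanity-check those numbers, here is a quick back-of-the-envelope Python sketch. The per-frame times, the 24 fps rate, and the 30 s/frame farm average are the figures from this post; the 90-minute feature length is just an illustrative assumption:

```python
# Back-of-the-envelope render-time math for the scenes above.

FPS = 24                           # film frame rate
scene1_frame_sec = 9 * 60 + 21     # 9 min 21 s per frame on 4 cores

per_film_second = scene1_frame_sec * FPS   # render seconds per second of film
per_film_minute = per_film_second * 60     # render seconds per minute of film

print(f"1 s of film:   {per_film_second / 3600:.1f} h of rendering")   # ~3.7 h
print(f"1 min of film: {per_film_minute / 86400:.1f} days")            # ~9.4 days

# On a farm averaging 30 s/frame, a hypothetical 90-minute feature:
farm_frame_sec = 30
feature_frames = 90 * 60 * FPS
print(f"feature total: {feature_frames * farm_frame_sec / 86400:.0f} machine-days")
```

The point of the exercise is that the cost scales linearly with per-frame time, which is why dropping ray tracing and particles from previews pays off so quickly.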
Using GPUs for rendering right now is not practical for film. The reason is that so much of the industry relies on renderers like Mental Ray that were designed to run on a CPU, and porting that to the GPU has been a nightmare for those who have tried it. Nvidia tried it with Gelato, but the majority of the features artists use were not supported. The renderers themselves have also changed a lot over the years; they are now very complex, which makes porting them to a GPU even harder.
One thing I want to clear up is ray tracing. A lot of people assume ray tracing means the shiny reflections on objects, and while it is used for that, that is not its primary use by artists. A good example is a product called V-Ray. V-Ray has become one of the industry's most used renderers because of its ability to trace every bounce of light down to detail levels approaching the absurd. There is no way video cards will render at that level anytime in the near future. Check out the galleries:
http://www.chaosgroup.com/en/2/galleries.html
http://www.maxwellrender.com/
V-Ray and Maxwell are the new trend toward making renderers physically accurate. Previously, Pixar and others used tricks to render scenes: the artist would guess what the final output should look like and use textures to fake the result. Now the artist can let the renderer do all the work. I can tell it that the room is lit from the east, that the color temperature of the light is 4300 K, that the walls reflect 5% of the light, that the furniture materials absorb 20%, and that the shadows cast on the floor are reflected back at 11%, and it will do all the work, bouncing rays around the scene until the energy of each ray is gone. Along its path a ray picks up the color of the objects it bounces off and figures that into the output. These renderers also do something called subsurface scattering (SSS). SSS is the effect you get when you place a bright light behind your hand and the edges of your hand glow red: that is light that entered the skin, bounced around under it, then reflected back out.
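To make the "bounce until the energy is gone" idea concrete, here is a toy Python sketch of that loop. This is emphatically not how V-Ray works internally (there are no real intersection tests or light sampling here), and the surfaces and albedo values are just made-up numbers echoing the percentages in the example above:

```python
import random

# Toy sketch of an energy-terminated light path.
# Each "surface" is just an RGB albedo: the fraction of light it reflects.
SURFACES = {
    "wall":      (0.05, 0.05, 0.05),   # walls reflect ~5%
    "furniture": (0.80, 0.60, 0.40),   # absorbs ~20% overall, tinted
    "floor":     (0.11, 0.11, 0.11),   # floor bounces ~11% back
}

def trace(max_bounces=16, min_energy=1e-3):
    """Follow one light path, multiplying in each surface's color
    until the carried energy is effectively gone."""
    throughput = [1.0, 1.0, 1.0]       # energy carried per color channel
    for bounce in range(max_bounces):
        surface = random.choice(list(SURFACES))   # stand-in for a ray/scene hit
        albedo = SURFACES[surface]
        throughput = [t * a for t, a in zip(throughput, albedo)]
        if max(throughput) < min_energy:          # energy effectively gone
            return throughput, bounce + 1
    return throughput, max_bounces

color, bounces = trace()
print(f"path died after {bounces} bounces, residual energy {color}")
```

A real renderer fires millions of such paths per frame and integrates the results per pixel, which is exactly why the render times quoted earlier get so long once full ray tracing is switched on.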
The tech is there to make CG films photorealistic; it just isn't done, because the purpose of CGI right now is to do things that are unique to the genre, not to imitate a live-action film. It would be like a hand-drawn cartoonist trying to create a perfectly lifelike animation: it wouldn't be nearly as entertaining.