The reason's pretty obvious. Current games are designed to be run with at least a reasonable amount of eye candy turned on. If all the customers turned their graphical settings down, it would defeat the developers' original intent of pushing the graphical envelope (and driving discrete GPU sales).
You could also look at it this way.
Each game has several components, or layers of rendering. It starts with polygons, then meshes and textures go over them, and finally shaders are applied on top. For in-game interactivity such as physics, the engine has to be designed to run either at full power with sufficient resources, at medium fidelity with some features disabled, or with those features turned off entirely to save resources.
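To make that concrete, here's a minimal sketch of how an engine might gate physics features behind quality tiers. The tier names and feature flags are purely illustrative, not from any real engine:

```python
# Hypothetical quality tiers: each one enables or disables expensive
# physics features. Real engines do something similar, just far more
# granular (LOD distances, simulation substeps, particle caps, etc.).
PHYSICS_TIERS = {
    "high":   {"cloth_sim": True,  "debris": True,  "ragdolls": True},
    "medium": {"cloth_sim": False, "debris": True,  "ragdolls": True},
    "off":    {"cloth_sim": False, "debris": False, "ragdolls": False},
}

def enabled_features(tier):
    """Return the set of physics features active at a given tier."""
    return {name for name, on in PHYSICS_TIERS[tier].items() if on}
```

The point is that "medium" isn't a scaled-down version of "high"; whole features are simply dropped, which is why the cut-down game can look qualitatively different rather than just blurrier.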
On the most basic level, modeling, such as polygon count and geometric calculations, we know that a game like Crysis sports a base polygon density many times higher than that of Half-Life 2, Doom 3 or Far Cry at any given resolution. This by default puts more strain on the hardware, regardless of specific optimizations.
Texture sizes have also exploded in recent years as GPUs are equipped with more and more RAM. The Radeon 9700 had a paltry (by today's standards) 128 MB of DDR memory with much lower bandwidth, so games developed in the same era were coded to maximize, but not exceed, those specs. Games such as Crysis, designed to take advantage of 512 MB or more of GDDR3/GDDR4/GDDR5 memory, not only had quadruple (sometimes more) the memory buffer available to load textures into, but also far more bandwidth headroom courtesy of the faster graphics RAM and the PCIe x16 (and 2.0) interface. Obviously, for a game designed to indulge in excess spending of system resources to run on an older system with a fraction of the computing power, it has to make severe compromises that weren't part of its intended programming. This likely means some textures simply never get loaded for lack of memory, and for the many models that rely on lots of small, intricate textures for their eye candy, that's devastating.
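A quick back-of-the-envelope calculation shows how little room 128 MB actually gives you. The helper below is my own rough sketch (the 4/3 factor is the standard approximation for a full mip chain, since 1 + 1/4 + 1/16 + ... converges to 4/3):

```python
def texture_bytes(width, height, bytes_per_pixel=4, mipmapped=True):
    """Approximate VRAM footprint of one uncompressed texture.

    A full mipmap chain adds roughly one third on top of the base
    level, hence the 4/3 multiplier.
    """
    base = width * height * bytes_per_pixel
    return base * 4 // 3 if mipmapped else base

# One uncompressed 2048x2048 RGBA8 texture with mips is ~21.3 MiB,
# so only about six of them fit in a 9700's entire 128 MB of VRAM
# (ignoring compression, the framebuffer, and geometry).
one = texture_bytes(2048, 2048)
print(one / 2**20)
print((128 * 2**20) // one)
```

Texture compression and smaller assets stretch that budget enormously in practice, but the basic arithmetic is why a game authored against a 512 MB budget has to start throwing assets away on older cards.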
Physics and shadows didn't even used to be a major issue. I remember when Doom 3 came out with its GPU-killing "Ultra quality" setting, and its shadows and shading were considered revolutionary. Soon afterwards those effects became a commodity, then quickly outdated in light of soft shadows, enhanced ambient lighting and all the other shading gimmicks we see today. Physics was also a big selling point of Half-Life 2, although by today's standards it was pretty simplistic and wouldn't hold a candle to Nvidia's PhysX (although Havok has come a long way since then). The complexity of these operations is several orders of magnitude more resource-heavy in their current forms than in their initial 9700 Pro-era states.
The answer is that older games were built around older graphics APIs and hardware, which can obviously be expected to run them with as little wasted effort as possible. But when older hardware is paired with modern code, some of it far outside the old hardware's design envelope or completely beyond what its architects envisioned, a lot of it simply won't run, and the slim-fast version of a modern game ends up looking worse than the older games.