Build a 700 MHz P3 with a GeForce 4 and 64 MB of video RAM and try running Quake III at 512 x 448i with maximum settings, and set com_maxfps to 30 or 60. It will run quite smoothly, as will just about any PC equivalent of a game you'll find on the Xbox. If it looks sucky, just try a CTF or custom map where there is some color and it will look as good as any console game. What about Halo? I'm convinced they made Halo suck on purpose, probably a condition imposed by Microsoft for a PC release... If Carmack had worked on the PC port of Halo it would have been fine.
TV is low resolution. You only need to render at 640 x 480 @ 30 fps or 640 x 240 @ 60 fps. In reality it's even smaller than that, because you have to account for NTSC overscan, so your frame buffer area ends up being 512 x 224 or 512 x 448. This also happens to fall on an even page boundary for systems like the PS2, where video memory is allocated in pages, so video memory is used efficiently without a single byte wasted.
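To put rough numbers on that (treating the 8 KB page size as an assumption from memory, not gospel), a quick C check shows both of those frame buffer sizes dividing into whole pages:

#include <stdio.h>

/* Back-of-the-envelope check: the overscan-trimmed frame buffers
 * divide evenly into video memory pages (assumed to be 8 KB here). */
int main(void)
{
    const int page_size = 8192;                /* assumed page size in bytes */
    const int widths[]  = { 512, 512 };
    const int heights[] = { 224, 448 };
    const int bpp       = 4;                   /* 32-bit frame buffer */

    for (int i = 0; i < 2; i++) {
        int bytes = widths[i] * heights[i] * bpp;
        printf("%dx%d: %d bytes = %d pages, remainder %d\n",
               widths[i], heights[i], bytes,
               bytes / page_size, bytes % page_size);
    }
    return 0;
}

Both sizes come out to a whole number of pages with zero bytes left over, which is the point about nothing being wasted.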
Certain effects are also cheaper on TV. Metroid Prime, as awesome as it looks, uses nothing more than static lightmaps and dynamic vertex lighting. That's right, vertex lighting. But it looks great on a TV even with poly counts slightly higher than, say, Q3A. Another example is greatly reducing the backdrop texture resolution and poly count during cinematics and focusing ALL graphics power on the characters. This is pretty much how games like Xenosaga and DOA get away with such detailed character models.
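If "dynamic vertex lighting" sounds fancy, it isn't: it's one Lambert term per vertex, and the rasterizer interpolates the resulting colors across the triangle. A sketch of the idea (struct and function names are mine, not from any shipping engine):

#include <math.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Dynamic vertex lighting in a nutshell: one N.L term per vertex,
 * then the hardware interpolates the colors across each triangle.
 * Cheap, and at TV resolution the banding is hard to see. */
void light_vertices(const vec3 *pos, const vec3 *normal, float *out_rgb,
                    int count, vec3 light_pos, vec3 light_rgb)
{
    for (int i = 0; i < count; i++) {
        vec3 L = { light_pos.x - pos[i].x,
                   light_pos.y - pos[i].y,
                   light_pos.z - pos[i].z };
        float len = sqrtf(dot3(L, L));
        if (len > 0.0f) { L.x /= len; L.y /= len; L.z /= len; }
        float ndotl = dot3(normal[i], L);
        if (ndotl < 0.0f) ndotl = 0.0f;
        out_rgb[i*3 + 0] = light_rgb.x * ndotl;
        out_rgb[i*3 + 1] = light_rgb.y * ndotl;
        out_rgb[i*3 + 2] = light_rgb.z * ndotl;
    }
}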
This also means you can get away with lower resolution textures. The majority of textures used on TV screens are 32x32 or 64x64, and usually 8 bit. Low-res textures are evident even in Halo. A PC game, meanwhile, has to work on every system, even those without 64 MB of VRAM, so main RAM is used as a staging area for textures (driving the system RAM requirement up) and they are shipped to the graphics card over AGP. On consoles memory is usually unified and there isn't a bus bottleneck (which also allows vertex data to be used by either GPU or CPU with no penalties or shuffling between memory types). Even if all textures stayed resident in VRAM, the game engine still has to be prepared for a task switch on a PC, where the GDI can come in and clobber your video memory. Backup copies are maintained in system RAM so the textures can be restored immediately when needed. On a console, none of this is a problem, since we have a fixed memory layout, free from competition from other apps, from the time a level is loaded to the time it is unloaded.
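Here is roughly what that backup-copy pattern looks like in plain OpenGL on the PC side; it's a sketch of the general idea, not lifted from any particular engine:

#include <GL/gl.h>
#include <stdlib.h>
#include <string.h>

/* PC-style texture management: the driver may lose the VRAM copy
 * (task switch, mode change), so the engine keeps its own system-RAM
 * copy and re-uploads on demand. On a unified-memory console the GPU
 * just reads the one copy that is already sitting there. */
typedef struct {
    GLuint id;
    int width, height;
    unsigned char *sysram_copy;   /* staging/backup copy in main RAM */
} Texture;

void texture_upload(Texture *t)
{
    glBindTexture(GL_TEXTURE_2D, t->id);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, t->width, t->height,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, t->sysram_copy);
}

Texture *texture_create(const unsigned char *pixels, int w, int h)
{
    Texture *t = malloc(sizeof *t);
    t->width = w;
    t->height = h;
    t->sysram_copy = malloc((size_t)w * h * 4);
    memcpy(t->sysram_copy, pixels, (size_t)w * h * 4); /* RAM cost doubles */
    glGenTextures(1, &t->id);
    texture_upload(t);            /* AGP transfer into VRAM */
    return t;
}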
64 MB is a lot of memory when you don't have an OS, a shell, and a ton of device drivers loaded while running a game. The microkernels that consoles run are very small and generally built into the BIOS ROM, so they are not even factored into RAM consumption! Also, console programmers are historically used to working with little memory and go to great efforts to minimize memory consumption, up to and including compressing all the text in a simple game cartridge! PC programmers are used to having relatively infinite RAM and HD space. This is why you don't see hundreds of uncompressed .tga files on a console CD-ROM like you do in your 3 GB install dir on a PC. This is especially true for load times, where consoles are limited to streaming compressed data off a slow optical drive.
More efficient code? The general case is the slowest case on the PC. Even if you bloated your code with 20 different rendering paths for each version of vertex shader and pixel shader, the added overhead of converting generalized data to a particular texture format, or of deciding at runtime how to achieve something based on hardware capabilities, starts to add up. Programmers can get frustrated with supporting 30 extensions on each vendor's card, and just don't support certain features at all. Even Carmack says Doom 3 is the last time he is catering to multiple vendors with specialized rendering paths; the next engine will use all stock GL, and vendors had better get their act together.
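The kind of runtime decision I mean looks something like this in stock GL. The extension strings are real; the path names, and the idea that there are exactly three of them, are just for illustration:

#include <GL/gl.h>
#include <string.h>

/* The PC reality: pick a rendering path at runtime based on what the
 * driver advertises. Every frame then pays for the indirection, and
 * every path has to be written and tested separately. */
typedef enum {
    PATH_FIXED_FUNCTION,
    PATH_NV_REGISTER_COMBINERS,
    PATH_ARB_FRAGMENT_PROGRAM
} RenderPath;

RenderPath choose_render_path(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    if (ext && strstr(ext, "GL_ARB_fragment_program"))
        return PATH_ARB_FRAGMENT_PROGRAM;   /* newer DX9-class cards */
    if (ext && strstr(ext, "GL_NV_register_combiners"))
        return PATH_NV_REGISTER_COMBINERS;  /* GeForce-class cards */
    return PATH_FIXED_FUNCTION;             /* lowest common denominator */
}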
On a PC a call to draw something goes something like this:
1) Draw
2) kernel32.dll (or the GDI display driver dll like nv4_disp.dll?)
3) ntdll.dll
4) Interrupt 02Eh kernel mode switch
5) current thread yields remaining CPU time and OS has the ball
6) validate user mode parameters and copy to kernel mode memory
7) call display driver
8) call HAL
9) modify hardware
10) find a different thread to run (system calls yield the remaining CPU time to another thread)
Note that on a PC, with a secure operating environment where no user process can crash the system, this level of abstraction is desirable!
On a console it's more like this (having written a small OpenGL lib for my PS2 on bare metal):
1) glDrawArrays
2) build command packet and start DMA
3) return to calling thread
None of the validation and OS abstraction is needed. Because the programmers know Kazaa, BonziBuddy, and explorer.exe won't be running at the same time, there is no need for such a classical and rigid OS paradigm. If the program doesn't crash running by itself in the developer's environment, it won't crash on the user's, because it's impossible for the user to install other programs, change settings, etc. In fact, an OS on a console pretty much exists for convenience, to provide some standard services, and it can sometimes be bypassed altogether!
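For the curious, here's roughly the shape of that console-side path, sketched from memory in C. The register addresses, the start-bit value, and the tag layout are placeholders standing in for the real PS2 GIF/DMAC programming (cache writeback and bounds checking are also omitted), so read it as an illustration of the flow rather than working bare-metal code:

#include <stdint.h>

/* Placeholder DMA channel registers (addresses assumed, not quoted
 * from a manual): where to read from, how much, and a start bit. */
#define DMA_GIF_MADR (*(volatile uint32_t *)0x1000A010)
#define DMA_GIF_QWC  (*(volatile uint32_t *)0x1000A020)
#define DMA_GIF_CHCR (*(volatile uint32_t *)0x1000A000)

static uint64_t packet[2048] __attribute__((aligned(16)));

void gl_draw_arrays_sketch(const uint64_t *vertex_qwords, int qword_count)
{
    int n = 0;

    /* 1) Build the command packet: a tag describing the primitive,
     *    followed by the vertex data the GPU pulls straight from RAM.
     *    (A real GIF tag packs more fields; this is just the shape.) */
    packet[n++] = (uint64_t)qword_count | (1ULL << 15); /* loop count + end bit, placeholder layout */
    packet[n++] = 0;                                    /* register descriptors omitted */
    for (int i = 0; i < qword_count * 2; i++)
        packet[n++] = vertex_qwords[i];

    /* 2) Point the DMA channel at the packet and start the transfer. */
    DMA_GIF_MADR = (uint32_t)(uintptr_t)packet;
    DMA_GIF_QWC  = n / 2;          /* length in quadwords */
    DMA_GIF_CHCR = 0x100;          /* "start transfer" bit, placeholder value */

    /* 3) Return immediately; the DMA unit feeds the GPU in the background. */
}

No mode switch, no parameter validation, no scheduler involvement: build the packet, kick the hardware, and hand control straight back to the game.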
And even with all that, a fairly optimized 700-800 MHz PC with a GeForce 4 can run some pretty impressive graphics and push at LEAST 30 fps, either being competitive with or beating all but the most painstakingly hand-optimized console titles! Remember to set the resolution to 640 x 480 and the texture detail to medium or low to be on equal terms with a console. To ease the memory requirements that a PC has over a console, you can kill everything but the bare minimum. That includes even the most 'vital' services that allow the PC to be a PC instead of a game console, and the 30 or so megs the shell (explorer.exe) takes up by itself; just alt-tab back from taskmgr. If you don't think this is true, make sure you run the PC on TV-out instead of on a VGA monitor. You'll be surprised.