You have some liberties when games are designed for your system and then cobbled together for another. Games are designed around a specific hardware set, at a certain resolution, with a specific fps target. You can optimize perfectly for the exact piece of hardware you're targeting; with PC you just add a couple of generic tiers and port it out.
As far as bandwidth goes, the higher your res, the more you need; the lower your res, the less you need. So comparing the specific specs of the 7900GT to the PS3 is disingenuous, given the reasons I've already stated.
Those comparisons just show why they resorted to doing the wrong thing: lack of bandwidth. The resolutions of the displays are largely identical between PCs and consoles. You need more bandwidth not merely for higher resolutions, but simply to process more values per pixel. Add a normal map? More bandwidth needed at the same res. Got a window, or water (transparency)? Ditto. And so on.
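To make that scaling concrete, here's a rough back-of-the-envelope sketch; the byte-per-pixel figures and frame rate are illustrative assumptions, not any console's actual buffer layout:

```cpp
// Rough framebuffer-traffic estimate: bytes written per frame grow with
// resolution AND with the number of per-pixel values (color, depth, normals,
// etc.). All figures are illustrative assumptions, not real console specs.
#include <cstdio>

int main() {
    struct Config { const char* name; int w, h, bytes_per_pixel; };
    const Config configs[] = {
        {"720p, color+depth (8 B/px)",        1280,  720,  8},
        {"720p, + normals & spec (16 B/px)",  1280,  720, 16},
        {"1080p, color+depth (8 B/px)",       1920, 1080,  8},
        {"1080p, + normals & spec (16 B/px)", 1920, 1080, 16},
    };
    const int fps = 60; // assumed target frame rate

    for (const Config& c : configs) {
        double bytes_per_frame = double(c.w) * c.h * c.bytes_per_pixel;
        double gb_per_second   = bytes_per_frame * fps / 1e9; // write traffic only
        std::printf("%-36s %7.1f MB/frame  ~%5.1f GB/s at %d fps\n",
                    c.name, bytes_per_frame / 1e6, gb_per_second, fps);
    }
    return 0;
}
```

Note that doubling the per-pixel payload at a fixed resolution costs roughly as much extra traffic as the 2.25x pixel jump from 720p to 1080p, which is the point about normal maps and transparency above.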
Last time, Sony didn't have enough bandwidth for the games of its release era, while Xbox devs had to hope they could make good use of the small eDRAM.
Nintendo's decision to wait it out a few years (SD-only) was a very good one, as far as the games went (though they're set to become like Atari, or Sega, now).
They have less GPU power now, relative to the current PC market, than they did back then. What is being discussed is that, like before, PC games will quickly eclipse the settings console games ship with, providing higher IQ due to having more power.
That didn't hurt the PS2, did it? The games need to be made within the scope of the hardware they run on. They tried to believe they had some kind of major edge, instead of focusing on making the games look good within the limitations they had, and the results have rarely looked good.
What inefficiencies, draw calls? You have to do them regardless of API.
That is and has been a red herring since the start.
DirectX 9 required draw calls, often flushing to GPU memory, for the simplest of things, like creating a texture object. It's not that you don't have to make them; it's that the real issue was supporting Windows XP and DX9 in the game engine. DX9 required far too many such calls as scene complexity grew, while DX10 can batch most of that work into a small number of calls. The combination of OpenGL becoming a small niche for PC gaming (and the XB360 not using it) and Vista not being a major success is basically the draw-call limitation issue. It's not there because PCs are x86, or because some GPU feature is lacking, but because developers needed to support WinXP/DX9 on the Windows/PC side of things, and/or were continuing to use an older engine that they had lots of custom tools for.
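For illustration only, here's a schematic sketch of the batching idea. The Mesh type and submit_draw() are hypothetical stand-ins, not real D3D9/D3D10 calls, and the per-object vs. batched paths are loose analogies for the two API styles:

```cpp
// Schematic sketch of draw-call batching; the Mesh type and submit_draw()
// are hypothetical stand-ins for real graphics API calls (this is NOT actual
// D3D9/D3D10 code). The comparison is per-object submission vs. state-grouped
// submission.
#include <cstdio>
#include <map>
#include <string>
#include <vector>

struct Mesh { std::string material; int triangles; };

// One API submission; the cost being amortized by batching is the per-call
// driver/runtime overhead, not the triangle count itself.
static int submit_draw(const std::string& material, int triangles) {
    std::printf("draw: material=%s, triangles=%d\n", material.c_str(), triangles);
    return 1; // one draw call issued
}

int main() {
    const std::vector<Mesh> scene = {
        {"brick", 500}, {"brick", 700}, {"glass", 200},
        {"brick", 300}, {"glass", 150}, {"metal", 900},
    };

    // Per-object submission: call count grows with scene complexity.
    int naive_calls = 0;
    for (const Mesh& m : scene) naive_calls += submit_draw(m.material, m.triangles);

    // Batched submission: group by shared state (material here) and submit
    // each group once, so call count grows with the number of states instead.
    std::map<std::string, int> batches;
    for (const Mesh& m : scene) batches[m.material] += m.triangles;
    int batched_calls = 0;
    for (const auto& b : batches) batched_calls += submit_draw(b.first, b.second);

    std::printf("per-object: %d calls, batched: %d calls\n", naive_calls, batched_calls);
    return 0;
}
```

The saving isn't in triangles drawn (both paths draw the same geometry); it's in the per-call runtime/driver overhead, which is what blew up under DX9 as scene complexity grew.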
The question is whether the API overhead causes a noticeable loss in GPU power: given the same GPU and a modern processor, would the PC fail to match the IQ, res, and FPS offered by the exact same GPU in the console?
But that won't happen, will it? First, we won't have the same GPU: ours typically have more bandwidth to play with. Second, we'll typically get far more powerful ones before games that can look better than the last console gen come out (a trend that using plain x86 might change). It's an enticing idea, but our constant churn of hardware, at some significant efficiency cost (mostly in power consumption), nets us better results sooner, every time, and will continue to do so. We had more raw power on the day of release last time, and will this time, too. At least this time, the consoles will have lots of VRAM, and the CPUs aren't paper tigers.
The performance loss from AA is huge, and even AMD, which is in both consoles, is trying to push Forward+ over deferred rendering.
Making scenes look better has a performance cost? Who knew? AA only remains "free" until some games start giving a given GPU a workout; it's happened over and over again on our PCs. There are also other ways to skin the cat: tiling is coming back, too, with DX10+ (IIRC, Frostbite is going this route). Point is, deferred rendering hasn't been a "bad on PC" vs. "good on console" thing. Pretty much every game that's made a point of it has ended up with much better lighting and/or other special shader effects, and the performance efficiency is also good on the PC. It's a trade-off that is orthogonal to PCs vs. consoles: it eats bandwidth and GPU time on both, for some combination of reduced GPU computational load, sometimes smaller intermediate buffers, and/or just better imagery.
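As a rough illustration of that trade-off, here's a sketch where deferred cuts repeated per-light shading of geometry but pays for it in G-buffer traffic. All figures (light count, overdraw, G-buffer size) are assumptions, not measurements from any real engine:

```cpp
// Rough sketch of the forward vs. deferred trade-off: deferred trades repeated
// per-light shading of geometry for extra G-buffer bandwidth. All figures are
// illustrative assumptions, not measurements from any real engine or game.
#include <cstdio>

int main() {
    const double pixels      = 1280.0 * 720.0; // assumed render resolution
    const int    lights      = 32;             // assumed dynamic light count
    const double overdraw    = 2.5;            // assumed shaded fragments per pixel (forward)
    const int    gbuffer_bpp = 20;             // assumed G-buffer size in bytes per pixel

    // Classic forward: lighting work scales with shaded fragments x lights.
    double forward_evals  = pixels * overdraw * lights;

    // Deferred: geometry is shaded once into the G-buffer, then lighting runs
    // once per visible pixel per light; the G-buffer write plus re-read is
    // where the extra bandwidth goes.
    double deferred_evals = pixels * lights;
    double gbuffer_mb     = pixels * gbuffer_bpp * 2.0 / 1e6; // write + read, MB/frame

    std::printf("forward : %.0f M light evaluations per frame\n", forward_evals / 1e6);
    std::printf("deferred: %.0f M light evaluations per frame, plus ~%.0f MB/frame of G-buffer traffic\n",
                deferred_evals / 1e6, gbuffer_mb);
    return 0;
}
```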
The point is consoles aren't rendering a 1080p image on a 1080p screen.
Exactly. They pushed for new hardware, marketed the use of it, and then decided to use it like crap. If they couldn't do all the fancy GPU work at native res, then they shouldn't have been doing it. 1024x600, 1152x640, 960x544, etc.? Inexcusable. They should have either done SD-only (large amounts of upscaling can look OK; low amounts look bad) or native HD. The issue with resolutions is that there are a small number of acceptable ones (480p 4:3, 480p 16:9, 720p, 1080p), and one ideal (your TV's, which is probably 768 lines, screwing you over either way).
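For reference, a quick pixel-count comparison of those sub-native modes against 720p and 1080p (the resolution list is from the post above; the percentages are just arithmetic):

```cpp
// Pixel-count comparison of the sub-native render resolutions mentioned above
// against the 720p and 1080p targets they were scaled up to.
#include <cstdio>

int main() {
    struct Res { const char* name; int w, h; };
    const Res modes[] = {
        {"960x544", 960, 544}, {"1024x600", 1024, 600}, {"1152x640", 1152, 640},
    };
    const double px_720p  = 1280.0 * 720.0;
    const double px_1080p = 1920.0 * 1080.0;

    for (const Res& r : modes) {
        double px = double(r.w) * r.h;
        std::printf("%-9s = %4.0f kpx, %.0f%% of 720p, %.0f%% of 1080p\n",
                    r.name, px / 1e3, 100.0 * px / px_720p, 100.0 * px / px_1080p);
    }
    return 0;
}
```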