What you are saying is true, but what happens when AMD needs to produce a new architecture on a new process? Doesn't that involve a steep learning curve?
Why would you need to produce a new arch on a new process? Intel, the undisputed king of R&D, process, and manufacturing, doesn't do that. It didn't for Sandy Bridge, for example: it shipped Westmere on 32nm first, then built Sandy Bridge on the same, already-proven 32nm process. And didn't Anand already explain this? AMD climbed the 40nm learning curve on the 4770, and the lessons it learned there carried over to the 5xxx series, which helped a lot compared to nVidia, which only trialed small, uncomplicated chips on 40nm.
Anyway, forget about that. In fact, forget my entire first paragraph. Consumers and OEMs just don't care about invisible things like "architecture". They only care about tangibles: performance, power draw, TDP, silence/loudness, and so on. Architecture is a black box; we don't care what goes on inside it, only what we get as a result of it.
I was not saying "architecture is not important". Rather, I was saying that "new architecture" is not a feature by itself. More performance than last gen is a feature. Lower power draw is a feature. Even quieter fans can be a feature. But "new arch", in the absence of any of those tangible features, is not a feature in itself. It's not something you can use in isolation to judge two products. It's only better if the tangible features it delivers are actually better.
That is why I quoted you and responded in the first place: you agreed with an obvious trolling post. Architecture, by itself, means nothing if it does not result in the tangible features that matter. So saying "yeah, X company is behind" solely because of an older architecture, despite obvious advantages in those tangible features, is completely misguided. In a specific context such as a GPGPU/compute card, sure, arch vs. arch, AMD is behind by a lot. But the statement was made with no such context, and without one it makes no point at all.
As a developer myself, and a Linux user, I have a soft spot for nVidia. Things were painless with the 8600GTS I bought three years ago. With my newer 4770 there are some things I've had to sacrifice: obviously some nVidia-specific features, but also some things that are just Linux driver issues, not missing nVidia features. For example, I can't even play Battle for Wesnoth (not even a 3D game) without it corrupting the graphics driver; the screen stays color-inverted until I restart X. That never happened on my 8600GTS, and I ran several versions and distributions of Linux on it, all painless. I even used to run a 3D compositing desktop on it, yet another thing I've had to give up on the 4770 under Fedora 13 (thank you, AMD, for only supporting the latest Ubuntu releases, leaving my Fedora box unsupported because it uses package versions that haven't made it into Ubuntu yet).
But as a gaming card, I can't fault my 4770. And that's the heart of the issue here. Yes, GPGPU may take over the world someday. And yes, I love nVidia because it makes the Linux computing experience easier. But the thread isn't about who has the better GPGPU (obviously nVidia), or better Linux support (nVidia, no contest), or who loves developers more. The thread is about market share, and that is the domain of OEMs and consumers, who don't give a crap about anything that isn't a tangible feature, don't care about Linux, and don't care what we enthusiasts think. And that's where AMD is ahead right now, and has been for the better part of a year.
You should have known better than to bite on that troll post.