Guys, do you realise that the point of the audit was to inspect the source code of a product whose source code was already freely available?
Also, consider the objectives of a spy agency and therefore the techniques they're likely to use. E.g. would they try a completely covert attempt against the project (i.e. one where none of the TC developers know of the attempt)?
Targeting a specific machine's compiler is largely pointless, as the code can be (and, over the project's lifetime, likely was) compiled on other machines by the developers, though the approach at least has the merit of greater subtlety and makes some sense as part of a completely covert attempt against an open-source-style project. A rough sketch of what that attack even means follows below.
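To make the compiler idea concrete: this is essentially Ken Thompson's 'trusting trust' attack, where the compiler recognises the code it's compiling and quietly substitutes something weaker. A toy sketch in C (every identifier and the pattern being matched are invented for illustration; this is not TC code):

    /* Toy sketch of a 'trusting trust' style compiler compromise.
     * All identifiers here are hypothetical. */
    #include <stdio.h>
    #include <string.h>

    /* Imagine the compromised compiler calling this on each source
     * line before actually compiling it. */
    static void maybe_subvert(char *line, size_t size)
    {
        /* If the line looks like the key-derivation call, silently
         * gut the iteration count so derived keys become cheap to
         * brute-force. A diff of the source shows nothing. */
        if (strstr(line, "derive_key(password, salt, 2000)") != NULL)
            snprintf(line, size, "derive_key(password, salt, 1);\n");
    }

And that is exactly why it doesn't scale: recompile the same source anywhere else with a clean compiler and the backdoor simply isn't there.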
Or would they try to coerce a developer into adding 'compromise code' to the project? If you have to coerce someone who is obviously quite principled (participating in an open-source encryption project, what else would you expect them to be?), do you honestly expect them to do an amazing job of covering up the compromise code? Furthermore, is it reasonable to assume that a genuinely subtle compromise could be produced by an average developer with (probably) no experience in writing malicious code that has to pass for normal code when others read and contribute to it?
Or would they introduce their own developer into the project, one skilled at writing malicious yet subtle code that has to pass for normal code when others read and contribute to it? What happens when someone spots what appears to be superfluous code in a module that they know has a bug in it? What happens when the malicious code itself causes a bug and someone else spots it?
Next, what would be the objective of the code? AFAIK TC has very little code with networking functions, so some kind of network broadcast every time a new volume is created would surely be easy to spot in a code audit (as well as being easy to spot by anyone watching the network stack of a client running TC). The only approach I can think of that is remotely subtle would be to introduce a flaw in the encryption that makes some element easily predictable, thereby either making it very short work to brute-force or at least shortening the time by an order of magnitude. While IMO this is the most likely objective, consider that the encryption element of TC is the 'bread and butter' of the project; it would therefore be the most actively developed part, and hence the area where hostile code (or, again, apparently superfluous code) would be most likely to be spotted.
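To illustrate what a 'predictable element' might look like (a purely hypothetical sketch, not anything found in TC): a salt generator whose output looks random but is fully determined by a tiny seed:

    /* Hypothetical 'predictable element' backdoor -- not TC code.
     * The 64-byte volume salt looks random, but is fully determined
     * by a 16-bit seed, so an attacker who knows the scheme only has
     * to try 65,536 candidate salts instead of 2^512. */
    #include <stdint.h>
    #include <time.h>

    void get_volume_salt(uint8_t salt[64])
    {
        uint16_t seed = (uint16_t)time(NULL); /* only 16 bits survive */

        /* Expand the tiny seed with a cheap LCG so the bytes pass a
         * casual 'looks random' inspection. */
        for (int i = 0; i < 64; i++) {
            seed = (uint16_t)(seed * 31337u + 7u);
            salt[i] = (uint8_t)(seed >> 8);
        }
    }

Nothing about that function screams 'backdoor' to a casual reader, yet it collapses the salt space enough to make brute-forcing practical, which is also exactly why it sits in the part of the code that reviewers stare at hardest.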
One other small point: would a spy agency want to target just a specific part of the project, say the code that only ends up in Windows builds, or would they go for all the platforms the project caters for? Going for Windows alone would probably make the most sense: the Windows-specific code wouldn't get as much attention as the code in the main project, yet it would still probably reach 80% of clients.
Another small point - a project that's being worked on by a group of developers is likely to be fairly liberally commented to make it easier to come back to. Even for my own projects (for which I'm the only contributor), I comment any code I consider vaguely complex with an explanation of why I did <something unusual>, to reduce the 'WTF' factor when I come back to it later. So the commented explanations for seemingly odd code would need to make sense to another person contributing to the code, as in the sketch below.
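In other words, the attacker would also have to write a cover story good enough to survive review. Reusing the hypothetical weak-seed idea from above:

    #include <stdint.h>
    #include <time.h>

    static uint16_t get_rng_seed(void)
    {
        /* Truncate to 16 bits: the full value overflowed the legacy
         * header field on some older builds. */
        return (uint16_t)(time(NULL) & 0xFFFF);
    }

Here the comment is the attack surface: it sounds plausible, it costs a reviewer real effort to disprove, and its actual effect (discarding most of the entropy) goes unmentioned. Whether an average coerced developer could write comments that good, consistently, is another matter.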
Let's say a completely covert attempt had been successful and was then spotted by the developers: wouldn't they announce it and welcome an audit, while stating that TC is offline until the audit is complete?
IMO it would be difficult to actively maintain a compromised open-source-style project. The most likely objective of this whole debacle was to stop people using TrueCrypt, and to achieve that, IMO, an agency would lean on the key developers in such a way that they felt they had to either co-operate or kill the project - so they did the latter.