Originally posted by: VirtualLarry
"The fact that MS has taken that path seems to me an indication that it's too difficult (or too expensive, in terms of software-development ROI vs. what their products sell for) to implement the "right" fix, so instead MS band-aids stuff. At least, it seems that way."

Well, I'm sorry it seems that way. All new code, starting with XP and moving forward, has been written with these security principles in place.
One problem, however, is that there is a huge amount of legacy code in Windows, and some of it has, incredibly, managed to escape thorough security review until now. Server 2003 was really the first release where all unnecessary features/services were aggressively removed or disabled by default. In XP and XP SP1, these principles were known, but their implementation was not aggressive enough: decision makers refused to sacrifice backwards compatibility and "ease of use" for security. That has changed now; every feature is challenged on whether it should be permitted to be "on" by default. Backwards compatibility (and the inevitable "old code" that goes along with maintaining it) is now seen as a liability, due to the security ramifications, rather than a positive feature requirement. Similar practices are shared by the OpenBSD team too, if you recall.
So with all of this legacy code, Microsoft knew that somewhere in the 20+ million lines of code there would be one or more security problems missed by the security reviews. Perhaps a review hadn't been done yet, or perhaps it was done in a less-than-thorough fashion. Regardless, Windows would have some security flaws, no matter how much time and energy was spent trying to catch every bug in the trenches.
So, why not do something about those vulnerabilities that slip through the cracks? No matter what you say about them being "ugly" or "band-aids", just about every security expert agrees that these mitigation steps increase the security robustness of Windows to some degree. Isn't that better than the alternative?
The other advantage to mitigation steps is that they're "low-hanging fruit": they can be implemented once for the entire system, and still catch a large number of nasty vulnerabilities across all kinds of different subsystems. Whereas it might take years to pore over the code of every subsystem in Windows line by line with a fine-toothed security comb, it might take only a few months to develop some stopgap mitigation techniques that buy time for the thorough reviews/rewrites to take place.
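To make the "implemented once for the entire system" point concrete, here is a rough sketch of the stack-cookie idea behind a compiler-level mitigation like MSVC's /GS option. This is not Microsoft's actual implementation (those details live in the compiler and CRT); the function name `copy_name` and the constant cookie value are invented for illustration, and in real builds the cookie is randomized at process startup.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative only: a hand-written version of the check that a
 * compiler option like /GS inserts automatically into every function
 * with a stack buffer. The constant stands in for a per-process
 * random value chosen at startup. */
static const uint32_t process_cookie = 0xBB40E64Eu;

size_t copy_name(const char *src)
{
    uint32_t cookie = process_cookie;  /* compiler places this between the
                                          buffer and the return address */
    char buf[16];

    /* A bounded copy; the mitigation exists for the unbounded strcpy()
     * calls that slip through code review. */
    strncpy(buf, src, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';

    /* Function epilogue: an overflow that ran past buf would have
     * clobbered the cookie on its way to the return address, so abort
     * instead of returning through attacker-controlled data. */
    if (cookie != process_cookie)
        abort();

    return strlen(buf);
}
```

The point is that this check is emitted by the compiler for every function in the tree, which is why a single mitigation can cover millions of lines of unreviewed legacy code at once.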
SP2 included some of both. There were lots of security fixes at the code level; there were many configuration changes to turn things off by default; and mitigations like the firewall and DEP were added to catch things that had yet to be uncovered by the review/test process.
Originally posted by: VirtualLarry
"Things like the XP SP2 firewall and DEP/NX are effectively band-aids. I'm not saying that they are bad things, but MS should focus on fixing the core code too, just as much, if not more so, than implementing mitigating workarounds."

We are. I guess part of the problem is that this work is largely invisible to you. No matter how much work we put into it, you won't see it. The only objective metric by which you can attempt to gauge security-fundamentals work is the number of vulnerabilities found in the latest releases of Windows (XP SP2, Server 2003).
But in my experience, there is only a very loose correlation there. In fact, over the past 2-3 years the number of vulnerabilities reported seems to have been inversely proportional to the amount of work that goes into preventing them. This, of course, is a historical trend that may not continue, but from my anecdotal viewpoint it is puzzling.
Luckily, the vulnerabilities do seem to be getting more and more obscure, which in my opinion means that hackers are running out of "low-hanging fruit" themselves and have now turned to more creative attacks.
I honestly don't know of a good way for us to announce the amount of work we do in this area. It is fundamental, it is time-consuming, and it is absolutely seen as mission-critical work.
Ok, point noted, but I did say "maybe". 🙂 The only thing that I know about MS and Longhorn and offshoring, is the pair of news articles that I've read in the online press about it.
Primary development on Windows code-named Longhorn is done in-house in Redmond. I doubt very highly that management is trying to hide outsourcing from us by pulling some huge trick on us. (I suppose some of these teams that I never physically interact with could theoretically be a single full-time developer here in Redmond secretly checking in code provided to him by a team in India. But I think it would be highly improbable given the fine-grained tracking system by which the author of every snippet of code is maintained, and widespread or voluminous check-in mails coming from one person would start to look very suspicious.)
Now, for tool projects or test case development, I have seen some limited outsourcing. But from what I can gather, it is mostly seen as experimental, a way to test the waters. More outsourcing may be on the horizon, or it may not be. I don't know. All I can say with certainty is that the outsourcing going on right now is very minimal in general, and in primary feature development simply nonexistent.