Software design schedules (e.g. Half-Life 2) vs. hardware design schedules (e.g. GPUs)

pm

Elite Member Mobile Devices
Jan 25, 2000
I have always wondered about the differences in project management between hardware companies and software companies. I was reading a thread about GPU designs from ATI and nVidia hitting store shelves in May, and in the thread there was a brief discussion of Half-Life 2 and when it's likely to arrive on store shelves - or to ship with the cards.

And this got me thinking: here we have two companies (ATI & nVidia) working on projects with probably around 70 engineers each, doing a task that is essentially like software development (the majority of the work in designing graphics chips nowadays involves RTL coding and debug, which is very similar to writing and debugging software). They are coordinating with other groups on similar schedules (the drivers group, for example) and with other companies (the company providing the package, the fab vendor making the silicon, and all of the OEMs designing boards in parallel with the silicon design). With all of this complexity and all of these people working in parallel, the designs are coming out within a quarter of a schedule that was set at least 18 months ago - and more likely 24 to 36 months ago.

Meanwhile we have a game company (Valve) designing a game with a team of... let's guess 50 programmers. They are writing software, they are not coordinating deliverables with many other companies (OK, maybe the box and CD printers), and they are steadily slipping quarter after quarter.

I don't get it. Three teams doing very similar jobs (programming), starting from similar schedules, and one can't come remotely close to its original project management schedule while the other two are really close. Looking at Microsoft, I see a similar problem with the software they develop. One might argue that programming software and designing CPUs are fundamentally different tasks, but I design chips and I program fairly complex software, and I see a lot more similarities than differences.

Personally, I think that software is more tightly coupled with marketing, and there is too strong a tendency toward "feature creep". With all of the teams working together on hardware, and with the fixed costs of manufacturing (e.g. fabs, contracts with OEMs), there are fairly well-defined windows in which features are "frozen". I don't see similar discipline in software development. Features are added and changed without deep thought about the long-term repercussions for the schedule - or for the long-term health of the product (e.g. patches to fix security holes, improve performance, etc.). I have to wonder: why isn't there the discipline to freeze features in a timely manner in software design?

Anyone else have any thoughts on the subject?
 

Matthias99

Diamond Member
Oct 7, 2003
Personally, I think that software is more tightly coupled with marketing, and there is too strong a tendency toward "feature creep". With all of the teams working together on hardware, and with the fixed costs of manufacturing (e.g. fabs, contracts with OEMs), there are fairly well-defined windows in which features are "frozen". I don't see similar discipline in software development. Features are added and changed without deep thought about the long-term repercussions for the schedule - or for the long-term health of the product (e.g. patches to fix security holes, improve performance, etc.). I have to wonder: why isn't there the discipline to freeze features in a timely manner in software design?

*Good* software engineering practices strive to get away from problems like those. Feature creep is a serious problem in any large, complex system, and so is inadequate planning (and often one contributes to another). If you sit down and do a very thorough design of *exactly* what you want your system to do -- and stick to it -- you're less likely to be tempted to add more features as you go along. Designing by the seat of your pants is likely to lead to lots of "Hey, I think I can make it do nifty thing X if I just spend a day or two working on it!" as you go. And doing that increases complexity, tends to introduce more bugs, and makes testing and debugging take longer than planned. This makes on-time projects late, and late projects later.

In "The Mythical Man-Month", Brooks discusses three phases in how people have looked at software design. At first, people talked about "writing" programs: you'd sit down, type out your program, and then run and debug it. This was the dominant way of doing things for a good decade or so, mostly because with the archaic computers and tools of the day it was too cumbersome to write extremely large pieces of software. But as time wore on, people developed things like compilers, high-level languages, and timesharing systems, which made it possible to write HUGE programs that required teams of people working for months at a time. At this point people started coming up with rules and practices for controlling the process, based largely on existing engineering principles (as many early computer programmers had been trained as electrical engineers). This is when people started talking about "building" programs, much as you might construct a building or a car: you laid out the plans, set up schedules, and got a team of people to work together to build the different components and then assemble them. The past 30 years or so have shown that there are still problems with this approach (such as the ones you have brought up), but it is still largely how large companies produce software, and for truly enormous projects it works best.

For smaller pieces of software (or smaller pieces of large projects), though, some authors have lately been proponents of what could best be described as an organic way of creating software. The idea is to start by creating both a solid plan and a core framework for the program, and then "grow" the software by adding and testing features incrementally. This process has several advantages: it copes far better if you need to change your goals partway through, it tends to show continual progress (which is reassuring for the programmers and can provide valuable feedback to the customers early on), and - perhaps most importantly - if done right, you always have a working program with at least some tested functionality. With traditional software engineering practices, if you're 75% done, you're just as likely to have 25% (or less) of the features working as 75%; with this newer approach, you're likely to be missing only some of the peripheral features rather than large chunks of the program's core functionality. That makes it easier to "get it out the door" if need be without it being a total disaster. This philosophy is essentially the core of the "Extreme Programming" movement. It does have drawbacks, though, such as a tendency to stray from the original plan if you're not careful, and it doesn't scale well to groups of more than 10 or so people. For larger projects, you need a more rigid framework to produce reliable results.
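To make the "growing" idea concrete, here is a deliberately tiny Python sketch (all names hypothetical, not anyone's real codebase): start with a working core, add one feature at a time, and re-run every test after each step, so the program is shippable at every point along the way.

```python
# Toy illustration of incrementally "grown" software: a calculator built
# as a dispatch table of operations, extended one feature at a time.

def make_calculator():
    """Iteration 1: the core framework, shipping with one tested feature."""
    return {"add": lambda a, b: a + b}

def calculate(ops, name, a, b):
    """Look up an operation by name and apply it."""
    if name not in ops:
        raise ValueError(f"unsupported operation: {name}")
    return ops[name](a, b)

# Iteration 1: the core works before anything else is added.
ops = make_calculator()
assert calculate(ops, "add", 2, 3) == 5

# Iteration 2: grow the program by one feature, then re-test everything.
ops["mul"] = lambda a, b: a * b
assert calculate(ops, "add", 2, 3) == 5   # old feature still works
assert calculate(ops, "mul", 2, 3) == 6   # new feature works

# If the schedule slipped right now, this could still ship: what's missing
# is peripheral ("div", "pow", ...), not the core functionality.
```

The point of the sketch is the ordering, not the calculator: at 50% "done" you have 50% of the features working and tested, rather than all of the features half-working.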

Anyway, that's my (and Brooks') take on it. IMHO, we've still got a long way to go in this field.

As for Valve and Microsoft in particular, they both know that whatever they produce will sell, and probably sell very well. This tends to produce an attitude that the product will ship "when it is done", which inevitably stretches development times. Valve also suffered a serious setback with the theft of some of their source code a few months back, although how much that should really delay publication is debatable.

 

borealiss

Senior member
Jun 23, 2000
I think you have to understand the industries you are comparing. The hardware industry (take CPUs, for example) is a sink-or-swim one. You either survive by carving out whatever niche your CPU fits into, or you produce something competitive with or better than your competition. Transmeta has done this to an extent with its low-power line of CPUs. Graphics is another example: we've seen, with some regret, the fall of companies such as 3dfx. In the hardware industry, if your product doesn't sell, the company goes under, so I don't think the hardware industry is immune to the same plight the software industry experiences. What's left after all this company-evolution, so to speak, are companies that know how to do things right and stay competitive, which includes meeting scheduled product launches.

With a lot of software companies, you see fly-by-night institutions that are here one day and gone the next because the investors aren't happy. But the companies that do do things correctly stay in the market for a long, long time, and good software has a far longer life than hardware. Today's CPUs and GPUs will become obsolete, but good software is golden; written correctly, it can have an extremely long life. A good example is the games Blizzard cranks out: after 5+ years, people are still playing games such as StarCraft like mad. The same goes for the original Half-Life, albeit in the form of Counter-Strike and other mods. Companies such as Oracle, or IBM's DB2 team, are more concerned with the longevity of their product and may not be as antsy as hardware companies to get products out the door. If nVidia slipped a product launch date and ATI leapfrogged them, or vice versa, the performance crown could be lost, which could mean a loss in sales. With games specifically, there's not so much direct competition: chances are people who buy Half-Life 2 will also be highly interested in Doom 3. The two don't directly compete with one another; they share space in the same market segment.

Many software companies also simply do not have the resources the Intels and IBMs of this world have, so the validation process for their products has many holes in it. One might argue that in Microsoft's case resources should not be a problem, but they produce software that runs on 97% of all desktops out there, and the shelf life of their products is measured in years. They just dropped support for Win9x, a product with a shelf life exceeding five years. There is also more pressure on hardware companies to get things right the first time (or another Pentium recall could occur), whereas software companies can release patches and service packs to fix bugs as they crop up after the product is released. Software companies know they can afford product delays because they compete indirectly, which is why I believe many of them would rather ship a quality product than one full of bugs. Hardware companies are not entirely immune to this either: CPUs and GPUs have errata lists that go on and on, fixed in each new revision.
 

toastyghost

Senior member
Jan 11, 2003
The market analysis aspects of this discussion seem pretty well covered, so I'll stick to my original idea about left-brained versus right-brained thinking as a primary part of the respective natures of these two industries. Hardware design is encoded similarly to software, but software design is an often creatively driven process (especially in the gaming sector, which seems to be the predominant example here). This may seem a little redundant, but it is so with reason: computers compute. That's all they do, and sometimes we overlook this fact. All computer hardware in and of itself serves solely a mathematical purpose. Correspondingly, the purpose of the people creating it becomes almost exclusively mathematical: make sure the damn thing crunches the numbers right. It's obviously more complicated than that, and I'm by no means trying to trivialize the duties or qualifications of those who create hardware. I'm just pointing out that their profession is drastically different in structure from that of people who create software, particularly software whose purpose is to entertain. Entertainment, specifically what a targeted group of people will find entertaining, is an enormous gray area that has not yet been worked down to a precise science, and probably never will be in any of our lifetimes. For this reason, creating (or having a part in creating) a video game is a task clearly better suited to a person who, while understanding the implementation of mathematical concepts, is predominantly a right-brain thinker. Putting enough of these people in a room together will inevitably cause the previously mentioned feature creep, more so when deadlines aren't so great a concern.

Another factor affecting this issue is what a few friends and I have taken to calling the "Blizzard crap-in-a-box philosophy" of game development, which goes as follows: if a software company is consistent enough in the quality of its releases up to a point, then it can pretty much guarantee a certain chunk of market share for subsequent releases, regardless of the quality (or lack thereof) of each product taken individually. Essentially, because Diablo was so damn good, Blizzard could take a crap in a box and it would fly off the shelves, at least at first. Therefore they and other companies with similar records (Rockstar, EA, and slowly but surely Illusion) don't feel the same cramp on their development lifecycle as a newcomer to the industry whose survival depends on meeting certain deadlines and sales criteria.

The above is the result of considering the posts of others in the context of my research in multiple intelligence theory and my experience in the software industry. Interesting topic of discussion; thanks for giving me a ranting target. :)