Readings

Chapter 8 concerned itself with inspections and their value; chapter 9 moves further into software quality management, speaking of both software product quality and software process quality. Crosby defines quality as "conformance to requirements" [Humphrey95], meaning, of course, that a software product which does not satisfy the user is indeed a poor-quality product. Whether a flawless program does the incorrect thing or a defect-prone program does the correct thing (but crashes half the time), the user's needs are not being met. Assuming that the basic requirements are well stated, then, software quality is very much concerned with defects and defect prevention; the more complex the system, the more opportunities defects have to compound one another's effects. The development process risks becoming fixated on the defects scattered throughout the product -- and losing focus on requirements conformance, usability, and the rest.

Since software is produced by a process, process and product quality are closely intertwined, and since individuals inject the defects, it makes good sense to devote a fair amount of the personal software process to defect prevention. Going back to the economics of software quality, Humphrey reiterates the lesson learned in many other places: the earlier you can find and fix a defect, the less effort and time it takes. A good defect prevention strategy, then, focuses on early detection -- and Humphrey places that focus squarely on code reviews.

The numbers Humphrey quotes for the effectiveness of reviews are striking, with yields (the percentage of total defects found by inspections) of between 50 and 75 percent for individual code reviews [Humphrey95]. This is strong stuff when you consider inspections as a "filter" before testing. Humphrey uses the example of a 50 KLOC product entering test with 50 defects/KLOC -- about 2500 defects to find in test, requiring 20,000 or more programmer-hours; a five-person project working constantly "might be able to finish in 18 months" [Humphrey95]. If inspections caught 70 percent of the defects:

    Inspections, at an average cost of 0.5 hours, would find 1750 defects and take 875 hours. The remaining 750 defects would have to be found in test at a cost of about 8 hours each. This whole process would take 6000 hours. While it would still take six to eight months of testing by five engineers, they would save a year.

    --Humphrey95
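
Just to sanity-check the arithmetic for myself, here is a quick back-of-the-envelope sketch (in Python, my choice of notation) that reproduces the figures from the quoted example -- 50 KLOC, 50 defects/KLOC at test entry, about 8 hours per defect found in test, 0.5 hours per defect found in review, and a 70 percent inspection yield. The figures are Humphrey's; the script is just my arithmetic.

    # Reproduce Humphrey's 50 KLOC example; figures from the quoted passage.
    KLOC = 50
    DEFECTS_PER_KLOC = 50          # defects present at test entry, per KLOC
    HOURS_PER_TEST_DEFECT = 8      # find-and-fix cost per defect in test
    HOURS_PER_REVIEW_DEFECT = 0.5  # find-and-fix cost per defect in review
    REVIEW_YIELD = 0.70            # fraction of defects caught by inspections

    total_defects = KLOC * DEFECTS_PER_KLOC                    # 2500

    # Test-only strategy: every defect is found the hard way.
    test_only_hours = total_defects * HOURS_PER_TEST_DEFECT    # 20,000 hours

    # Inspection-first strategy: most defects are filtered out cheaply.
    found_in_review = REVIEW_YIELD * total_defects             # 1750
    found_in_test = total_defects - found_in_review            # 750
    review_hours = found_in_review * HOURS_PER_REVIEW_DEFECT   # 875
    test_hours = found_in_test * HOURS_PER_TEST_DEFECT         # 6000

    print(f"test only:        {test_only_hours:8,.0f} hours")
    print(f"with inspections: {review_hours + test_hours:8,.0f} hours")
    print(f"hours saved:      {test_only_hours - review_hours - test_hours:8,.0f}")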

A year! That's worthwhile, certainly. To help quantify that sort of effectiveness, Humphrey provides us with a number of new metrics, from failure cost-of-quality to the appraisal/failure ratio to yield for individual process development steps. By adding "defect filter" steps to the development process (such as design and code reviews), one can monitor the process, check the effectiveness (yield) of each defect filter, watch the resources involved (the appraisal/failure ratio), and "tune" the process to get the most effect for the least expenditure of resources. One way to tune, by managing yield per phase, takes the sensible approach of targeting inspections at the most likely errors in a given phase (reviewing the design at different levels for major architecture versus logical correctness, reviewing code for code-specific items, and so on). By focusing not only on the defects themselves but also on their root causes, the individual practitioner can attack defects at the lowest level possible, and monitor his own effectiveness in doing so.
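
To make those metrics concrete before I try them, here is a rough sketch of how I expect to compute per-phase yield and the appraisal/failure ratio from time and defect logs. The phase data below is made up for illustration, and the formulas follow my reading of Humphrey: a phase's yield is the defects it removes as a percentage of the defects entering the phase plus those injected during it, and the appraisal/failure ratio compares review time to compile-plus-test time.

    from dataclasses import dataclass

    @dataclass
    class Phase:
        name: str
        hours: float   # time spent in the phase
        injected: int  # defects injected during the phase
        removed: int   # defects found and fixed during the phase

    # Hypothetical numbers for one small program -- not Humphrey's data.
    phases = [
        Phase("design",        6.0, injected=5,  removed=0),
        Phase("design review", 1.5, injected=0,  removed=3),
        Phase("code",          8.0, injected=12, removed=2),
        Phase("code review",   2.0, injected=0,  removed=8),
        Phase("compile",       0.5, injected=0,  removed=2),
        Phase("test",          4.0, injected=0,  removed=2),
    ]

    # Yield of each "defect filter": of the defects present at entry plus
    # those injected in the phase, what percentage did the phase remove?
    escaped = 0
    for p in phases:
        at_risk = escaped + p.injected
        phase_yield = 100 * p.removed / at_risk if at_risk else 0.0
        escaped = at_risk - p.removed
        print(f"{p.name:13s} yield = {phase_yield:5.1f}%  ({escaped} defects escape)")

    # Appraisal/failure ratio: review time versus compile-plus-test time.
    appraisal = sum(p.hours for p in phases if "review" in p.name)
    failure = sum(p.hours for p in phases if p.name in ("compile", "test"))
    print(f"appraisal/failure ratio = {appraisal / failure:.2f}")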

As with the other sections, these are intriguing ideas. As I'm about to start using inspections for the first time, I'm terribly curious to see how this turns out.