Chapter 7. Lesson 7: Design and Code Reviews

Table of Contents
Readings
Report R4: Midterm Report
Summary

Understand the importance and basics of design and code reviews.

Read chapter 8 of the textbook

Write report R4: Midterm Report, including design and code-review checklists

Readings

In the classic Shewhart Cycle (plan-do-check-act), most of the PSP activities so far have fallen somewhere in the "check" stage-- taking data on our performance and analyzing that data. While the PSP elements have indeed introduced new tasks, those tasks are primarily concerned with instrumenting the process; the only genuinely new element was a more defined process of estimation.

Chapter 8 of the text moves the process much further into the "act" phase-- taking our data and applying it back into our process-- with the introduction and specification of reviews.

"Doing reviews," writes Humphrey, "is the most important step you can take to improve your software engineering performance" [Humphrey95]. His point of view is echoed by other industry professionals; Steve McConnell, in [McConnell98], writes that "...technical reviews are at least as important in controlling cost and schedule as testing is." On a similar note, T. Capers Jones, in [Jones98], notes that "Formal code inspections are about twice as efficient as any known form of testing in finding deep and obscure programming bugs, and are the only known method to top 80 percent in defect-removal efficiency."

Needless to say, these sorts of claims are very exciting to this young software developer-- very exciting indeed. The introduction of a step which is "twice as efficient as testing" in almost any way is tremendous, considering I'm currently spending about 1/3 of my total process time for C++ in testing (for Eiffel, since I'm only counting code/compile/test, I'm similarly logging about 1/3 of my time in test). Testing is currently the single largest phase of my development process, and it is usually a frustrating ordeal.

Humphrey mentions three types of reviews: inspections (a formal affair with multiple participants and a structured process), walkthroughs (a less-formal process used to educate others and to raise questions or clarify misunderstandings), and personal reviews (a check of your own work to "find and fix as many defects as possible before you implement, inspect, compile, or test the program" [Humphrey95]). Obviously, the PSP, as a "personal" process, will concentrate on the latter.

The statistics Humphrey uses to back his statements are very encouraging, showing data averaged over 12 students which indicate a move from 30 percent of time in compile/test at the beginning of the course down to about 10 percent near the end of the course; the addition of a review process here at lesson 10 marks the beginning of a downward trend. I'm very interested to see if my data are similar. If they are, and if reviews are truly twice as efficient as testing, I could have saved about 70 minutes of testing from program 6a, and possibly saved over half an hour total on the project, an appealing prospect.

Humphrey spends quite a bit of time trying to convince the reader that reviews are more effective than testing, noting that "...your biggest single problem with reviews will be convincing yourself that the time they take is worth the effort" [Humphrey95]. I share his concern, but I'm extremely curious to see if my data support any of his conclusions.

To perform reviews, Humphrey gives us a code review script and another rich set of metrics, which I'm sure I will grow to curse in the next project. However, I can see their basis: many of them have to do with yield (how many bugs are caught by review vs. other phases) and defect-removal leverage (how much more effective reviews are than compilation/testing at finding errors). Since Humphrey asserts earlier that metrics are the best way to justify reviews, the purpose of these metrics seems clear.

Humphrey makes a particular effort to encourage students to develop their own review checklists, and encourages the student to use defect logs as a partial basis for those checklists. The reason is understandable, and in my case, I can easily see possible results; I have suffered, even in these small projects, from repeated defects such as mismatched "const" declarations between header files and C++ classes, or the immortal "I forgot to increment my loop counter" defect in Eiffel; you can bet that those two items will end up on my review checklists!