Chapter 8. Lesson 8: Software Quality Management

Table of Contents
Interlude: And here I am using imperative languages like a sucker!
Readings
Program 7a: Correlation
Summary

Comprehend the basics of software quality management; relate process quality to product quality; use PSP data to measure and track process quality.

Read chapter 9 of the textbook

Write program 7a, using PSP2

Interlude: And here I am using imperative languages like a sucker!

It's a little unusual to do the PSP work in two languages at once, and certainly more work than necessary. However, the industry "standard" language seems to be C++, and I feel it's a poor choice for most applications, so I'm always looking for alternatives. Eiffel is particularly elegant, and it breaks the C++ mixed-procedural-and-OO mold, making for a nice, simple, but powerful language.

However simple and powerful Eiffel is, though, and however familiar I am with C++, the time it's taking to implement these programs makes it very obvious that both languages need some coercion to do what is really relatively simple numeric work. C++ is a systems language, meant to interface with the operating system and let us play with memory at will; Eiffel is designed as a higher-level OO language, with contracting and whole-system optimization.

I was dimly aware of functional languages, and what little I knew suggested they might be a good choice for this work. I had never actually written anything in Haskell, but I had read a bit about it, and since I had an hour or two to spare at work, I coded up a quick implementation of the correlation algorithm in about 45 minutes, tutorial in hand. Not bothering with the I/O section, I jumped straight into the correlation calculation with premade lists of numbers.

Needless to say, I had a fair number of compilation errors, and when I ran the correlation calculation for the first time, I got 0.9543158; the correct answer was 0.9107. I had forgotten to square the result.
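For reference, the formula in question is the standard one; the PSP materials report the squared correlation, r², which is why squaring the unsquared 0.9543158 gives the expected 0.9107:

```latex
r = \frac{n\sum x_i y_i - \left(\sum x_i\right)\left(\sum y_i\right)}
         {\sqrt{\left[n\sum x_i^2 - \left(\sum x_i\right)^2\right]
                \left[n\sum y_i^2 - \left(\sum y_i\right)^2\right]}}
\qquad\qquad
r^2 \approx 0.9543158^2 \approx 0.9107
```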

Let's hold that thought: given the correlation algorithm, and the fact that I had never written anything in Haskell before, I had one error in testing, extremely little code, and a functional (if simplified) program. A very, very rough sketch of the code turned up the correct answer in little time. A quick cleanup of the code is shown below; it calculates the correlation of two lists of doubles, x_terms and y_terms, and displays the result. The total comes to 33 LOC, counting one physical line as one logical LOC. Of these, 8 lines go to the variance and standard deviation calculations, and one function, square, is really unnecessary (I felt the name was easier to understand than the Haskell equivalent of squaring a number).

average     :: [Double] -> Double
average []  = 0
average x   = sum x / fromIntegral (length x)

square 	:: Num a => a -> a
square x	= x^2

dcount    :: [Double] -> Double
dcount xs = fromIntegral (length xs)

variance	:: [Double] -> Double
variance xs	=  a * diffsquares
	           where
		     a = 1/( dcount xs - 1 ) 
		     diffsquares = sum ( map square( diffs_from_mean ) )
		     diffs_from_mean = map ( subtract ( average xs ) ) xs

standard_deviation 	:: [Double] -> Double
standard_deviation x	= sqrt (variance x)

mult_terms	:: [Double] -> [Double] -> [Double]
mult_terms [] []		= []
mult_terms (x:xs) (y:ys)	= x*y : mult_terms xs ys


corr_bottom_term :: [Double] -> Double
corr_bottom_term xs = fromIntegral (length xs) * sum (map square xs) - square (sum xs)

corr :: [Double] -> [Double] -> Double
corr xs ys = square ( top / bottom ) 
             where
               top = dcount xs * ( sum ( mult_terms xs ys )) - ( sum xs * sum ys )
               bottom = sqrt ( corr_bottom_term xs * corr_bottom_term ys )

x_terms :: [Double]
x_terms = [186,699,132,272,291,331,199,1890,788,1601]

y_terms :: [Double]
y_terms = [15, 69.9, 6.5, 22.4, 28.4, 65.9, 19.4, 198.7, 38.8, 138.2]

main :: IO ()
main = do
         putStr "Correlation: "
         putStr ( show ( corr x_terms y_terms ) )
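
The Beta0 and Beta1 linear-regression parameters I mention adding below might look something like this -- a self-contained sketch, with the names and a repeated mean helper being mine rather than lifted from my actual program:

```haskell
-- Sketch of the Beta0/Beta1 (linear regression) additions; the helper is
-- repeated here so the sketch stands alone.
mean :: [Double] -> Double
mean xs = sum xs / fromIntegral (length xs)

-- beta1 = (sum xy - n*mean x*mean y) / (sum x^2 - n*(mean x)^2)
beta1 :: [Double] -> [Double] -> Double
beta1 xs ys = (sum (zipWith (*) xs ys) - n * mean xs * mean ys)
            / (sum (map (^ 2) xs)      - n * mean xs ^ 2)
  where n = fromIntegral (length xs)

-- beta0 = mean y - beta1 * mean x
beta0 :: [Double] -> [Double] -> Double
beta0 xs ys = mean ys - beta1 xs ys * mean xs
```

With the x_terms and y_terms above, these come out to roughly Beta1 = 0.095 and Beta0 = -0.35.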

In its domain, this sort of functional programming is powerful stuff; adding the Beta0 and Beta1 calculations was trivially easy. Flushed with success, I then went on to prove that not everything's easy with Haskell, as I spent two hours trying to figure out monadic I/O in order to load the numbers from a file, with no success. I was similarly unsuccessful in figuring out how to do Simpson integration in a language with no variables or loops. However, I realize these are errors in my understanding rather than errors in the language.

And it does have a great deal to offer: no loop counters to forget to update; no variables to forget to initialize; no complex, branching logic (for the most part). Indeed, once you get past a bit of alienation, I find the Haskell code for the variance calculation, for example, significantly easier to understand than the same code in either Eiffel or C++. Sometimes terse is better. Of course, the only Haskell compiler I could find (the Glasgow Haskell Compiler, or ghc) generated a monstrous 343k executable for this wee program, so there are obviously some tradeoffs.
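For the record, both stumbling blocks turn out to have fairly compact solutions; here's a rough sketch of each (file name, input format, and function names are illustrative, not from my actual attempts) -- reading one number per line with monadic I/O, and Simpson's rule with a list comprehension standing in for the loop:

```haskell
-- Monadic I/O sketch: read one Double per line from a file.
-- Assumes the file exists and contains only well-formed numbers.
loadDoubles :: FilePath -> IO [Double]
loadDoubles path = do
  contents <- readFile path
  return (map read (lines contents))

-- Simpson's rule with no mutable loop counter: build the sample points and
-- the coefficient pattern (1,4,2,4,...,2,4,1) as lists, then sum their
-- products. n must be even.
simpson :: (Double -> Double) -> Double -> Double -> Int -> Double
simpson f a b n = h / 3 * sum (zipWith (*) coeffs (map f points))
  where
    h      = (b - a) / fromIntegral n
    points = [ a + fromIntegral i * h | i <- [0 .. n] ]
    coeffs = [ coeff i | i <- [0 .. n] ]
    coeff i
      | i == 0 || i == n = 1
      | odd i            = 4
      | otherwise        = 2
```

Since Simpson's rule is exact for polynomials up to cubics, something like simpson (\x -> x * x) 0 1 10 should return 1/3 to within floating-point error.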

I can't see myself using Haskell as my primary development language (particularly with highly interactive software-- unless I learn a lot more about interaction and functional languages!), but this is another great example of a tool doing exactly what it was meant to do, and doing it well-- and yet another indication of the value of different languages. Too often software developers seem to get stuck in a rut of using only one tool; knowing what else is out there, and how else one can think, could be a great asset. But on to the readings.