2013-01-20

Software Process Review

Having been asked to help create a testing guidelines document at work - somehow, a colleague asking whether there was a one-page document for coding conventions escalated into having to create a testing standards guide with processes to ensure compliance - I did a bit of searching on TDD and came to Does software testing methodology rely on flawed data? and then to The Leprechauns of Software Engineering, which goes back to some of the original methodology papers and looks for evidence to back them up (but does not find much). First up, I will declare the biases my experience has left me with:
  • I like TDD because, for code with well-defined inputs and outputs, it means you can verify the work you are doing, and because I get interrupted often and the failing test tells me what I need to do next without my having to write lots of notes in my journal (there is a minimal sketch of this after the list). As TDD encourages more of the system to be testable, it also helps the system to be modular and have well-specified behaviour.
  • I don't like 'Scrum' much as it has been used on projects I've been involved in, because it seems very process-heavy and meeting-centric, and leaves little room for individual creativity (if you have a good idea one week into an iteration, you can't add it until the next iteration, by which time it's no longer fresh). Every 'Scrum' project I've worked on has been late and buggy; I can't think of a 'waterfall' or 'iterative' project I've worked on which has been as late or of such low quality. (The methodologies were employed at different companies, so they were not the only variation; also 'Scrum', being heavier than waterfall, was much harder to maintain.)
  • Unit and component testing verifies that the code worked once, producing the outputs expected by the designers. It says nothing about multi-threaded code, and nothing about whether it does what the customer wants - you have to use code review, formal verification and user review/user testing for that.
  • Flaws in fielded systems which are due to requirements errors are more costly to fix than those due to architectural, analytical, implementation or configuration errors. I did have numbers for this when I worked for BAE SYSTEMS, but obviously such data are commercially sensitive and so don't get shown publicly (or kept hold of when you leave the company).
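As an illustration of the first point, here is a minimal TDD-style sketch in Python. The function and its behaviour are hypothetical, invented for the example; the point is that the tests are written first and pin down the well-defined input/output behaviour, so a failing test is the to-do list when you come back from an interruption:

    import unittest

    def shipping_cost(weight_kg):
        # Implementation written after (and driven by) the tests below:
        # 2.50 per kilogram, with a minimum charge of 5.00.
        return max(weight_kg * 2.5, 5.0)

    class ShippingCostTest(unittest.TestCase):
        # These tests existed before shipping_cost did; while they failed,
        # they said what needed doing next.
        def test_light_parcel_is_charged_the_minimum(self):
            self.assertEqual(shipping_cost(1.0), 5.0)

        def test_heavy_parcel_is_charged_by_weight(self):
            self.assertEqual(shipping_cost(10.0), 25.0)

    if __name__ == '__main__':
        unittest.main()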
So, I read the original waterfall and spiral papers, and looked back at XP, and here are some points that seem to be missing from many references to pre-Scrum processes:

The Waterfall

The Waterfall process was described by Winston Royce in his 1970 paper "Managing the Development of Large Software Systems." It was created as an improvement over the earlier 'stage-wise' process, where system requirements led to software requirements, which led to [domain] analysis, which led to program design, which led to coding, which led to testing, which led to operations. The nomenclature of 'waterfall' and 'stage-wise' originates with Boehm, whose spiral process is summarised later. Royce observed that what are now referred to as emergent properties (he gives the examples of "timing, storage, input/output transfers") caused the issues with the 'stage-wise' process. Emergent properties were only observed at the end of the process, during integration, testing and operations, and, if negative, required a redesign. Royce also observed that feedback had to exist between stages, illustrating feedback between successive stages and giving an example of feedback which jumped back several stages. The further back the feedback has to jump, the more costly the rework. To mitigate the cost of rework, he proposed five steps:

"Design comes first"

Rather than spending a long time doing detailed analysis up front (he gives the example of fully worked orbital mechanics formulas), first do a broad-brush architecture and break the system into modules. Allocate resources to these modules so the system can meet its performance and cost requirements. Then let those constraints inform the analysis stage, so that the analysis produces a feasible system.
(System Requirements)
    -> (Software Requirements)          -^
        -> (Program Design)             -^-^
            -> (Analysis)               -^-^-^
                -> (Coding)             -^-^-^-^
                    -> (Testing)        -^-^-^-^-^
                        -> (Operations) -^-^-^-^-^-^
With the 'waterfall' as presented by Royce in 1970, feedback may exist between each stage, though to reduce the cost of rework, steps are taken to reduce feedback.

"Document the design"

Royce says to "write an overview document that is understandable, informative and current." A document can't be current if it is set in stone. The documentation tells the testing and operations personnel what the system is meant to do - it is not a detailed implementation description. If there is no documentation of the design, then the only way to understand the system well enough to change it is to rewrite it from scratch. As to how much documentation, Royce suggests about 1500 pages of requirements documentation for a 5 million dollar project (in 1970), which is around 30 million dollars in today's money. 30 million dollars is quite a big project (he was talking about large projects, after all) - of the order of three hundred man-years. Five to ten pages of requirements per developer-year doesn't seem excessive - the Scrum teams I've been in have generated about one page per week per developer!
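The arithmetic behind that claim, spelled out - the inflation factor is a round number and the loaded cost per developer-year is my assumption, not a figure from Royce:

    # Rough, assumption-laden arithmetic for 'five to ten pages per developer-year'.
    project_cost_1970 = 5000000        # dollars - Royce's example project
    inflation_factor = 6               # 1970 dollars to ~2013 dollars, roughly
    pages_of_requirements = 1500       # Royce's suggested volume
    project_cost_today = project_cost_1970 * inflation_factor   # ~30 million
    for cost_per_developer_year in (100000, 200000):   # assumed loaded cost range
        developer_years = project_cost_today / cost_per_developer_year
        pages_per_developer_year = pages_of_requirements / developer_years
        print(developer_years, 'developer-years:',
              pages_per_developer_year, 'pages per developer-year')
    # prints 300.0 developer-years: 5.0 ... and 150.0 developer-years: 10.0 ...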

"Do it twice"

Royce suggests creating a prototype and using simulated environments to investigate emergent properties before committing to the final design. He suggests going round the design-to-test loop twice. This is common engineering practice for hybrid systems.
This is the 'build one to throw away' idea, later echoed in 'The Mythical Man-Month' and then taken further in the spiral and agile methodologies.
(System Requirements)
    -> (Software Requirements)          
        [prototype]
        -> (Program Design)
            -> (Analysis)              
                -> (Coding)             
                    -> (Testing)        
        [product]
        -> (Program Design)
            -> (Analysis)              
                -> (Coding)             
                    -> (Testing)        
                        -> (Operations) 
By prototyping, there is a planned feedback path from testing to the next phase of design rather than many feedback paths from all stages to previous ones.

"Plan, control and monitor testing"

"If it is argued that only the designer can perform a thorough test because only he understands the area he built, this is a sure sign of a failure to document properly." Having the only definition of what the software should be doing in the head of the designer is not a good state of affairs for a company, nor is having tests which are not repeatable. Although written before wide-spread automated testing (as far as I am aware), Royce does state that "the computer itself is the best device for [final testing]" and advocate code review, ensuring all paths are tested and use of numerical rather than subjective checks.

"Involve the customer"

"For some reason what a software design is going to do is subject to wide interpretation even after previous agreement. It is important to involve the customer in a formal way so that he has committed himself at earlier points before final delivery." Nothing has changed in the last 40 odd years in that regard . The waterfall process recommends getting a customer on-board and involved in reviews throughout the process.

The Spiral

The spiral process, as described by Barry W. Boehm in his 1988 paper "A Spiral Model of Software Development and Enhancement", is an evolution of the waterfall, based on the following issues:
  • some projects following the waterfall have produced vast amounts of documentation, since fully comprehensive documents were required before each stage could be signed off
  • "in some areas [...] it is clearly unnecessary to write elaborate specifications for one's application before implementing it" - the areas mentioned are not hardware integration, but creating spreadsheet macros and ui
The spiral takes most of the waterfall, but instead of allowing feedback from one stage to the previous one, it repeats the whole waterfall process over four or more iterations, producing a concept of operations, a definition of the system environment, then successive prototypes, until a product is produced on the final cycle of the spiral.

Structured Iteration

Although Boehm mentions an iterative process, he describes one without structure. For IT software (as opposed to embedded or flight software - IT software is software that does not have to integrate with hardware, only with other software) there is no real difference between a simulation and a prototype, or between a prototype and a product. A fully featured simulation of a tax calculator is a tax calculator; a fully featured simulation of a UAV is not a UAV. Hence the Turing test. So instead of simulation and prototyping, the structured iterative process is similar to a spiral, but has a smaller cycle time - about six weeks per iteration - and aims to create a functional product at the end of each iteration, which is then tested, with the results of the testing feeding back into the next iteration. Iterations continue until the customer is satisfied. The iterative process was the one I mostly followed when I started as a professional in the early 1990s.

The RUP

This adds 'slug diagrams' to describe the amount of effort in each phase of spiral or waterfall processes. I am not aware of any evidence for the graphs. The less said about these, the better.

Extreme programming

Extreme programming is a set of practices intended to take the best of the then-current state of the art and do more of the good stuff - see Computerworld: "Metsker's project team built a call center application for a now-defunct telecommunications unit at Capital One using XP methods. Although he lauds the productivity gained by such XP methods as unit testing, peer code review and obtaining rapid feedback from an on-site customer, Metsker said his current project won't adopt full-scale XP." Unit testing, peer code review and obtaining rapid feedback from an on-site customer are all 'waterfall' practices - XP is extreme because it does more of the same, and without quantitative information it's not really possible to say whether Metsker was doing 'XP' or just doing 'waterfall' or 'iterative' properly.