Posted by Cary Toor

I am going to set some boundaries for this discussion and focus just on the lifecycle phases of Coding/Unit Testing through Integration Testing.  Collecting requirements, the prerequisite for these phases, has its own set of challenges including: (1) getting business stakeholders and application users (which I will refer to as users) to focus on the process, (2) getting users to actually understand the requirements which are identified (which is why you might want to use visualizations), and (3) getting signoff before design and development.  For the purposes of this discussion, let’s assume that the requirements collection process was a success (meaning that the requirements collected actually come close to what needs to be built).  I am also going to push User Acceptance Testing (UAT) outside of this discussion, since it deserves its own in-depth analysis of its impact on scheduling, delays, and customer satisfaction.

If we lop off UAT, I think it is a reasonable assumption that a typical project breaks down roughly as follows:

  • Requirements and Design: 25% – 30% of the project effort.
  • Coding and Unit Testing: 50% – 60% of the project effort.
  • System Integration Testing (SIT): 25% – 30% of the project effort.

Once the requirements are collected, the process for creating a project plan is basically as follows (a minimal sketch of this breakdown appears after the list):

  1. Group the requirements by functionality
  2. Break the project down into modules (or Sprints in the Agile world) containing related functionality
  3. Decompose modules into specific coding tasks
  4. Estimate the time required to complete the coding and unit testing for each task
  5. Estimate the time required for Module-level and System Integration Testing.
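
To make the plan structure concrete, here is a minimal sketch in Python of how requirements grouped into modules and tasks might be rolled up into estimates. The class names, field names, and figures are my own illustrative assumptions, not part of any particular methodology or tool.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A single coding task (step 3), estimated in days (step 4)."""
    name: str
    coding_days: float                      # coding + unit testing estimate
    dependencies: list = field(default_factory=list)

@dataclass
class Module:
    """Related functionality grouped together (steps 1-2)."""
    name: str
    tasks: list = field(default_factory=list)
    module_test_days: float = 0.0           # Module-level Testing estimate (step 5)

def plan_totals(modules, integration_test_days):
    """Roll task- and module-level estimates up into a project total."""
    coding = sum(t.coding_days for m in modules for t in m.tasks)
    module_testing = sum(m.module_test_days for m in modules)
    return {
        "coding_and_unit_testing": coding,
        "module_testing": module_testing,
        "integration_testing": integration_test_days,
        "total": coding + module_testing + integration_test_days,
    }

# Example: one module with two small, loosely coupled tasks.
orders = Module(
    name="Order Entry",
    tasks=[Task("Create order form", 2.0), Task("Validate order totals", 1.5)],
    module_test_days=1.0,
)
print(plan_totals([orders], integration_test_days=2.0))
```

Keeping tasks at this granularity also makes the next point easier to act on: small, loosely coupled tasks are the ones that can actually be estimated and tested independently.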

It is my belief that most experienced senior developers are fairly good at estimating how long a coding task will take in their environment (and may even over-estimate the time required).  Of course, project plans built from granular tasks with few external dependencies can be estimated far more accurately than plans built from expansive tasks with many external dependencies.  So, build your project plan with that in mind.

If, as I previously suggested, senior developers are pretty good at estimating the time required for development, why are projects always falling behind schedule?  While we may be good at estimating coding time, we are terrible at estimating the testing time!

Whether development is done using Agile Sprints or traditional milestones, at the end of each related series of tasks it is necessary to test the modules to make sure that all of the individual tasks work together.  I will refer to this as Module-level Testing.  This is the root of schedule breakdowns.

 

Scheduling Considerations for Testing

Before I talk about Module-level Testing, I want to take a short detour and discuss two factors which drive the time and cost of testing:

  • One of the major themes of successful software development is that testing finds and resolves problems (issues, bugs, etc.) most quickly when it is (a) done on small pieces of code and (b) done close to the time of actual coding. The corollary is that as a project moves further away from a coding task and into larger pieces of code, testing becomes more difficult, more time consuming, more expensive, and less predictable.  This means that successful scheduling requires that problems be identified as close to the coding task as possible.
  • Testing is a cycle which generally works as follows:
    • A QA tester (typically either an analyst or QA person, but not a developer) tests the application, identifies a problem, and documents that problem for the developer to fix.
    • The developer attempts to duplicate the problem. If the developer can duplicate the problem, he or she can resolve that problem. If the developer cannot duplicate the problem, he or she will send it back to the QA tester for more information. 
    • The QA tester verifies that the problem is resolved. If the problem is not resolved, the tester will document the issue and send it back to the developer for resolution.  If the problem is resolved, the QA tester will close the problem and move on.

Although this is a straightforward and standard process, it is the number of iterations through these steps for a single problem that can kill a project schedule and budget.  In my own measurements of this process over a large number of projects, I have found that the top 25%-30% of developers can resolve a problem in 2 iterations or less (but never in 1 iteration).   In general, a strong team of a QA tester and a developer can reliably close issues in 3 iterations or less.  A weak QA tester and/or developer can push this into the area of 4-plus iterations. 

If a project can close problems in 2 iterations or less on average, the test/fix cycle will take half the time it would take if it averaged 4 iterations (or the team can fix double the number of bugs in the same amount of time).
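
To put numbers on that claim, here is a minimal sketch. The per-iteration hours are illustrative assumptions (the article does not give per-step durations); only the ratio matters.

```python
# Rough model of the test/fix cycle above; the per-iteration hours are
# illustrative assumptions, not measurements.
TEST_HOURS_PER_ITERATION = 1.0   # QA tester reproduces, documents, verifies
FIX_HOURS_PER_ITERATION = 2.0    # developer duplicates and attempts a fix

def cycle_hours(avg_iterations, open_problems):
    """Total test/fix effort for a batch of problems."""
    per_problem = avg_iterations * (TEST_HOURS_PER_ITERATION + FIX_HOURS_PER_ITERATION)
    return per_problem * open_problems

problems = 50
strong = cycle_hours(avg_iterations=2, open_problems=problems)
weak = cycle_hours(avg_iterations=4, open_problems=problems)
print(f"2-iteration average: {strong:.0f} hours for {problems} problems")
print(f"4-iteration average: {weak:.0f} hours for {problems} problems")
# Halving the average iteration count halves the test/fix time --
# or doubles the number of bugs closed in the same window.
```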

 

Module-Level Testing

Moving back to Module-level Testing, let’s start with the assumption that the Coding phase of the project includes unit testing, which ensures that each coding task (a) does not break and (b) meets its associated requirements.  This needs to be done by the developer immediately after the completion of coding and prior to the developer moving on to another task.  Alternatively, unit testing could be done by a QA tester in conjunction with a developer, but this is not the standard approach due to (a) a lack of testing resources available on a typical project, (b) the additional time required in the schedule, and (c) significantly increased cost.

For unit testing to be effective, each individual task in the project plan needs to be defined so that it can be tested independently of other tasks (as much as possible), either using automated test cases or manual testing.  Unfortunately, for whatever reason (including the fact that unit testing is nobody’s favorite activity), unit testing is often done poorly or not at all, and the developer moves on to the next task before properly completing the unit testing on the prior task.
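
As an illustration of what an independently testable task with automated test cases can look like, here is a small example using Python's built-in unittest module. The order_total function and its requirement are hypothetical, not taken from any project described here.

```python
import unittest

def order_total(line_items, tax_rate):
    """Hypothetical coding task: compute an order total with tax."""
    subtotal = sum(qty * price for qty, price in line_items)
    return round(subtotal * (1 + tax_rate), 2)

class OrderTotalTest(unittest.TestCase):
    def test_meets_associated_requirement(self):
        # (b) the task meets its associated requirement
        self.assertEqual(order_total([(2, 10.00), (1, 5.00)], 0.10), 27.50)

    def test_does_not_break_on_empty_order(self):
        # (a) the task does not break on an edge case
        self.assertEqual(order_total([], 0.10), 0.0)

if __name__ == "__main__":
    unittest.main()
```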

Module-level testing should be done by someone other than the developer, such as an analyst or QA tester.  Since unit testing is typically done by the developer, this is the first time someone other than a developer will put his or her hands on the code. 

We can expect the following issues to come up in module-level testing:

  • If the developers (or even one developer) haven’t done a great job of unit testing, this is where the remaining unit test problems will most likely be discovered and resolved. Fixing unit test bugs eats into the time reserved for module-level testing.  Further, to keep the project on schedule, this level of testing often becomes much less thorough than planned (which will cascade to Integration Testing down the road).
  • Because individual tasks are brought together, this is typically the first time that work flow and business logic can be tested, causing the identification of inconsistent and/or incomplete requirements, and bad assumptions on the part of the development team. In many cases, requirements clarification is needed, a process which can cause a significant time lag while talking with and building consensus among users. 
  • In Agile development (and hopefully in more traditional development methodologies as well), completed work is shown to the users at this point for feedback. Users may request changes which, even if relatively simple to implement, don’t typically get thoroughly tested, cascading additional problems down the road to Integration Testing.

All of these issues are difficult to put into a project plan and very difficult to forecast.  However, the pattern is clear.  Problems not resolved in Unit Testing reappear in Module-level Testing, where they are more time consuming and expensive to resolve than they would have been during Unit Testing.  Moving forward, issues not resolved in Module-level Testing will reappear in Integration Testing, where they are yet more time consuming and expensive to resolve.
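
The escalation pattern can be made concrete with a back-of-the-envelope model. The relative fix costs and catch rates below are purely illustrative assumptions (the article gives no figures); the point is only that defects caught later cost disproportionately more.

```python
# Illustrative only: the relative fix costs and catch rates are assumptions.
FIX_COST = {"unit": 1.0, "module": 3.0, "integration": 6.0}  # effort per defect

def total_fix_effort(defects, caught_in):
    """Total fix effort given where each share of the defects is caught."""
    return sum(defects * share * FIX_COST[phase] for phase, share in caught_in.items())

# Scenario A: thorough unit testing catches most defects early.
good = total_fix_effort(100, {"unit": 0.7, "module": 0.2, "integration": 0.1})
# Scenario B: weak unit testing pushes the same defects downstream.
poor = total_fix_effort(100, {"unit": 0.2, "module": 0.4, "integration": 0.4})

print(f"Thorough unit testing: {good:.0f} units of fix effort")
print(f"Weak unit testing:     {poor:.0f} units of fix effort")
```

Under these assumed numbers, the weak-unit-testing scenario roughly doubles the total fix effort for the same set of defects, which is the pattern that carries forward into Integration Testing below.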

 

System Integration Testing

In System Integration Testing (or just integration testing for short) all of the separate modules in the application are brought together and tested.  The project will experience the same problems as it did in Module-level Testing, although magnified by the larger amount of code and further distance from the original coding task.  In other words:

  • Unit test bugs not found in unit testing or module testing will cascade down to integration testing.
  • Module level bugs not found in module-level testing will cascade down to integration testing.
  • Additional work flow and business logic problems and inconsistencies will be discovered at this point and may require additional consultations with users.

If the project did a poor job of unit testing, which reduced the amount of time available for module-level testing, the number of problems cascading into integration testing will increase.  The number of these problems is difficult to predict and schedule for, and their resolution is more time consuming and expensive at this point.

During testing, developers typically work from a list of issues with a limited amount of time to fix each issue.  As discussed above, it takes multiple iterations to close these problems, and each additional iteration adds time and cost.  Additionally, each fix attempt may cause new problems somewhere else in the application.

Another complication at this point is managing project staff.  As you get closer to the deadline, it is everyone’s inclination to add, and certainly not to reduce, staff.  Many times at this point in the project, the business people responsible for the project are putting pressure on the project team to add resources to get the project wrapped up. Unfortunately, this is not the best strategy and may even introduce substantial delays. 

I have found through a lot of experience that early in integration testing the best strategy is to get the original developer to fix problems with his or her code and reduce the number of touches on any individual piece of code.  Additional “hands” who do not understand the application or the code have a long learning curve at this point and are typically detrimental, not helpful.  As the deadline gets closer, the more hands on the code, the more likely it is that new problems will be created.  This spawns the deadly cycle of fixing one problem and creating another problem somewhere else in the application. 

To eliminate this problem, the project team needs to get smaller, not larger.  Only one developer can be touching each related section of code at this point.  This is essential to getting the project completed!

 

Some Basic Rules

Although I mentioned many of these solutions above, I wanted to summarize some of the concrete steps a development manager can take to reduce the testing problem and, hopefully, keep software projects on schedule.

  • When defining project tasks, make sure they are small pieces of the application, not requiring more than 1-2 days of development. Attempt to limit the dependencies with other tasks and architect the application so that each task can be tested on a stand-alone basis.
  • Verify that unit testing is actually being done and, when it is done, that it is being done thoroughly. I recommend the following:
    • Randomly select coding tasks in which a QA tester works with the developer during unit testing. This will, hopefully, make unit testing better without the resources needed to have a QA tester actually perform unit testing for all tasks.
    • Use automated test cases and have the developer define these cases prior to starting development (although you will need to ensure that the test cases thoroughly test the requirements).
  • Make sure that testing at the module level is thorough. Don’t let the project move forward unless Module-level Testing has been completed.
  • A large percentage of the uncertainty associated with testing is related to the number of iterations it takes for a bug to get resolved. Make sure that the documentation provided by your QA tester to your developer is complete and accurate.  This is definitely worth some training time.
  • As the project winds down, do not increase, but reduce the number of developers touching the code.

These rules can make a substantial difference in your ability to meet project schedules.