CSC/ECE 517 Fall 2012/ch2a 2w11 aa

From Expertiza_Wiki
Revision as of 20:16, 23 October 2012 by Annice (talk | contribs)

Introduction

Test driven development (TDD) is a process that tries to create the minimal amount of code needed to meet the customer's expectations. The idea is to test first, code second, and improve (or refactor) last. This process forces software developers to focus on customer specifications and validation first. Since at each step of the way the programmer proves to himself that the code meets specifications, TDD gives the programmer confidence. The rest of this chapter gives the motivation for TDD, shows the steps of TDD, outlines the principles of TDD, and provides examples using TDD.

Motivation for TDD

  • Testing is the one activity that most directly improves the quality of code. In traditional processes such as the waterfall model, testing occurs toward the end of development. As figure 1 shows, introducing testing as a late phase increases the cost of implementing changes; moving the test phase to the beginning of the process therefore has a high chance of reducing costs.
  • The client can also be kept well informed about the design and can suggest changes early enough to be incorporated cheaply. This approach, known as TDD, provides the flexibility to accommodate the client's ever-changing requirements.

Principles

  • Tests serve as examples of how to use a class or method. Once used to having tests that show how things work (and that they work), developers start asking whether a test already exists for the piece of code to be developed.
  • Developer tests are distinctly different from QA (Quality Assurance) tests and should be kept separate. QA tests target features and treat the system as a black box. Unit tests created by the developer operate at a lower level and test different things.
  • Name the tests carefully. Name test packages after the package being tested, with a suffix: for example, the "DataAccess" project/package is tested by "DataAccess.Test". Name test classes after the class under test, with the suffix "Test": the class "PrintManager" is tested by "PrintManagerTest". This convention makes it easy to find the related class and keeps the class name a noun. Name test methods after the method being tested, with the prefix "Test": the method "PrintProductOrder()" is tested by "TestPrintProductOrder()". This convention keeps the method name a verb that reads as an English phrase.
  • Write each test before writing the method under test. It encourages the developer to think as a user of the target method before thinking about implementation, which usually results in a cleaner, easier-to-use interface.
  • Follow the "3-As" pattern for test methods: Arrange, Act, Assert. Specifically, use separate code paragraphs (groups of lines of code separated by a blank line) for each of the As. Arrange is variable declaration and initialization. Act is invoking the code under test. Assert is using the Assert.* methods to verify that expectations were met.
  • When writing application code, only write enough code to make a test work. This technique prevents gold-plating and ensures that you always have a test for the code you write.
  • When you find you need to refactor working code, refactor and re-test prior to writing new code. This technique ensures your refactoring is correct before you add new functionality, and it applies to creating new methods, introducing inheritance, everything.

Steps

Follow these steps:

  • Understand the requirements of the story, work item, or feature that you are working on.
  • Red: Create a test and make it fail.

Imagine how the new code should be called and write the test as if the code already existed. Create the new production code stub. Write just enough code so that it compiles. Run the test. It should fail. This is a calibration measure to ensure that your test is calling the correct code and that the code is not working by accident. This is a meaningful failure, and you expect it to fail.

  • Green: Make the test pass by any means necessary.

Write the production code to make the test pass. Keep it simple. Some advocate hard-coding the expected return value first to verify that the test correctly detects success; this varies from practitioner to practitioner. If you've written the code so that the test passes as intended, you are finished. You do not have to write more code speculatively. If new functionality is still needed, then another test is needed. Make this one test pass and continue. When the test passes, you might want to run all tests up to this point to build confidence that everything else is still working.

  • Refactor: Change the code to remove duplication in your project and to improve the design while ensuring that all tests still pass.

Remove duplication caused by the addition of the new functionality. Make design changes to improve the overall solution. After each refactoring, rerun all the tests to ensure that they all still pass.

  • Repeat the cycle. Each cycle should be very short, and a typical hour should contain many Red/Green/Refactor cycles.

Examples

Homework Grades Program

Setup

As a simple example, we are creating a program that keeps track of our homework grades. We envision being able to get the average of these grades. Step one: write a test. Let's test an average function:

  Homework *myHomework = new Homework();
  myHomework->grades = new int[2] {100, 50};
  assert(myHomework->average(myHomework->grades) == 75);

We will get multiple errors - this test won't even compile (but that's OK for now). Let's take a look at what will generate error messages:

  • class Homework not declared
  • Homework constructor not declared
  • field grades not declared
  • method average not declared

Now, we fix the first error:

class Homework {

};

Second error:

Homework(void) {

}

Third error:

int * grades;

Fourth error:

int average(int * grades) {
  return 0; // default return value
}

Finally, the test compiles! The code now looks like this:

class Homework {

  int * grades;

  Homework(void) {

  }

  int average(int * grades) {
    return 0; // default return value
  }

};

Red

Now, we run the test, and the familiar red bar of failure greets us (remember the mantra red-green-refactor). The assert fails. The average function needs to actually average something (not just return 0). As we think about averaging the grades, we realize we need to know how many grades are in the int array grades. So, we add to the code:

class Homework {

  int * grades;
  int numGrades;                         // new

  Homework(void) {

  }

  int average(int * grades) {

    int avg = 0;                         // new

    for(int i = 0; i < numGrades; i++) { // new
      avg += grades[i];                  // new
    }                                    // new

    return avg / numGrades;              // new

  }

};

Of course, we must remember to change the test to:

  Homework *myHomework = new Homework();
  myHomework->grades = new int[2] {100, 50};
  myHomework->numGrades = 2;
  assert(myHomework->average(myHomework->grades) == 75);

Green

Success! We have a green bar when we run it.

Refactor

The last step is refactoring. Perhaps we don't want a grade to be an int? Should it be an unsigned int? For this simple example, there isn't much refactoring to do, but in a larger example there may be multiple areas for improvement.

More Examples

See Test-Driven Development by Example by Kent Beck for more examples.

Characteristics Of A Good Unit Test

A good unit test has the following characteristics.

  • It is important that the test executes fast. If the tests are slow, they will not be run often.
  • Environmental dependencies such as databases, file systems, networks, queues, and so on which will slow down the tests are either separated or simulated. Tests that exercise these will not run fast, and a failure does not give meaningful feedback about what the problem actually is.
  • The scope of testing is very limited. If the test fails, it's obvious where to look for the problem. It's important to only test one aspect in a single test.
  • The test should run and pass in isolation (on any machine). If the tests require special environmental setup or fail unexpectedly, then they are not good unit tests.
  • Such a test often uses stubs and mock objects. If the code being tested typically calls out to a database or file system, these dependencies must be simulated, or mocked. These dependencies will ordinarily be abstracted away by using interfaces.
  • The intention of testing is clearly revealed. Another developer can look at the test and understand what is expected of the production code.

Benefits Of TDD

  • Automated testing: Automation makes testing much easier and faster. Manual testing is slower and requires human interaction, which is error-prone. Automated testing ensures that no test is skipped each time the system is tested, whereas humans may mistakenly leave a test out.
  • Better testing of new/modified functionality: With manual testing, whenever you add new functionality or modify existing functionality, QA personnel must test every part of the system that may be affected by the change, which is time-consuming and may still leave a hidden bug behind. With TDD, whenever functionality is added or modified, the whole test suite is run to confirm that all existing tests still pass, ensuring that existing functionality is not broken.
  • Developer’s confidence: With TDD, developers can change the system more safely, because any inappropriate change will cause some tests to fail. In a non-TDD system, developers need to take more care when changing existing code, as a new change may break other functionality. With TDD, a developer who runs the tests after a modification will immediately see whether the change has broken some other part of the system.
  • Manual testing stage is shortened: In non-TDD development, as soon as developers prepare a build, QA personnel start testing. This manual testing takes a considerable amount of time and delays delivery of the product. With TDD, a major part of the system is tested during development, so less QA involvement is needed between the build and the final delivery of the product.
  • Alternative documentation: Unit tests serve as a kind of documentation for the system. Each unit test describes an individual requirement of the module or system. For example, the following test ensures that a logged-in user with a sufficient balance can buy an item when the item exists in stock. This test reflects a user-level scenario of the system.
    public void CheckUserCanPurchaseItemWithBalance() {
        LoginToTheSystem();
        SelectAnItemToBuy();
        MakeSureItemExistsInStock();
        MakeSureUserHasBalanceToBuy();
        RunTheTransaction();
    }
  • Better way to fix bugs: In the TDD approach, when QA finds a bug, the developer first writes a test that reproduces the bug (that is, a test that fails because of the bug). The developer then modifies the code until the test passes (i.e., the bug is fixed). This ensures that the bug will not reappear, since a test now guards against it.
  • Repetition of the same bug reduced: With TDD, once a bug is found, it is put under test. Every time the whole test suite is run, the tests associated with the bug run too and ensure the bug is not reintroduced.

Shortcomings Of TDD

  • The actual database or external file is never tested directly by TDD; only production code exercises it.
  • If not used carefully, TDD can add to the total cost and complexity of the project.
  • TDD is difficult to apply to user-interface testing.
  • TDD is highly reliant on refactoring and on programmer skill.

Conclusion

TDD's short Red/Green/Refactor cycles keep development focused on customer requirements, give developers confidence that each change still meets specification, and leave behind a suite of fast automated tests that documents how the system works.
