CSC/ECE 517 Fall 2012/ch2a 2w11 aa
Introduction
Test driven development (TDD) is a process that tries to create the minimal amount of code needed to meet the customer's expectations. The idea is to test first, code second, and refactor (improve) last. This process forces software developers to focus on the customer's specifications and their validation first. Because the programmer proves at each step that the code meets its specification, TDD gives the programmer confidence in the code. The rest of this chapter gives the motivation for TDD, outlines the principles of TDD, shows the steps of TDD, and provides examples using TDD.
Motivation for TDD
- Testing is a key activity for improving the quality of code. In process models like the waterfall model, testing occurs toward the end of project development. As shown in Figure 1, introducing testing as a late phase increases the cost incurred to implement changes. From this we infer that there is a good chance of reducing costs by moving testing to the beginning of the process.
- The client can also be kept well informed about the design and can suggest changes early enough for them to be incorporated. This approach, known as TDD, provides the flexibility to accommodate the client's ever-changing requirements.
Principles
- Tests serve as examples of how to use a class or method. Once developers are used to having tests that show how things work (and that they work), they start asking whether a test already exists for the piece of code to be developed.
- Developer tests are distinctly different from QA (quality assurance) tests and should be kept separate. QA tests target features and treat the system as a black box. Unit tests created by the developer operate at a lower level and test different things.
- Name the tests carefully. Name test packages after the package being tested, with a suffix: for example, the "DataAccess" project/package is tested by "DataAccess.Test". Likewise, name test classes after the class under test with the suffix "Test": the class "PrintManager" is tested by the test class "PrintManagerTest". This convention makes it easy to find the related class and keeps the class name a noun. Finally, name test methods after the method being tested with the prefix "Test": the method "PrintProductOrder()" is tested by the method "TestPrintProductOrder()". This convention keeps the method name a verb that reads as an English phrase.
- Write each test before writing the method under test. It encourages the developer to think as a user of the target method before thinking about implementation, which usually results in a cleaner, easier-to-use interface.
- Follow the "3-As" pattern for test methods: Arrange, Act, Assert. Specifically, use separate code paragraphs (groups of lines of code separated by a blank line) for each of the As. Arrange declares and initializes variables. Act invokes the code under test. Assert uses the Assert.* methods to verify that expectations were met. A sketch of this layout appears after this list.
- When writing application code, only write enough code to make a test work. This technique prevents gold-plating and ensures that you always have a test for the code you write.
- When you find you need to refactor working code, refactor and re-test prior to writing new code. This technique ensures your refactoring is correct before new functionality is added, and it applies to everything: creating new methods, introducing inheritance, and so on.
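As a minimal sketch of the naming and 3-As conventions above, assuming a hypothetical HomeworkTest class and TestAverage method (a plain assert stands in for a real Assert.* framework, and the Homework class here is a stand-in for the one developed in the Examples section):

#include <cassert>

// Minimal stand-in for the Homework class developed in the Examples section.
class Homework {
public:
    int* grades;
    int numGrades;

    int average(int* g) {
        int avg = 0;
        for (int i = 0; i < numGrades; i++) {
            avg += g[i];
        }
        return avg / numGrades;
    }
};

// Test class named after the class under test, with the "Test" suffix.
class HomeworkTest {
public:
    // Test method named after the method under test, with the "Test" prefix.
    void TestAverage() {
        // Arrange: create the object under test and its input data
        Homework myHomework;
        int grades[] = {100, 50};
        myHomework.grades = grades;
        myHomework.numGrades = 2;

        // Act: invoke the code under test
        int result = myHomework.average(myHomework.grades);

        // Assert: verify that expectations were met
        assert(result == 75);
    }
};

int main() {
    HomeworkTest().TestAverage();
    return 0;
}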
Steps
Follow these steps:
- Understand the requirements of the story, work item, or feature that you are working on.
- Red: Create a test and make it fail.
Imagine how the new code should be called and write the test as if the code already existed. Create the new production code stub. Write just enough code so that it compiles. Run the test. It should fail. This is a calibration measure to ensure that your test is calling the correct code and that the code is not working by accident. This is a meaningful failure, and you expect it to fail.
- Green: Make the test pass by any means necessary.
Write the production code to make the test pass. Keep it simple. Some advocate the hard-coding of the expected return value first to verify that the test correctly detects success. This varies from practitioner to practitioner.
- If you've written the code so that the test passes as intended, you are finished. You do not have to write more code speculatively. If new functionality is still needed, then another test is needed. Make this one test pass and continue.
When the test passes, you might want to run all tests up to this point to build confidence that everything else is still working.
- Refactor: Change the code to remove duplication in your project and to improve the design while ensuring that all tests still pass.
Remove duplication caused by the addition of the new functionality. Make design changes to improve the overall solution. After each refactoring, rerun all the tests to ensure that they all still pass.
- Repeat the cycle. Each cycle should be very short, and a typical hour should contain many Red/Green/Refactor cycles.
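As a compact sketch of one full cycle, using a hypothetical square() function chosen purely for illustration (the Homework Grades Program in the next section walks through a longer example):

#include <cassert>

// Red: the test below was written first, against a stub that only compiled:
//     int square(int x) { return 0; }   // the test failed, as expected
// Green: write just enough code to make the test pass.
int square(int x) {
    return x * x;
}
// Refactor: nothing to clean up in code this small; in a larger change we
// would remove duplication and rerun all tests before starting the next cycle.

int main() {
    assert(square(3) == 9); // the test that drove the implementation
    return 0;
}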
Examples
Homework Grades Program
Setup
As a simple example, we are creating a program that keeps track of our homework grades. We envision being able to get the average of these homework grades. Step one: write a test. Let's test an average function:
myHomework = new Homework();
myHomework.grades = [100, 50];
assert(myHomework.average(myHomework.grades) == 75);
We will get multiple errors - this test won't even compile (but that's OK for now). Let's take a look at what will generate error messages:
- class Homework not declared
- Homework constructor not declared
- field grades not declared
- method average not declared
Now, we fix the first error:
class Homework { };
Second error:
Homework(void) { }
Third error:
int * grades;
Fourth error:
int average(int * grades) {
    return 0; // default return value
}
Finally, the test compiles! The code now looks like this:
class Homework {
    int * grades;

    Homework(void) { }

    int average(int * grades) {
        return 0; // default return value
    }
};
Red
Now, we run the test, and the familiar red bar of failure greets us (remember the mantra red-green-refactor). The assert fails. The average function needs to actually average something (not just return 0). As we think about averaging the grades, we realize we need to know how many grades are in the int array grades. So, we add to the code:
class Homework {
    int * grades;
    int numGrades; // new

    Homework(void) { }

    int average(int * grades) {
        int avg = 0;                         // new
        for(int i = 0; i < numGrades; i++) { // new
            avg += grades[i];                // new
        }                                    // new
        return avg / numGrades;              // new
    }
};
Of course, we must remember to change the test to:
myHomework = new Homework();
myHomework.grades = [100, 50];
myHomework.numGrades = 2;
assert(myHomework.average(myHomework.grades) == 75);
Green
Success! We have a green bar when we run it.
Refactor
The last step is refactoring. Perhaps we don't want a grade to be an int? Should it be an unsigned int? For this simple example, there isn't much refactoring to do, but in a larger example there may be multiple areas for improvement. One possible cleanup is sketched below.
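As a hedged sketch of one such refactoring (replacing the raw pointer and explicit count with a std::vector is our own suggestion, not part of the original example; the test is rerun afterward to confirm the bar stays green):

#include <cassert>
#include <vector>

class Homework {
public:
    std::vector<int> grades; // replaces "int * grades" plus "numGrades"

    int average() const {
        int sum = 0;
        for (int g : grades) {
            sum += g;
        }
        return sum / static_cast<int>(grades.size());
    }
};

int main() {
    // The rewritten test still asserts an average of 75.
    Homework myHomework;
    myHomework.grades = {100, 50};
    assert(myHomework.average() == 75);
    return 0;
}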
More Examples
See Test-Driven Development by Example by Kent Beck for more examples.
Conclusion
Test driven development inverts the traditional order of development: the test is written first, just enough code is written to make it pass, and the design is then cleaned up through refactoring. Repeating this short Red/Green/Refactor cycle keeps the code minimal and continuously verified, and it gives the programmer confidence that the code meets the customer's specifications.
References
- Beck, K. (2002). Test-Driven Development by Example. Addison-Wesley.
- Test-driven development. (n.d.). Wikipedia. http://en.wikipedia.org/wiki/Test_driven_development
- Test driven development. (2012, January 11). C2 Wiki. http://c2.com/cgi/wiki?TestDrivenDevelopment
- Ambler, S. W. (2002). Introduction to test driven development (TDD). Agile Data. http://www.agiledata.org/essays/tdd.html
- TestDriven.com. http://www.testdriven.com/