CSC/ECE 517 Spring 2013/ch1 1a MK
Revision as of 03:53, 8 February 2013
Writing Meaningful Test Cases
Introduction
When testing an application, whether at the object or class level with unit testing, validating object interaction through integration testing, or in system test with various functional testing methods, many testers fail to exercise the most integral system components. Even with nearly 100% code coverage, many of the tests developed rely on assertions that return inaccurate results. Writing meaningful test cases means developing tests for the most valuable system components and selecting the proper assertions.
Unit Test
The purpose of the unit test process is to ensure that each line of code in a module functions correctly and meets functional and technical requirements. In Test Driven Development, test code is developed before or alongside application code. Automated unit tests execute application code after it is built and report on the results. Unit tests are a critical tool for any developer: they allow us to quickly and easily test our code in a rerunnable, maintainable way.
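As a minimal sketch of the same idea in Ruby (this article's later examples use Java's JUnit), here is a rerunnable automated test written with the minitest framework that ships with Ruby; the add method is purely illustrative:

```ruby
require "minitest/autorun"

# The unit under test: a small, isolated piece of application code
def add(a, b)
  a + b
end

# A rerunnable, automated test for that unit
class AddTest < Minitest::Test
  def test_adds_two_numbers
    assert_equal 5, add(2, 3)
  end

  def test_adds_negative_numbers
    assert_equal(-1, add(2, -3))
  end
end
```

Running the file executes every test method and reports the results, so the same checks can be repeated cheaply after each change.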
Advantages and Limitations
Advantages
Unit testing provides the following advantages
- Catch bugs at time of implementation by testing as you develop. Small units of work are easier to code and debug. If you write all of your code and start testing when coding is complete, then testing will be much more difficult.
- Easily test changes in code. Because the unit test is designed to test the business case, if the technical implementation changes, the unit test can be rerun to ensure that the technical redesign has not changed the program's result.
- Prove to your supervisor and yourself that the code works after refactoring or adding functionality
- Verify 'corner' cases that may not be tested within the System Test phase
- Demonstrate a level of quality to the client
- Ensure that no other developer has undermined the quality of your code
Limitations
Software testing is a combinatorial problem. For example, every boolean decision statement requires at least two tests: one with an outcome of "true" and one with an outcome of "false". As a result, for every line of code written, programmers often need 3 to 5 lines of test code.[3] This takes time, and the investment may not always be worth the effort. There are also many problems that cannot easily be tested at all, for example those that are nondeterministic or involve multiple threads. In addition, the code written for a unit test is likely to be at least as buggy as the code it is testing.
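The combinatorial point can be illustrated with a single boolean decision, which already demands two test cases; the eligible? method here is a hypothetical example:

```ruby
# One boolean decision: age >= 18 can evaluate to true or to false
def eligible?(age)
  age >= 18
end

# At minimum, one test per outcome of the decision
raise "true branch failed"  unless eligible?(21) == true
raise "false branch failed" unless eligible?(16) == false
puts "both outcomes of the decision are covered"
```

Each additional decision multiplies the number of paths, which is why test code grows faster than the code it exercises.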
Best Practices
Testing Trivial Code
Unit tests should test those segments of code that are likely to have defects when first developed, or are likely to have defects introduced when changed. Like all software development activities, there is a cost benefit analysis that can be applied to writing unit tests. For normal enterprise business software, it is not worthwhile to test trivial code.
Typical examples of trivial code in Java include simple getter and setter methods for properties and simple constructors.
class User {
    private String name;

    public String getName() { return name; }
    public void setName(String newName) { name = newName; }
}

class UserTest extends TestCase {
    public void testNameProperty() {
        User user = new User();
        assertNull(user.getName());
        String testName = "test";
        user.setName(testName);
        assertEquals(testName, user.getName());
    }
}
Using hardcoded values when checking test results
Some tests have a hardcoded value when checking the results of some operation. This value is often separately hardcoded in the application code being tested. When the value is changed in the application code, the test is guaranteed to fail.
class CustomerWebController {
    public String doOperationReturningNextPage(UserInput input) {
        // some random logic...
        return "newCustomer.jsp";
    }
}

class CustomerWebControllerTest extends TestCase {
    public void testDoOperation() {
        CustomerWebController controller = new CustomerWebController();
        UserInput input = getInputForNewCustomer();
        String result = controller.doOperationReturningNextPage(input);
        assertEquals("newCustomer.jsp", result);
    }
}
The DRY Principle (Don't Repeat Yourself)
The fix for this is simply the application of the DRY principle: Don't Repeat Yourself (from the book The Pragmatic Programmer: From Journeyman to Master by Andrew Hunt and David Thomas). When you use a magic value more than once, define it as a constant (or method) and refer to that constant (or method) everywhere. To improve the example, we define a constant for the "newCustomer.jsp" value.

class CustomerWebController {
    public static final String NEW_CUSTOMER_PAGE = "newCustomer.jsp";

    public String doOperationReturningNextPage(UserInput input) {
        // some random logic...
        return NEW_CUSTOMER_PAGE;
    }
}

class CustomerWebControllerTest extends TestCase {
    public void testDoOperation() {
        CustomerWebController controller = new CustomerWebController();
        UserInput input = getInputForNewCustomer();
        String result = controller.doOperationReturningNextPage(input);
        assertEquals(CustomerWebController.NEW_CUSTOMER_PAGE, result);
    }
}
Being too dependent on specific test data
There is some debate whether unit tests should involve the database (purists tend to argue not), but in practice this is quite common and serves a useful purpose. However, problems often occur because such tests are overly dependent on specific data in the database. As new tests are written and new test data is added, this can cause existing, unrelated tests to fail.
class CustomerDataAccess {
    public List findCustomers(CustomerCriteria criteria) {
        // Logic to query database based on criteria
        return customersFound;
    }
}

class CustomerDataAccessTest extends TestCase {
    public void testFindCustomers() {
        CustomerDataAccess customerDataAccess = new CustomerDataAccess();
        CustomerCriteria criteria = new CustomerCriteria();
        String firstNameToFind = "Bob";
        criteria.firstNameEquals(firstNameToFind);
        List results = customerDataAccess.findCustomers(criteria);
        assertEquals(2, results.size());
    }
}
This example is based on real code that I came across. The problem is that the test expects there to be exactly two customers with a first name of 'Bob', which must have been the case when the test was first written and executed. At some later date, while working on an unrelated piece of code, a developer could add another customer named Bob, and suddenly this test will fail. To fix this type of situation in general, minimize the number of assumptions you make about the test data. This reduces the coupling between the test and the data, which makes either one easier to change independently of the other.
To improve this particular example we simply need to change the test to check that each result matches the criteria we specified.
class CustomerDataAccessTest extends TestCase {
    public void testFindCustomers() {
        CustomerDataAccess customerDataAccess = new CustomerDataAccess();
        CustomerCriteria criteria = new CustomerCriteria();
        String firstNameToFind = "Bob";
        criteria.firstNameEquals(firstNameToFind);
        List results = customerDataAccess.findCustomers(criteria);
        for (Customer customer : results) {
            assertEquals(firstNameToFind, customer.getFirstName());
        }
    }
}
Code Coverage
An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not, and therefore may require additional attention.
http://en.wikipedia.org/wiki/Code_coverage
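To make the idea concrete, here is a minimal sketch using MRI Ruby's built-in Coverage module; the classify method and the temporary file are purely illustrative:

```ruby
require "coverage"
require "tempfile"

# Code to be measured, written to its own file so Coverage can track it
src = <<~RUBY
  def classify(n)
    if n >= 0
      "non-negative"
    else
      "negative"
    end
  end
RUBY

file = Tempfile.new(["classify", ".rb"])
file.write(src)
file.close

Coverage.start          # must be enabled before the file is loaded
load file.path
classify(5)             # exercises only the "non-negative" branch

result = Coverage.result[file.path]
# result holds one execution count per line (nil for non-code lines);
# the "negative" line was never executed, so its count is 0
puts result.inspect
```

Lines with a count of 0 are exactly the "not executed" parts the definition above says may require additional attention.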
Unit Test Tools to Consider
- AUnit (Ada)
- Check (C)
- CUnit (C)
- CUT (C, C++, Objective-C): works in embedded environments
- MinUnit (C): designed for embedded systems
- CppUnit (C++)
- PalmUnit (C++): for PalmOS applications
- QtUnit (C++): for applications using the Qt library
- unit++ (C++)
- HtmlUnit (Java API): web site content
- HttpUnit (Java API): web site content
- XMLUnit (Java API): XML content
- XSLTunit (XSLT content)
- EasyMock (Java): mock objects
- GrandTestAuto (Java)
- JTestCase (Java): JUnit add-on
- JUnit (Java): the parent of most xUnit frameworks
- JUnitEE (Java): JUnit add-on
- JUnitX (Java): JUnit add-on
- Mock Creator (Java): mock objects
- Mock Objects (Java): mock objects
- MockMaker (Java): mock objects
- Mockry (Java): mock objects
- jsAsserUnit (JavaScript)
- JsUnit (JavaScript): author is Edward Hieatt; note there are two tools with this name
- JsUnit (JavaScript): author is Jörg Schaible; note there are two tools with this name
- TagUnit (JSP): custom tags within Java Server Pages
- LingoUnit (Macromedia Director)
- csUnit (.NET)
- dotunit (.NET)
- NUnit (.NET)
- ObjcUnit (Objective-C)
- OCUnit (Objective-C)
- Test::MockObject (Perl): mock objects
- PerlUnit (Perl)
- phpAsserUnit (PHP)
- PhpUnit (PHP)
- PBUnit (PowerBuilder)
- PyUnit (Python)
- Ruby/Mock (Ruby): mock objects
- Test::Unit (Ruby)
- OTF (Smalltalk)
- SUnit (Smalltalk): the first xUnit framework, and the parent of JUnit
- vbUnit3 (Visual Basic): also has a commercial version
Functional Test
The purpose of functional testing is to measure the behavioral quality of a software application. Functional tests verify that the system responds appropriately from the user's perspective and functions according to the design requirements that specify the application. A functional test must determine whether each component of the application performs in accordance with the specification, responds correctly to all conditions that incoming events may present, processes events correctly from one business event to the next, and produces the appropriate outcome for each incoming event.
Advantages
Functional testing has several benefits when developing a robust software application. Here are the advantages:
- It verifies that an application works per its specifications across multiple platforms, browsers, and technologies.
- It makes sure that a given feature is complete from a user's point of view.
- A tester needs no knowledge of the implementation, including specific programming languages, in order to execute the tests.
- Testers and programmers are independent of each other.
- It helps expose any ambiguities or inconsistencies in the specification.
- Test cases can be designed as soon as the specification is complete.
Best Practices
Functional tests are the measure of an application's completeness, because they confirm that all features and functionality work as expected from an end user's perspective. It is therefore important to design functional tests that expose vulnerabilities and build confidence in the application.

Before writing a test, it is essential to understand how the software is supposed to behave. The functional specification document captures the behavior of the application, so the first step before writing any test cases is to write (or read) the functional specification. Once the specification is understood, organize the tests so that they cover every feature and functionality of the application. This ensures that all components are tested thoroughly and makes any gaps in testing easy to identify.

Writing each test case down before executing it is good practice: it forces you to state the expected outcome and any preconditions, which makes it possible to automate the test or to hand it to someone with no knowledge of the application's behavior. Test cases should also cover very rare situations. Developers tend to code to the specification and rarely handle unusual cases, yet the majority of bugs are found when applications are used under rare input conditions. Functional tests should therefore exercise boundary conditions, error cases, and rare inputs.
Writing functional tests
Writing functional tests is cumbersome, and it is easy to miss functionality, because an application may have several features and each feature may have several functions. It is therefore good practice to organize the test cases by feature and function so that none are missed. Here is a sample organization structure for functional test cases:
<Feature 1>
  <Function 1>
    <Test 1> </Test 1>
    <Test 2> </Test 2>
    ...
  </Function 1>
</Feature 1>
<Feature 2>
  <Function 1>
    <Test 1> </Test 1>
    ...
  </Function 1>
  ...
</Feature 2>
Test cases can be executed by anybody if they are written in a way that anyone can understand. A meaningful test case should have the following components. First, it should have a purpose statement describing what it is intended to test. Second, it should have a setup component that prepares the system before the test is executed. Third, it should list the steps needed to execute the test. Fourth, it should state the expected behavior used to decide whether the test passed or failed. Finally, it should have a cleanup procedure to ensure that the system is reverted to its original state. Here is a sample test case:
<Test 1>
  <Purpose> "Purpose of the test case" </Purpose>
  <Setup> "Steps to prepare the system before executing the test case" </Setup>
  <Execution> "Steps required to execute the test" </Execution>
  <Expected Behavior> "Expected behavior of the test case" </Expected Behavior>
  <Cleanup> "Steps required to clean up the environment and revert the system" </Cleanup>
</Test 1>
Writing functional test cases for 100% coverage is an impossible task, and a tester could keep writing functional tests forever. Even when many functional tests are available, situations arise where functional testing must be completed in a short time while still covering the major features. To handle this, each functional test should carry a priority: higher-priority tests cover key features, while lower-priority tests cover the less important features of the application.
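One way to realize such prioritization is sketched below in Ruby with hypothetical test names; real test frameworks usually offer tags or categories for the same purpose:

```ruby
# Each functional test carries a priority: 1 = key feature, 3 = minor feature.
# The lambdas stand in for real test bodies.
TESTS = [
  { name: "user can log in",              priority: 1, run: -> { true } },
  { name: "user can reset password",      priority: 2, run: -> { true } },
  { name: "profile photo can be cropped", priority: 3, run: -> { true } },
]

# When time is short, run only the higher-priority tests
TESTS.select { |t| t[:priority] <= 2 }.each do |t|
  puts "#{t[:name]}: #{t[:run].call ? 'pass' : 'fail'}"
end
```

Raising or lowering the cutoff trades coverage for execution time without editing the tests themselves.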
Functional test tools
Functional test suites are very large, hard to maintain, and time-consuming to execute manually. It is therefore good practice to automate the test cases. Several tools are available to automate functional testing; some of them are listed here.
Quick Test Professional
The Quick Test Professional tool automates functional tests through the user interface. It detects objects in the user interface and performs the desired operations through mouse clicks or keyboard events. The tool can automate functional tests on both graphical and non-graphical user interfaces. It also allows similar object definitions to be collected in a single repository manager and shared among testers, and it can maintain automated and manual test cases in the same repository.
JFunc
JFunc is an open source functional testing tool that helps automate functional test cases. JFunc is an extension of the JUnit test framework. It eases manual construction of functional test suites, supports multiple test failures so that a functional test does not exit after the first failure, provides more verbose assertions so that a failure carries more detail in its error messages, and allows new arguments to be passed so that each run need not use the same input values.
Watir
Watir is a web application testing tool based on the Ruby scripting language. Watir tests web applications in real web browsers on different platforms. It distinguishes itself from other web application functional testing tools by executing tests at the browser layer: it drives a web browser and interacts with the objects on a web page, much as a human would during manual testing. Here is a sample test case that validates a search operation:
#------------------------------------------------------------#
# Purpose: To validate the search capability of the browser.
#------------------------------------------------------------#
# Setup:
#------------------------------------------------------------#
require "watir"

test_site = "http://www.google.com"
browser = Watir::Browser.new
browser.goto test_site
browser.text_field(:name, "q").set "pickaxe"
#------------------------------------------------------------#
# Execution:
#------------------------------------------------------------#
browser.button(:name, "btnG").click  # "btnG" is the name of the Search button
#------------------------------------------------------------#
# Expected Behavior: "A Google page with results should be
# shown. 'Programming Ruby' should be high on the list."
#------------------------------------------------------------#
if browser.text.include? "Programming Ruby"
  puts "Test Passed. Found the test string: 'Programming Ruby'. Actual Results match Expected Results."
else
  puts "Test Failed! Could not find: 'Programming Ruby'."
end
#------------------------------------------------------------#
# Cleanup:
#------------------------------------------------------------#
browser.close
Integration Test
The purpose of integration testing is to test the combination of individual components working together. This type of testing can expose faults that result from the interaction between the integrated components.
Various Approaches to Integration Testing
- Big Bang - All or most of the individual components are combined and tested at one time. This approach is typically utilized when a test team receives an entire software bundle.
- Top Down - Top level components are tested initially. Lower level components are tested subsequently in a step by step method. This approach is used whenever development is taking the same type of approach. In this approach, test stubs are needed to simulate lower level components that are not yet available.
- Bottom Up - Bottom level components are tested initially. This approach is used whenever development is taking the same type of approach. In this approach, test drivers are needed to simulate higher level components that are not yet available.
- Sandwich/Hybrid - This approach is a combination of the Top Down and Bottom up approaches.
Advantages to Each Approach
- Big Bang
- Convenient for smaller systems.
- Quick.
- Cheap.
- No stubs or stand-in objects are needed.
- Top Down
- Potential for early identification of major flaws near the top of the product.
- Critical components are tested on priority.
- Easier to isolate root cause of interface errors due to the incremental approach.
- Bottom Up
- Potential for early identification of major flaws near the bottom of the product.
- Easier to create test conditions.
- No need to wait for all modules to be developed.
- Each component and unit gets tested for correctness before being integrated.
- Typically results in a more robust system.
- Sandwich/Hybrid
- Useful for larger projects with several subprojects.
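The stubs mentioned for the Top Down approach can be sketched in Ruby; CheckoutService and PaymentGatewayStub are hypothetical names for a top-level component and the stand-in for a lower-level component that is not yet available:

```ruby
# Stub simulating a lower-level component that has not been built yet
class PaymentGatewayStub
  def charge(_amount)
    "APPROVED"   # canned response instead of a real payment call
  end
end

# Top-level component under test, integrated against the stub
class CheckoutService
  def initialize(gateway)
    @gateway = gateway
  end

  def place_order(total)
    @gateway.charge(total) == "APPROVED" ? :confirmed : :rejected
  end
end

result = CheckoutService.new(PaymentGatewayStub.new).place_order(42)
puts result  # prints "confirmed"
```

When the real payment gateway becomes available, the stub is swapped out and the same test exercises the genuine integration.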
Best Practices
- Create only the integration tests needed. These tests offer great value at the cost of a great amount of work to properly set up and time to execute. Consider only testing the default scenarios. There should be enough testing to validate that critical or high-severity defects no longer exist.
- Do not depend on specific data to be available to the test. Always have any necessary data created prior to the execution of a test. Anyone with proper access could delete or modify test data and break a test, which is another reason it should not be assumed to be available.
- Use multiple asserts for each test. Because integration tests are time-consuming to execute, it is recommended that tests be consolidated; in this type of testing it is considered acceptable to ignore the one-assert-per-test rule.
- When seeking to validate the same functionality and there is an option between creating a unit test or an integration test, always choose the unit test. Unit tests will run faster and be easier to set up.
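The second practice above, creating test data instead of assuming it exists, might look like this in Ruby's minitest; CustomerStore is a hypothetical in-memory stand-in for the real data layer:

```ruby
require "minitest/autorun"

# Hypothetical in-memory stand-in for a customer table
class CustomerStore
  def initialize
    @customers = []
  end

  def add(name)
    @customers << name
  end

  def find(name)
    @customers.select { |c| c == name }
  end
end

class CustomerStoreTest < Minitest::Test
  def setup
    @store = CustomerStore.new
    @store.add("Bob")   # the test creates the data it depends on
  end

  def test_find_returns_only_matching_customers
    @store.add("Alice")
    results = @store.find("Bob")
    # Assert on the criteria, not on a count someone else can break
    results.each { |c| assert_equal "Bob", c }
  end
end
```

Because setup builds the data fresh for every run, no outside change to shared test data can make this test fail spuriously.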
Performance Test
The purpose of performance testing is to determine speed and effectiveness. Various quantitative and qualitative attributes may be used. Examples of quantitative attributes include response time and number of MIPS (millions of instructions per second). Examples of qualitative attributes include scalability, interoperability, and reliability.
Performance testing can be used to determine the speed of a specific aspect of a system with a specific workload. This allows for the identification of poor performance areas and for the establishment of acceptable thresholds required to maintain acceptable response time.
There are several types of performance testing. Understanding the various types helps to minimize cost, reduce risk, and to know when it is appropriate to apply which type of test on a project.
Key Types of Performance Testing & Their Advantages
- Performance test - Determines speed, stability, and/or scalability.
- Focuses on determining user satisfaction with regards to performance.
- Identifies differences between the expectations and reality of existing performance.
- Supports optimization and capacity planning.
- Load test - Verifies application behavior under both normal and peak load conditions.
- The hardware environment is evaluated for adequacy.
- Detects concurrency issues.
- Detects functional errors that occur under load.
- Supports determination of maximum simultaneous users prior to performance being compromised.
- Supports determination of maximum load before limits of resource utilization are exceeded.
- Stress test - Determines behavior when conditions exceed normal or peak load conditions.
- Identifies if over-stressing the system can corrupt data.
- Supports establishment of application-monitoring triggers that can warn of impending failures.
- Determines side effects of failures related to hardware or applications.
- Identifies the kinds of failures to plan for.
- Capacity test - Determines the amount of users and/or transactions that can be supported while continuing to meet performance goals.
- Provides capacity data that can be used to validate or enhance models.
- Determines current usage and capacity of the system.
- Provides data on capacity and usage trends of the system.
Best Practices
- Test the code with the same granularity as used for unit tests.
- Do not perform lots of assertions on the test results.
- Test enough to measure statistically significant performance differences.
- Ideal performance tests should run relatively fast.
- Test setup should be performed independent of the actual test method.
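The last two practices, keeping setup out of the timed region and measuring only the operation under test, can be sketched with Ruby's standard Benchmark library; the sorting workload is purely illustrative:

```ruby
require "benchmark"

# Setup: build the workload outside the timed region
data = Array.new(100_000) { rand(1_000) }

# The measurement covers only the operation under test
elapsed = Benchmark.realtime { data.sort }

puts format("sorting 100,000 integers took %.4f s", elapsed)
```

Repeating the timed block several times and comparing the distribution of results is what makes a performance difference statistically meaningful rather than noise.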
References
[1] http://codebetter.com/blogs/jeremy.miller/archive/2005/07/20/129552.aspx
[2] http://www.javaworld.com/javaworld/jw-03-2009/jw-03-good-unit-tests-1.html
[3] http://en.wikipedia.org/wiki/Unit_testing
[4] http://wiki.developerforce.com/index.php/How_to_Write_Good_Unit_Tests
[5] http://msdn.microsoft.com/en-us/magazine/cc163665.aspx
[6] http://www.tejasconsulting.com/open-testware/feature/unit-test-tool-survey.html
[7] http://www.softwaresummit.com/2004/speakers/SteltingTestingJ2EE.pdf
[8] http://users.csc.calpoly.edu/~cstaley/General/TestingHowTo.htm
[9] http://www.aptest.com/resources.html
[10] http://performance-testing.org/performance-testing-definitions
[11] http://msdn.microsoft.com/en-us/library/bb924357.aspx
[12] http://www.mantidproject.org/Writing_Performance_Tests#Best_Practice_Advice
[13] http://softwaretestingfundamentals.com/integration-testing/
[14] http://msdn.microsoft.com/en-us/library/vstudio/hh323698(v=vs.100).aspx#erg
[15] http://www.itdivisioninc.com/IntegrationTesting.aspx
[16] http://wiki.expertiza.ncsu.edu/index.php/CSC/ECE_517_Fall_2010/ch1_2e_RI