CSC/ECE 517 Spring 2013/ch1 1a MK



== Unit Test ==
The purpose of the unit test process is to ensure that each line of code in a module functions correctly and meets functional and technical requirements. In Test Driven Development, test code is developed before or alongside application code. Automated unit tests execute application code after it is built and report on the results. Furthermore, unit tests are a critical tool for any developer: they allow developers to quickly and easily test code in a repeatable and maintainable way. [http://www.javaworld.com/javaworld/jw-03-2009/jw-03-good-unit-tests-1.html (2)]
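As a minimal sketch of the idea (plain Ruby, with a hypothetical `add` method standing in for a unit under test), a unit test simply exercises the unit and checks its result, and the same checks can be rerun unchanged after every build:

```ruby
# Hypothetical unit under test.
def add(a, b)
  a + b
end

# A rerunnable unit test: execute it after every change to the unit.
raise "add(2, 3) should be 5" unless add(2, 3) == 5
raise "add(-1, 1) should be 0" unless add(-1, 1) == 0
puts "all unit tests passed"
```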


=== Advantages and Limitations ===
* Verify 'corner' cases that may not be tested within the System Test phase.
* Demonstrate a level of quality to the client.
* Ensure that no other developer has undermined the quality of the code.


==== Limitation ====
Software testing is a combinatorial problem. For example, every boolean decision statement requires at least two tests: one with an outcome of "true" and one with an outcome of "false". As a result, for every line of code written, programmers often need 3 to 5 lines of test code.[http://en.wikipedia.org/wiki/Unit_testing (3)] This takes time, and the investment may not be worth the effort. There are also many problems that cannot easily be tested at all, for example those that are nondeterministic or involve multiple threads. In addition, test code is at least as likely to contain bugs as the code it is testing.
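To illustrate (with a hypothetical `eligible?` method), even a single boolean decision forces two test cases, one per outcome:

```ruby
# One boolean decision statement: age >= 18.
def eligible?(age)
  age >= 18
end

# Two tests are needed just to cover this single decision.
raise "true branch failed" unless eligible?(21) == true    # outcome "true"
raise "false branch failed" unless eligible?(16) == false  # outcome "false"
puts "both outcomes tested"
```

Here a three-line method already needs several lines of test code, roughly in line with the ratio cited above.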


=== Best Practices ===
[http://wiki.developerforce.com/index.php/How_to_Write_Good_Unit_Tests (4)]
[http://msdn.microsoft.com/en-us/magazine/cc163665.aspx (5)]
==== Testing Trivial Code ====
Unit tests should test those segments of code that are likely to have defects when first developed, or are likely to have defects introduced when changed. Like all software development activities, a cost-benefit analysis can be applied to writing unit tests. For normal enterprise business software, it is not worthwhile to test trivial code.


==== Using hardcoded values when checking test results ====
Some tests use a hardcoded value when checking the results of an operation. This value is often separately hardcoded in the application code being tested. When the value is changed in the application code, the test is guaranteed to fail. [http://wiki.developerforce.com/index.php/How_to_Write_Good_Unit_Tests (4)]


  class CustomerWebController {
    public static final String NEW_CUSTOMER_PAGE = "newCustomer.jsp";
    public String doOperationReturningNextPage(UserInput input) {
      // some random logic...
      return NEW_CUSTOMER_PAGE;
    }
  }
  
  class CustomerWebControllerTest extends TestSuite {
    // (test body elided in the source; it hardcodes the expected value
    // "newCustomer.jsp" separately from the constant in the application code)
  }


==== Being too dependent on specific test data ====
There is some debate over whether unit tests should involve the database. In practice, this is quite common and serves a useful purpose. However, problems often occur because such tests are overly dependent on specific data in the database. As new tests are written and new test data is added, existing tests can fail even when they are unrelated to the change.


  class CustomerDataAccess {
    // (body elided in the source)
  }


The example above is based on real code. The problem is that the test expects there to be only two customers with a first name of 'Bob', which must have been the case when the test was first written and executed. At some later date, a developer working on an unrelated piece of code could add another customer named 'Bob', and suddenly this test would fail. The general fix for situations like this is to minimize the number of assumptions made about the test data. This reduces the coupling between the test and the data, which makes either easier to change independently of the other.


To improve this particular example, we simply need to change the test to check that each result matches the criteria we specified.


  class CustomerDataAccessTest extends TestSuite {
    // (body elided in the source; the improved test iterates over the query
    // results and asserts that each returned customer's first name is 'Bob',
    // rather than asserting a fixed result count)
  }


=== Code Coverage ===
Code coverage is an analysis method that determines which parts of the software have been executed (covered) by the test suite, and which parts have not been executed and may therefore require additional attention.
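As a small illustration (the `classify` method is hypothetical; SimpleCov is one coverage tool for Ruby), a suite that exercises only one branch leaves the other uncovered:

```ruby
def classify(n)
  if n >= 0
    "non-negative"  # executed (covered) by the test below
  else
    "negative"      # never executed by this suite; a coverage report flags it
  end
end

# This one-test suite covers only the first branch.
raise unless classify(5) == "non-negative"
puts "suite passed, but one branch was never executed"
```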


http://en.wikipedia.org/wiki/Code_coverage
*Keep configuration settings separate from the unit tests.
*Use clear and consistent naming for all unit tests.
[http://blog.stevensanderson.com/2009/08/24/writing-great-unit-tests-best-and-worst-practises/ (17)]


=== Effective Use of Assertions ===
*Total unit tests should cover the functional requirements of the code at a minimum.
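One principle of effective assertions can be sketched as follows (the `total` method is a hypothetical unit under test): an assertion should state what was expected, so a failure report explains itself.

```ruby
# Hypothetical unit under test.
def total(prices)
  prices.sum
end

prices = [3, 4]
actual = total(prices)
# A descriptive assertion message makes any failure self-explanatory.
raise "expected total(#{prices.inspect}) to be 7, got #{actual}" unless actual == 7
puts "assertion passed"
```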


=== Brief Description of Some Available Unit Testing Tools ===
[http://www.tejasconsulting.com/open-testware/feature/unit-test-tool-survey.html (6)][http://www.aptest.com/resources.html (9)]
*GrandTestAuto [http://grandtestauto.org/ (19)]
**Enables completely automated testing of Java software.
**More advanced than JUnit.
**Supports distribution of tests across a network.
**Integrates with additional tools.
*JUnit [http://www.softwaresummit.com/2004/speakers/SteltingTestingJ2EE.pdf (7)][http://www.javapractices.com/topic/TopicAction.do?Id=33 (20)]
**Enables automated testing of Java software.
**Simple to use.
**Makes it easy to identify test failures.
**Open source tool.
*NUnit [http://nunit.org/ (21)]
**Provides a unit testing framework for .NET software.
**Takes advantage of existing .NET language features.
 
=== Listing of Other Unit Testing Tools Available ===
*AUnit (Ada)
*Check (C)
*CUnit (C)
*CUT (C, C++, Objective-C; works in embedded environments)
*MinUnit (C; designed for embedded systems)
*CppUnit (C++)
*PalmUnit (C++; for PalmOS applications)
*QtUnit (C++; for applications using the Qt library)
*unit++ (C++)
*HtmlUnit (Java API; tests web site content)
*HttpUnit (Java API; tests web site content)
*XMLUnit (Java API; tests XML content)
*XSLTunit (tests XSLT content)
*EasyMock (Java; mock objects)
*GrandTestAuto (Java)
*JTestCase (Java; JUnit add-on)
*JUnit (Java; the parent of most xUnit frameworks)
*JUnitEE (Java; JUnit add-on)
*JUnitX (Java; JUnit add-on)
*Mock Creator (Java; mock objects)
*Mock Objects (Java; mock objects)
*MockMaker (Java; mock objects)
*Mockry (Java; mock objects)
*jsAsserUnit (JavaScript)
*JsUnit (JavaScript; author Edward Hieatt; note two tools with the same name)
*JsUnit (JavaScript; author Jörg Schaible; note two tools with the same name)
*TagUnit (tests custom tags within Java Server Pages)
*LingoUnit (Macromedia Director)
*csUnit (.NET)
*dotunit (.NET)
*NUnit (.NET)
*ObjcUnit (Objective-C)
*OCUnit (Objective-C)
*Test::MockObject (Perl; mock objects)
*PerlUnit (Perl)
*phpAsserUnit (PHP)
*PhpUnit (PHP)
*PBUnit (PowerBuilder)
*PyUnit (Python)
*Ruby/Mock (Ruby; mock objects)
*Test::Unit (Ruby)
*OTF (Smalltalk)
*SUnit (Smalltalk; the first xUnit framework, and the parent of JUnit)
*vbUnit3 (Visual Basic; also has a commercial version)


== Functional Test ==
The purpose of functional testing is to measure the behavioral quality of a software application. Functional tests verify that the system responds appropriately from the user perspective and functions according to the design requirements used to specify the application. A functional test must determine whether each component of a software application performs in accordance with the specifications, responds correctly to all conditions that may be presented by incoming events, processes incoming events correctly from one business event to the next, and produces appropriate outcomes from incoming events.


=== Advantages ===


=== Best Practices ===
Functional tests measure an application's code completion, because they make sure that all features and functionalities work as expected from an end-user perspective. It is therefore important to design functional tests that will expose any vulnerabilities and build confidence in an application. Before writing a test, it is very important to understand how the software is supposed to behave. The functional specification document contains all the details about the behavior of the application, so the first thing to do before writing any test cases is to write the functional specification document. Once the specification has been read, it is important to organize tests to make sure they cover every feature and functionality of the application. This ensures that all components are thoroughly tested and makes it possible to identify any gaps in testing. Test cases should be written before the functionality is developed; this leads to the identification of preconditions and establishes the expected behavior. Additionally, this helps in automating the functional tests and enables another person to run a test regardless of their knowledge of the expected behavior. The test cases should also cover very rare situations. Software developers tend to develop applications strictly according to the specification and may not code the program to handle rare cases appropriately; a majority of bugs are found when applications are used with rare input conditions. Therefore, it is important to make sure that functional tests are created to test boundary conditions, error cases, and rare input cases.


=== Ensuring Effectiveness and Efficiency Through the Functional Testing Lifecycle ===
**Gauge the test progress and quality of the testing.
**Make continuous improvements as needed.
[http://www.infosys.com/engineering-services/white-papers/Documents/functional-testing.pdf (22)]


=== Writing functional tests ===
An application may have several features, and each of those features may have several functionalities of its own. With a vast number of overall functionalities, it is easy for one to be overlooked. Therefore, it is a good practice to organize test cases by feature and functionality; this helps ensure that no feature or functionality is missed.

The following is a sample organization structure of functional test cases:


  <Feature 1>
  <Feature 2/>


Test cases can be executed by anybody if they are written in a meaningful way that anyone can understand. A meaningful test case should have the following components. First, it should have a purpose statement describing what it is intended to test. Second, it should have a setup component that is run prior to executing the test. Third, it should have the steps to execute the test. Fourth, it should have the expected behavior to compare against in order to determine whether the test passed or failed. Finally, it should have a cleanup procedure to make sure that the system reverts to its original state.

Here is a sample of a test case:


  <Test 1>
  <Test 1/>


Writing functional test cases for 100% coverage is a seemingly impossible task; a tester could keep writing functional tests forever. Even when many functional tests are available, situations arise in which functional testing of the major functionalities and features must be completed in a short amount of time. Such situations can be handled by assigning a priority to each functional test: higher-priority tests should cover key features, and lower-priority tests should cover less important features.


=== Functionality test tools ===
Functional test suites can become very large, which makes them difficult to maintain and time-consuming to execute manually. Therefore, it is a good practice to automate the test cases. Automation reduces the time between a problem being introduced and the problem being addressed. [http://dmcnulla.wordpress.com/2012/01/28/good-practices-for-automating-functional-tests/ (1)] There are several tools available that automate functional testing; some of them are listed below.


==== Quick Test Professional ====
The Quick Test Professional tool automates functional tests through the user interface. It detects objects in the user interface and performs the desired operations by simulating mouse clicks or keyboard events. The tool can also be used to automate functional tests on graphical and non-graphical user interfaces. It allows collaboration by storing similar object definitions in a single repository manager, which can be shared among testers. The same repository can maintain both automated and manual test cases.


==== JFunc ====
JFunc is an open-source functional testing tool that helps to automate functional test cases. JFunc is an extension of the JUnit test framework. It eases the construction of manual functional test suites; it supports multiple test failures, so a functional test does not exit after the first failure; its verbose assertions give more detail about test failures and error messages; and new arguments can be passed into a test on each run so that it does not always use the same input values.


==== Watir ====
Watir is a web application testing tool for the Ruby language. Watir tests web applications in many web browsers on different platforms. It is distinguished from other web application functional testing tools by executing tests at the web browser layer: it drives a web browser and interacts with the objects on a web page, much as a human performing manual functional testing would. Here is a sample test case that validates a search operation:


  #------------------------------------------------------------------------------------------------------------------#
  # Illustrative sketch (the original sample was not preserved in the source):
  # drive a browser to a search page, submit a query, and verify the results.
  # The URL, element names, and search term below are assumptions.
  #------------------------------------------------------------------------------------------------------------------#
  require 'watir'
  browser = Watir::Browser.new
  browser.goto 'http://www.example.com/search'
  browser.text_field(:name => 'q').set 'unit testing'
  browser.button(:name => 'submit').click
  # Validate the search operation: the results page should mention the query.
  raise 'search validation failed' unless browser.text.include? 'unit testing'
  browser.close


=== Additional Functional Testing Tools ===
*Arbiter [http://arbiter.sourceforge.net/ (24)]
**Document-based acceptance testing.
**Driven by requirements documents in Microsoft Word or RTF file formats.
**Requirements are used to establish a glossary and test suite.
*Blerby Test Runner [https://github.com/tmpvar/Blerby-Test-Runner (25)]
**Ajax test runner for PHP software.
**Allows for instant feedback while performing on-the-fly code refactoring.
**Tracks test dependencies.
**Automatically re-runs affected tests when corresponding code changes.
*Concordion [http://www.concordion.org/ (26)]
**Open source framework for testing Java software.
**Allows plain-English descriptions of requirements to be converted into automated tests.
**Specifications are linked to the software system itself, which prevents tests from becoming outdated.
**Notifications inform when a change in system behavior causes an associated test to fail.
*Eclipse Jubula [http://www.eclipse.org/jubula/ (27)]
**Provides automated UI functional testing for HTML and Java applications.
**Aimed at creating tests from the user perspective.
**Limited coding effort required.
*Robotium [http://code.google.com/p/robotium/ (28)]
**Test framework for writing black-box and white-box tests for Android applications.
**Requires the test case suite to be installed on the same device or simulator as the application.
**Accesses the application to execute test scenarios in a real environment.
[http://www.opensourcetesting.org/functional.php (23)]


== Integration Test ==
The purpose of integration testing is to test the combination of individual components working together.  This type of testing can expose faults that result from the interaction between the integrated components.  [http://softwaretestingfundamentals.com/integration-testing/ (13)]


=== Various Approaches to Integration Testing ===
**Typically results in a more robust system.
*Sandwich/Hybrid
**Useful for larger projects with several subprojects.
[http://www.itdivisioninc.com/IntegrationTesting.aspx (15)]


=== Best Practices ===
*Use multiple asserts for each test.  Because integration tests are time-consuming, it is recommended that tests be consolidated.  In this type of testing, it is considered better to ignore the one-assert-per-test rule.
*When seeking to validate the same functionality, if there is a choice between creating a unit test or an integration test, always choose the unit test.  Unit tests run faster and are easier to set up.
[http://msdn.microsoft.com/en-us/library/vstudio/hh323698(v=vs.100).aspx#erg (14)]
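The consolidation advice can be sketched as follows (both classes below are hypothetical stand-ins for real integrated components): one comparatively expensive setup of the wired-together components is shared by several assertions.

```ruby
# Hypothetical components under integration test.
class InMemoryCustomerStore
  def initialize
    @rows = {}
  end

  def save(id, name)
    @rows[id] = name
  end

  def find(id)
    @rows[id]
  end
end

class CustomerService
  def initialize(store)
    @store = store
  end

  def register(id, name)
    @store.save(id, name)
  end

  def lookup(id)
    @store.find(id)
  end
end

# One (relatively expensive) setup of the integrated components...
service = CustomerService.new(InMemoryCustomerStore.new)
service.register(1, "Alice")
service.register(2, "Bob")

# ...then multiple asserts in the same test, contrary to the usual
# one-assert-per-test guidance for unit tests.
raise unless service.lookup(1) == "Alice"
raise unless service.lookup(2) == "Bob"
raise unless service.lookup(3).nil?
puts "integration test passed"
```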


=== Available Integration Testing Tools ===
*eggPlant [http://www.testplant.com/products/eggplant/ (30)]
**Focused on testing the user experience.  [http://www.testplant.com/blog/2011/06/09/integration-testing/ (29)]
**Black-box test automation tool.
**Low overhead.
**Non-invasive: not required to run on the same system under test.
**Image capture and search techniques prevent most UI changes from affecting existing tests.
*Ruby Capybara [https://www.ruby-toolbox.com/projects/capybara (31)]
**Designed for Ruby on Rails applications.
**Tests rack-based web applications.
**Simulates how a user interacts with a website.
*Selenium [http://seleniumhq.org/ (33)]
**Provides automated integration testing for Java web applications. [http://www.developer.com/java/web/article.php/3872691/Selenium-Automated-Integration-Testing-for-Java-Web-Apps.htm (32)]
**Provides automation-aided exploratory testing.
**Distributes testing across many environments.


== Performance Test ==
The purpose of performance testing is to determine speed and effectiveness.  Various quantitative and qualitative attributes may be used.  Examples of quantitative attributes include response time and the number of MIPS (millions of instructions per second).  Examples of qualitative attributes include scalability, interoperability, and reliability.  [http://performance-testing.org/performance-testing-definitions (10)]


Performance testing can be used to determine the speed of a specific aspect of a system with a specific workload.  This allows for the identification of poor performance areas and for the establishment of acceptable thresholds required to maintain acceptable response time.   
Performance testing can be used to determine the speed of a specific aspect of a system with a specific workload.  This allows for the identification of poor performance areas and for the establishment of acceptable thresholds required to maintain acceptable response time.   
Line 378: Line 395:
There are several types of performance testing.  Understanding the various types helps to minimize cost, reduce risk, and to know when it is appropriate to apply which type of test on a project.   
There are several types of performance testing.  Understanding the various types helps to minimize cost, reduce risk, and to know when it is appropriate to apply which type of test on a project.   


=== Key Types of Performance Testing & Their Advantages ===
=== Key Types of Performance Testing & Their Advantages ===  
*Performance test - Determines speed, stability, and/or scalability.
*Performance test - Determines speed, stability, and/or scalability.
**Focuses on determining user satisfaction with regards to performance.  
**Focuses on determining user satisfaction with regards to performance.  
Line 398: Line 415:
**Determines current usage and capacity of the system.  
**Determines current usage and capacity of the system.  
**Provides data on capacity and usage trends of the system.
**Provides data on capacity and usage trends of the system.
[http://msdn.microsoft.com/en-us/library/bb924357.aspx (11)]


=== Best Practices ===
=== Best Practices ===
Line 405: Line 424:
*Ideal performance tests should run relatively fast.  
*Ideal performance tests should run relatively fast.  
*Test setup should be performed independent of the actual test method.
*Test setup should be performed independent of the actual test method.
[http://www.mantidproject.org/Writing_Performance_Tests#Best_Practice_Advice (12)]


=== Available Performance Testing Tools ===
=== Available Performance Testing Tools ===
Line 435: Line 456:
**Enables identification of bottlenecks.   
**Enables identification of bottlenecks.   
**Diagnostic tools may be applied to resolve performance issues.
**Diagnostic tools may be applied to resolve performance issues.
[http://www.toolsjournal.com/tools-world/item/156-top-performance-testing-tools (34)]


== References ==  
== References ==  
[1] http://codebetter.com/blogs/jeremy.miller/archive/2005/07/20/129552.aspx
[1] http://dmcnulla.wordpress.com/2012/01/28/good-practices-for-automating-functional-tests/


[2] http://www.javaworld.com/javaworld/jw-03-2009/jw-03-good-unit-tests-1.html
[2] http://www.javaworld.com/javaworld/jw-03-2009/jw-03-good-unit-tests-1.html

Latest revision as of 03:45, 15 February 2013

Writing Meaningful Test Cases

Introduction

When testing an application, there are several testing methods that can be followed. Ideally, an application will be tested using all appropriate methods of testing. This document details four types of testing. Unit testing deals with tests at an object or class level. Functional testing verifies the behavior of the application. Integration testing validates object interaction. Performance tests focus on the responsiveness and scalability of an application. A commonality of all these testing methods is that the test results are only as valuable as the tests are meaningful. Even with nearly 100% code coverage, many tests may contain assertions that return inaccurate results. Understanding these testing methods is key to writing meaningful test cases that target the most valuable system components while following industry best practices.

Unit Test

The purpose of the unit test process is to ensure each line of code for a module functions correctly to meet functional and technical requirements. In Test Driven Development, test code is developed before or alongside application code. Automated unit tests execute application code after it is built and provide reports on test results. Furthermore, unit tests are a critical tool for any developer. They allow developers to quickly and easily test code in a repeatable and maintainable way. (2)

Advantages and Limitations

Advantages

Unit testing provides the following advantages:

  • Catch bugs at time of implementation by testing as you develop. Small units of work are easier to code and debug. If you write all of your code and start testing when coding is complete, then testing will be much more difficult.
  • Easily test changes in code. Because the unit test is designed to test the business case, if the technical implementation changes, the unit test can be rerun to ensure that the technical redesign has not changed the program's result.
  • Prove to your supervisor and yourself that the code works after refactoring or adding functionality.
  • Verify 'corner' cases that may not be tested within the System Test phase.
  • Demonstrate a level of quality to the client.
  • Ensure that no other developer has undermined the quality of the code.

Limitation

Software testing is a combinatorial problem. For example, every boolean decision statement requires at least two tests: one with an outcome of "true" and one with an outcome of "false". As a result, for every line of code written, programmers often need 3 to 5 lines of test code.(3) This takes time, and the investment may not be worth the effort. There are also many problems that cannot easily be tested at all, such as those that are nondeterministic or involve multiple threads. In addition, the code written for a unit test is likely to be at least as buggy as the code it is testing.
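To make the combinatorial point concrete, here is a minimal Ruby sketch (the eligible_for_discount? method is hypothetical, used only for illustration): a single boolean decision already demands one test per outcome, so test code grows quickly relative to the code under test.

```ruby
# A hypothetical method containing one boolean decision statement.
def eligible_for_discount?(age)
  age >= 65
end

# Each outcome of the decision needs its own test: two tests for one branch.
raise "true branch failed"  unless eligible_for_discount?(70) == true
raise "false branch failed" unless eligible_for_discount?(30) == false
```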

Best Practices

(4) (5)

Testing Trivial Code

Unit tests should test those segments of code that are likely to have defects when first developed, or are likely to have defects introduced when changed. As with all software development activities, a cost-benefit analysis can be applied to writing unit tests. For normal enterprise business software, it is not worthwhile to test trivial code.

Typical examples of trivial code in Java include simple getter and setter methods for properties and simple constructors.

class User {
  private String name;

  public String getName() {
    return name;
  }

  public void setName(String newName) {
    name = newName;
  }
}

class UserTest extends TestCase {
  public void testNameProperty() {
    User user = new User();
    assertNull(user.getName());

    String testName = "test";
    user.setName(testName);
    assertEquals(testName, user.getName());
  }
}

Using hardcoded values when checking test results

Some tests have a hardcoded value when checking the results of some operation. This value is often separately hardcoded in the application code being tested. When the value is changed in the application code, the test is guaranteed to fail. (4)

class CustomerWebController {
  public String doOperationReturningNextPage(UserInput input) {
    // some random logic...
    return "newCustomer.jsp";
  }
}

class CustomerWebControllerTest extends TestCase {
  public void testDoOperation() {
    CustomerWebController controller = new CustomerWebController();
    UserInput input = getInputForNewCustomer();
    String result = controller.doOperationReturningNextPage(input);
    assertEquals("newCustomer.jsp", result);
  }
}

The DRY Principle (Don't Repeat Yourself)

The fix for this is simply the application of the DRY principle: Don't Repeat Yourself (from the book The Pragmatic Programmer: From Journeyman to Master by Andrew Hunt and David Thomas). When you go to use a magic value more than once, define it as a constant (or method) and then refer to that constant (or method). To improve the example, we define a constant for the "newCustomer.jsp" value.


class CustomerWebController {
  public static final String NEW_CUSTOMER_PAGE = "newCustomer.jsp";

  public String doOperationReturningNextPage(UserInput input) {
    // some random logic...
    return NEW_CUSTOMER_PAGE;
  }
}


class CustomerWebControllerTest extends TestCase {
  public void testDoOperation() {
    CustomerWebController controller = new CustomerWebController();
    UserInput input = getInputForNewCustomer();
    String result = controller.doOperationReturningNextPage(input);
    assertEquals(CustomerWebController.NEW_CUSTOMER_PAGE, result);
  }
}

Being too dependent on specific test data

There is some debate on whether or not unit tests should involve the database. In practice, this is actually quite common and serves a useful purpose. However, problems often occur because such tests are overly dependent on specific data in the database. As new tests are written and new test data is added, existing tests can fail regardless of whether they are related.

class CustomerDataAccess {
  public List findCustomers(CustomerCriteria criteria) {
    // Logic to query database based on criteria
    return customersFound;
  }
}

class CustomerDataAccessTest extends TestCase {
  public void testFindCustomers() {
    CustomerDataAccess customerDataAccess = new CustomerDataAccess();
    CustomerCriteria criteria = new CustomerCriteria();
    String firstNameToFind = "Bob";
    criteria.firstNameEquals(firstNameToFind);
    List results = customerDataAccess.findCustomers(criteria);
    assertEquals(2, results.size());
  }
}

The preceding example code has a problem: the test expects there to be exactly two customers with a first name of 'Bob', which must have been the case when the test was first written and executed. However, a developer unaware of the existing test could add another customer named 'Bob' at any point, suddenly causing this test to fail. The general fix for situations such as this is to minimize the number of assumptions made about the test data. This reduces the coupling between the test and the data, which makes either easier to change independently of the other.

To improve this particular example, we simply need to change the test to check that each result matches the criteria we specified.

class CustomerDataAccessTest extends TestCase {
  public void testFindCustomers() {
    CustomerDataAccess customerDataAccess = new CustomerDataAccess();
    CustomerCriteria criteria = new CustomerCriteria();
    String firstNameToFind = "Bob";
    criteria.firstNameEquals(firstNameToFind);
    List results = customerDataAccess.findCustomers(criteria);
    for (Customer customer : results) {
      assertEquals(firstNameToFind, customer.getFirstName());
    }
  }
}

Code Coverage

This is an analysis method that determines which parts of the software have been executed (covered) by the test case suite. It also determines which parts have not been executed and may require additional attention.

http://en.wikipedia.org/wiki/Code_coverage
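As a small Ruby illustration (the classify method below is hypothetical), branch coverage is only complete once some test exercises every path through the code:

```ruby
# A method with three branches; coverage analysis reports which are executed.
def classify(n)
  if n.negative?
    :negative
  elsif n.zero?
    :zero
  else
    :positive
  end
end

raise unless classify(-1) == :negative  # alone, this covers only one branch
raise unless classify(0)  == :zero      # each added test raises coverage
raise unless classify(5)  == :positive  # all three branches now executed
```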

Some Additional Best Practices

  • Keep each test independent of others.
    • Do not make unnecessary assumptions.
      • It is counterproductive to assert anything that is already asserted by another test.
      • Avoids increasing the frequency of failures related to the same root cause.
      • Only apply asserts that apply to the functionality being tested.
      • Follow a rule of one assertion per test.
    • Test one set of code at a time.
    • Avoid unnecessary preconditions.
      • Only run preliminary code that is related to the test to be run.
      • Use common preliminary code only when the associated tests actually require it.
  • Keep configuration settings separate from the unit tests.
  • Use clear and consistent naming for all unit tests.

(17)
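The practices above can be sketched as a small Ruby test class using the bundled Minitest framework (ShoppingCart and its methods are hypothetical, used only for illustration): each test builds its own fixture in setup, stays independent of the others, and makes one logical assertion.

```ruby
require "minitest/autorun"

# Hypothetical class under test.
class ShoppingCart
  attr_reader :items
  def initialize; @items = []; end
  def add(price); @items << price; end
  def total; @items.sum; end
end

class ShoppingCartTest < Minitest::Test
  def setup                  # shared preliminary code, kept out of the tests
    @cart = ShoppingCart.new
  end

  def test_new_cart_is_empty          # one logical assertion per test
    assert_empty @cart.items
  end

  def test_total_sums_item_prices     # independent of the test above
    @cart.add(5)
    @cart.add(7)
    assert_equal 12, @cart.total
  end
end
```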

Effective Use of Assertions

  • Make one logical assertion per test.
  • Each assert should be made in relation to the functionality being tested.
  • Avoid making any assertions that are already covered by another existing assertion.

Writing Effective Unit Tests

  • Scope is crucial.
    • A narrow scope may result in trivial test results that provide no real value.
    • A broad scope may test so much that it becomes difficult to determine the root cause of failures.
  • Group tests according to a major feature/functionality.
    • Include enough tests to cover that specific feature/functionality.
  • Define unit tests at the method level.
    • Some methods will involve groups of objects. In this case, testing should isolate the groups of objects for testing.
      • Identifies segments of related code.
  • Read the code and check if it catches an error or throws an error.
    • Easy identification of a method with potential to break.
    • Unit tests should cover error scenarios.
  • Total unit tests should cover the functional requirements of the code at a minimum.

(8)(18)
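For instance, error scenarios can be covered alongside the happy path; a minimal Ruby sketch (divide is a hypothetical method used for illustration):

```ruby
def divide(a, b)
  raise ArgumentError, "division by zero" if b.zero?
  a / b
end

# Happy-path test.
raise "happy path failed" unless divide(10, 2) == 5

# Error-scenario test: the method must fail loudly on invalid input.
begin
  divide(1, 0)
  raise "expected ArgumentError was not raised"
rescue ArgumentError
  # expected: the error scenario is covered
end
```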

Brief Description of Some Available Unit Testing Tools

(6)(9)

  • GrandTestAuto (19)
    • Enables completely automated testing of Java software.
    • More advanced than JUnit.
    • Simple to use.
    • Supports distribution of tests across a network.
    • Integrates with additional tools.
  • JUnit (7)(20)
    • Enables automated testing of Java software.
    • Simple to use.
    • Makes it easy to identify test failures.
    • Open source tool.
  • NUnit (21)
    • Provides a unit testing framework for .NET software.
    • Takes advantage of existing .NET language features.

Functional Test

The purpose of functional testing is to measure the behavioral quality of a software application. Functional tests verify that the system responds appropriately from the user perspective and functions according to the design requirements used to specify the application. The functional test must determine whether each component of a software application performs in accordance with the specifications, responds correctly to all conditions that may be presented by incoming events, processes events correctly from one business event to the next, and produces an appropriate outcome from incoming events.

Advantages

Functional testing has several benefits for developing a robust software application:

  • It verifies that an application works as per specifications across multiple platforms, browsers, and technologies.
  • It makes sure that a feature is complete from a user's point of view.
  • A tester needs no knowledge of the implementation, including specific programming languages, in order to execute the test.
  • The tester and programmer are independent of each other.
  • It helps to expose any ambiguities or inconsistencies in the specification.
  • Test cases can be designed as soon as the specification is complete.

Best Practices

Functional tests are the measurement of an application's code completion. They make sure that all the features and functionalities work as expected from an end-user perspective. Therefore, it is important to design functional tests that will expose any vulnerabilities and build confidence in an application.

Before writing a test, it is very important to understand how the software is supposed to behave. Functional specification documents contain all the details about the behavior of the application, so the first thing to do before writing any test cases is to write the functional specification document. Once the specification has been read, it is important to organize tests so that they cover every feature and functionality of the application. This makes sure that all the components are thoroughly tested and helps identify any gaps in testing.

Test cases should be written prior to developing functionality. This leads to the identification of preconditions and establishes expected behavior. Additionally, it helps automate the functional tests and enables another person to run a test regardless of their knowledge of the expected behavior.

Test cases should also cover very rare situations. Software developers tend to develop applications strictly according to the specification and may not code the program to handle rare cases. A majority of bugs are found when applications are used with rare input conditions. Therefore, it is important to make sure that functional tests cover boundary conditions, error cases, and rare input cases.
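As an illustration of boundary-condition testing, consider a hypothetical validation rule (valid_username? below is invented for this sketch: usernames must be 3 to 20 characters). Tests probe exactly at and just beyond each boundary, where specification-driven code most often breaks:

```ruby
# Hypothetical rule: usernames must be between 3 and 20 characters long.
def valid_username?(name)
  name.length.between?(3, 20)
end

raise unless valid_username?("ab")     == false  # just below the lower bound
raise unless valid_username?("abc")    == true   # exactly at the lower bound
raise unless valid_username?("a" * 20) == true   # exactly at the upper bound
raise unless valid_username?("a" * 21) == false  # just above the upper bound
```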

Ensuring Effectiveness and Efficiency Through the Functional Testing Lifecycle

  • Requirements Gathering
    • Define clear and complete requirements that can be tested.
  • Test Planning
    • Establish guidelines and standards for creating tests.
    • Identify the required hardware and software for the testing environment.
  • Test Strategizing
    • Utilize resources to achieve optimum test coverage.
  • Test Execution
    • Define an efficient test execution flow.
  • Collect Testing Metrics
    • Gauge the test progress and quality of the testing.
    • Make continuous improvements as needed.

(22)

Writing functional tests

An application may have several features and each of those features have several functionalities of their own. With a vast number of overall functionalities, there is a possibility that a functionality may be overlooked. Therefore, it is a good practice to organize test cases by their features and functionalities. This can help ensure that no feature or functionality is missed.

The following is a sample organization structure of functional test cases:

<Feature 1>
  <Function 1>
    <Test 1>
    </Test 1>
    <Test 2>
    </Test 2>
    …
  </Function 1>
</Feature 1>
<Feature 2>
  <Function 1>
    <Test 1>
    </Test 1>
    …
  </Function 1>
  …
</Feature 2>

Test cases can be executed by anybody if they are written in a way that is meaningful to anyone reading them. A meaningful test case should have specific components. First, it should have a purpose statement describing what it is intended to test. Second, it should have a setup component that is run prior to executing the test. Third, it should have steps to execute the test. Fourth, it should have an expected behavior to compare against when determining whether the test passed or failed. Finally, it should have a cleanup procedure to make sure that the system is reverted to its original state.

Here is a sample of a test case:

<Test 1>
  <Purpose>
    “Purpose of the test case”
  </Purpose>
  <Setup>
    “Steps to prepare the system prior to executing the test case”
  </Setup>
  <Execution>
    “Steps required to execute the test”
  </Execution>
  <Expected Behavior>
    “Expected behavior of the test case”
  </Expected Behavior>
  <Cleanup>
    “Steps required to clean up the environment and revert the system”
  </Cleanup>
</Test 1>

Writing functional test cases for 100% coverage can seem an impossible task that would require a tester to keep writing functional tests forever. Even when there are several functionalities requiring testing, there can be situations in which the functional testing of major functionalities and features must be completed in a short amount of time. Such situations can be handled by assigning priorities to each functional test. Higher priority tests should cover key features; lower priority tests should cover less important features.

Functionality test tools

Functional tests can become very large in size, which can make them difficult to maintain and time-consuming to execute manually. Therefore, it is a good practice to automate the test cases. Automating tests can reduce the amount of time between a problem being introduced and that problem being addressed. (1) There are several tools available that automate functional testing. Here is a list of some of the available tools:

Quick Test Professional

The Quick Test Professional tool enables the automation of functional tests through a user interface. It detects objects in the user interface and performs desired operations, simulating mouse clicks or keyboard events. The tool can also be used to automate functional tests on graphical and non-graphical user interfaces. It allows collaboration through the storage of similar object definitions in a single repository manager, which can be shared among testers. The same repository can maintain automated and manual test cases.

JFunc

JFunc is an open-source functional testing tool that helps automate functional test cases. JFunc is an extension of the JUnit test framework and serves to ease the creation of functional test suites. JFunc can handle multiple test failures by allowing a functional test to continue after the first failure. Its verbose assertions give more detail about test failures and error messages, and new arguments can be passed into a test each time it runs.

Watir

Watir is a web application testing tool for the Ruby language. Watir tests web applications in web browsers on different platforms. It is distinguished from other web application functional testing tools in that it executes tests at the web browser layer, driving a web browser and interacting with the objects on a web page. This is more accurate than manual functional testing. Here is a sample test case that validates the search operation:

#------------------------------------------------------------------------------------------------------------------#
# Purpose: To Validate the search capability of the browser.
#------------------------------------------------------------------------------------------------------------------#
#------------------------------------------------------------------------------------------------------------------#
# Setup:
#------------------------------------------------------------------------------------------------------------------#
  require "watir"
  test_site = "http://www.google.com"
  browser = Watir::Browser.new
  browser.goto test_site
  browser.text_field(:name, "q").set "pickaxe"
#------------------------------------------------------------------------------------------------------------------#
# Execution:
#------------------------------------------------------------------------------------------------------------------#
  browser.button(:name, "btnG").click # "btnG" is the name of the Search button
#-------------------------------------------------------------#
# Expected Behavior: “A Google page with results should be shown. 'Programming Ruby' should be high on the list."
#------------------------------------------------------------------------------------------------------------------#
  if browser.text.include? "Programming Ruby"  
    puts "  Test Passed. Found the test string: 'Programming Ruby'. Actual Results match Expected Results."
  else
    puts "  Test Failed! Could not find: 'Programming Ruby'." 
  end
#------------------------------------------------------------------------------------------------------------------#
#Cleanup: 
#------------------------------------------------------------------------------------------------------------------#
  browser.close

Additional Functional Testing Tools

  • Arbiter (24)
    • Document-based acceptance testing.
    • Driven by requirements documents in Microsoft Word or RTF file formats.
    • Requirements are used to establish a glossary and test suite.
  • Blerby Test Runner (25)
    • Ajax test runner for php software.
    • Allows for instant feedback while performing on-the-fly code refactoring.
    • Tracks test dependencies.
    • Automatically re-runs affected tests when corresponding code changes.
  • Concordion (26)
    • Open source framework for testing Java software.
    • Allows for plain English descriptions of requirements to be converted into automated tests.
    • Specifications are linked to the software system itself and prevents tests from becoming outdated.
    • Notifications inform when a change in system behavior causes associated test to fail.
  • Eclipse Jubula (27)
    • Provides automated UI functional testing for HTML and Java applications.
    • Aimed at creating tests from the user perspective.
    • Limited coding efforts required.
  • Robotium (28)
    • Test framework to write black-box and white-box tests for Android applications.
    • Requires test case suite to be installed on the same device or simulator as the application.
    • Access the application to execute tests scenarios in a real environment.

(23)

Integration Test

The purpose of integration testing is to test the combination of individual components working together. This type of testing can expose faults that result from the interaction between the integrated components. (13)

Various Approaches to Integration Testing

  • Big Bang - All or most of the individual components are combined and tested at one time. This approach is typically utilized when a test team receives an entire software bundle.
  • Top Down - Top level components are tested initially. Lower level components are tested subsequently in a step by step method. This approach is used whenever development is taking the same type of approach. In this approach, test stubs are needed to simulate lower level components that are not yet available.
  • Bottom Up - Bottom level components are tested initially. This approach is used whenever development is taking the same type of approach. In this approach, test drivers are needed to simulate higher level components that are not yet available.
  • Sandwich/Hybrid - This approach is a combination of the Top Down and Bottom up approaches.
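A small Ruby sketch of the Top Down idea (all names here are illustrative): the top-level OrderService is tested before the real lower-level inventory component exists, so a stub stands in for it with canned answers.

```ruby
# Top-level component under test.
class OrderService
  def initialize(inventory)
    @inventory = inventory
  end

  def place_order(item)
    @inventory.in_stock?(item) ? :accepted : :rejected
  end
end

# Stub simulating a lower-level component that is not yet available.
class InventoryStub
  def in_stock?(_item)
    true               # canned answer standing in for the real component
  end
end

service = OrderService.new(InventoryStub.new)
raise "stubbed order failed" unless service.place_order("book") == :accepted
```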

Advantages to Each Approach

  • Big Bang
    • Convenient for smaller systems.
    • Quick.
    • Cheap.
    • No stubs or stand-in objects are needed.
  • Top Down
    • Potential for early identification of major flaws near the top of the product.
    • Critical components are tested first.
    • Easier to isolate root cause of interface errors due to the incremental approach.
  • Bottom Up
    • Potential for early identification of major flaws near the bottom of the product.
    • Easier to create test conditions.
    • No need to wait for all modules to be developed.
    • Each component and unit gets tested for correctness before being integrated.
    • Typically results in a more robust system.
  • Sandwich/Hybrid
    • Useful for larger projects with several subprojects.

(15)

Best Practices

  • Create only the integration tests needed. These tests offer great value at the cost of a great amount of work to properly set up and time to execute. Consider only testing the default scenarios. There should be enough testing to validate that critical or high-severity defects no longer exist.
  • Do not depend on specific data to be available to the test. Always have any necessary data created prior to the execution of a test. Anyone with proper access could delete or modify test data and break a test, which is another reason it should not be assumed to be available.
  • Use multiple asserts for each test. Due to the time consumption of integration tests, it is recommended that tests be consolidated. In this type of testing, it is considered better to ignore following a one assert per test rule.
  • When seeking to validate the same functionality and there is an option between creating a unit test or an integration test, always choose the unit test. Unit tests will run faster and be easier to set up.

(14)
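The data-creation practice above can be sketched as follows (the in-memory CustomerRepository is an illustrative stand-in for a real database): the test inserts the records it needs instead of assuming they already exist, and consolidates several asserts into one test.

```ruby
# Illustrative in-memory stand-in for a database-backed repository.
class CustomerRepository
  def initialize; @customers = []; end
  def add(name); @customers << name; end
  def find_by_name(name); @customers.select { |c| c == name }; end
end

repo = CustomerRepository.new
repo.add("Bob")     # the test creates its own data up front
repo.add("Alice")

results = repo.find_by_name("Bob")
raise unless results.size == 1        # multiple asserts in one integration
raise unless results.first == "Bob"   # test are acceptable here
```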

Available Integration Testing Tools

  • eggPlant (30)
    • Focused on testing the user experience. (29)
    • Black-box test automation tool.
    • Low overhead.
    • Non-invasive. Not required to run on the same system under test.
    • Image capture and search techniques prevent most UI changes from affecting existing tests.
  • Ruby Capybara (31)
    • Designed for Ruby on Rails applications.
    • Tests rack-based web applications.
    • Simulates how a user interacts with a website.
  • Selenium (33)
    • Provides automated integration testing for Java web applications. (32)
    • Provides automation-aided exploratory testing.
    • Distributes testing across many environments.
    • Supports testing on many types of web browsers.

Performance Test

The purpose of performance testing is to determine speed and effectiveness. Various quantitative and qualitative attributes may be used. Examples of quantitative attributes include response time and number of MIPS (millions of instructions per second). Examples of qualitative attributes include scalability, interoperability, and reliability. (10)

Performance testing can be used to determine the speed of a specific aspect of a system with a specific workload. This allows for the identification of poor performance areas and for the establishment of acceptable thresholds required to maintain acceptable response time.

There are several types of performance testing. Understanding the various types helps to minimize cost, reduce risk, and to know when it is appropriate to apply which type of test on a project.

Key Types of Performance Testing & Their Advantages

  • Performance test - Determines speed, stability, and/or scalability.
    • Focuses on determining user satisfaction with regards to performance.
    • Identifies differences between the expectations and reality of existing performance.
    • Supports optimization and capacity planning.
  • Load test - Verifies application behavior under both normal and peak load conditions.
    • The hardware environment is evaluated for adequacy.
    • Detects concurrency issues.
    • Detects functional errors that occur under load.
    • Supports determination of maximum simultaneous users prior to performance being compromised.
    • Supports determination of maximum load before limits of resource utilization are exceeded.
  • Stress test - Determines behavior when conditions exceed normal or peak load conditions.
    • Identifies if over-stressing the system can corrupt data.
    • Supports establishment of application-monitoring triggers that can warn of impending failures.
    • Determines side effects of failures related to hardware or applications.
    • Identifies the kinds of failures to plan for.
  • Capacity test - Determines the amount of users and/or transactions that can be supported while continuing to meet performance goals.
    • Provides capacity data that can be used to validate or enhance models.
    • Determines current usage and capacity of the system.
    • Provides data on capacity and usage trends of the system.

(11)

Best Practices

  • Test the code with the same granularity as used for unit tests.
  • Do not perform lots of assertions on the test results.
  • Test enough to measure statistically significant performance differences.
  • Ideal performance tests should run relatively fast.
  • Test setup should be performed independent of the actual test method.

(12)
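A minimal Ruby sketch of these practices (the work method is an illustrative stand-in for the code under test): setup and warm-up happen outside the measured section, enough iterations are timed to be statistically meaningful, and only one assertion is made on the result.

```ruby
def work
  (1..1_000).reduce(:+)    # illustrative stand-in for the code under test
end

1_000.times { work }       # warm-up, performed outside the measured section

iterations = 10_000
start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
iterations.times { work }
elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start

avg_ms = (elapsed / iterations) * 1000.0
puts format("average: %.4f ms per call", avg_ms)
raise "threshold exceeded" unless avg_ms < 50  # deliberately generous bound
```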

Available Performance Testing Tools

  • AgileLoad
    • Provides load testing for mobile and web applications.
    • Designed to fit into the Agile Development methodology.
    • Windows-based.
    • Tests applications designed for use on cloud or internal networks.
  • Forecast
    • Tests IT systems for performance, reliability, and scalability.
    • Realistically simulates multiple thousands of unique users simultaneously accessing functionalities.
    • Avoids the overhead and expense associated with hardware costs.
  • HP LoadRunner
    • Detects bottlenecks.
    • Emulates production workloads.
    • Diagnoses root cause of performance issues.
    • Improves performance prior to application deployment.
  • IBM Rational Performance Tester
    • Identifies both the presence and cause of performance bottlenecks.
    • Provides problem identification and problem diagnosis.
    • Root Cause Analysis features allow for identifying the source code causing performance issues.
    • Real-time reports are viewable via web browser.
  • RTI
    • Measures application performance based on response times, identifying transactions with poor performance.
    • Dynamically collects performance data and diagnoses problems throughout system aspects.
    • Quantifies and validates application architectures.
  • SilkPerformer
    • Provides performance and load testing for software applications.
    • Automates software load and stress testing.
    • Enables identification of bottlenecks.
    • Diagnostic tools may be applied to resolve performance issues.

(34)

References

[1] http://dmcnulla.wordpress.com/2012/01/28/good-practices-for-automating-functional-tests/

[2] http://www.javaworld.com/javaworld/jw-03-2009/jw-03-good-unit-tests-1.html

[3] http://en.wikipedia.org/wiki/Unit_testing

[4] http://wiki.developerforce.com/index.php/How_to_Write_Good_Unit_Tests

[5] http://msdn.microsoft.com/en-us/magazine/cc163665.aspx

[6] http://www.tejasconsulting.com/open-testware/feature/unit-test-tool-survey.html

[7] http://www.softwaresummit.com/2004/speakers/SteltingTestingJ2EE.pdf

[8] http://users.csc.calpoly.edu/~cstaley/General/TestingHowTo.htm

[9] http://www.aptest.com/resources.html

[10] http://performance-testing.org/performance-testing-definitions

[11] http://msdn.microsoft.com/en-us/library/bb924357.aspx

[12] http://www.mantidproject.org/Writing_Performance_Tests#Best_Practice_Advice

[13] http://softwaretestingfundamentals.com/integration-testing/

[14] http://msdn.microsoft.com/en-us/library/vstudio/hh323698(v=vs.100).aspx#erg

[15] http://www.itdivisioninc.com/IntegrationTesting.aspx

[16] http://wiki.expertiza.ncsu.edu/index.php/CSC/ECE_517_Fall_2010/ch1_2e_RI

[17] http://blog.stevensanderson.com/2009/08/24/writing-great-unit-tests-best-and-worst-practises/

[18] http://ubiquity.acm.org/article.cfm?id=358976

[19] http://grandtestauto.org/

[20] http://www.javapractices.com/topic/TopicAction.do?Id=33

[21] http://nunit.org/

[22] http://www.infosys.com/engineering-services/white-papers/Documents/functional-testing.pdf

[23] http://www.opensourcetesting.org/functional.php

[24] http://arbiter.sourceforge.net/

[25] https://github.com/tmpvar/Blerby-Test-Runner

[26] http://www.concordion.org/

[27] http://www.eclipse.org/jubula/

[28] http://code.google.com/p/robotium/

[29] http://www.testplant.com/blog/2011/06/09/integration-testing/

[30] http://www.testplant.com/products/eggplant/

[31] https://www.ruby-toolbox.com/projects/capybara

[32] http://www.developer.com/java/web/article.php/3872691/Selenium-Automated-Integration-Testing-for-Java-Web-Apps.htm

[33] http://seleniumhq.org/

[34] http://www.toolsjournal.com/tools-world/item/156-top-performance-testing-tools