CSC/ECE 517 Spring 2023 - G2334 Develop and refactor test suite for GraphQL Query

Team

Mentor

  • Jialin Cui

Team Members

  • Aman Waoo (awaoo)
  • Girish Wangikar (gwangik)
  • Pranavi Sharma Sanganabhatla (psangan)

Problem Statement

The current Python project makes API calls to the GitHub GraphQL API using the requests package. We must rewrite the current test cases such that mocking libraries are used in place of actual API calls, and we must also add new test cases to boost the test coverage.

Need for API Mocking :

API mocking empowers developers to exercise complete control over the behavior of the mocked API, encompassing responses, errors, and delays. This level of control facilitates precise testing scenarios, such as evaluating error handling, edge cases, or desired responses from the API. Furthermore, it enables the replication of specific scenarios during testing, simplifying the identification and resolution of issues.

Project Description

Feature explanation - GitHub GraphQL API queries

GraphQL is a modern and flexible API query language that provides a more efficient and developer-friendly way to request and manipulate data from APIs compared to traditional REST APIs. This project makes use of the requests library to send requests to the GitHub GraphQL API to access user public data. We have used the pytest testing framework and the pytest-mock plugin to create tests for methods that submit requests to an API using the requests library.
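For context, a GraphQL call to GitHub's API is an HTTP POST whose JSON body carries the query document. A minimal sketch of how such a request can be assembled (the helper name build_graphql_request is illustrative and not part of the project's code):

```python
GITHUB_GRAPHQL_URL = "https://api.github.com/graphql"

def build_graphql_request(query, token, variables=None):
    """Assemble the URL, JSON body, and headers for a GraphQL POST.

    GitHub's GraphQL (v4) API accepts a POST whose JSON body holds the
    query under the "query" key; the helper itself is a sketch.
    """
    body = {"query": query}
    if variables:
        body["variables"] = variables
    headers = {"Authorization": f"bearer {token}"}
    return GITHUB_GRAPHQL_URL, body, headers
```

The returned tuple can then be passed to requests.post; keeping the assembly separate from the network call also makes the function easy to unit-test without any real API traffic.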

Tasks and Goals to be accomplished :

We are tasked with increasing test coverage by introducing new test cases, and with restructuring the current test cases to use mocking libraries rather than actual API calls. The main goal is to apply this refactoring within the existing files of a newly cloned repository.

  • Test functions should have names that are both descriptive and explicit so that it is evident what part of the system they are testing.
  • Several tests in the current codebase make actual API calls rather than mock calls. Refactor these tests to use mocks.
  • Change the requirements.txt file to reflect the newly installed packages (pytest-cov and pytest-mock).
  • Update the env file with the GitHub token.
  • Get the code fully tested.

Files Involved

  • test_suit.py
  • requirements.txt
  • env file

UML Diagram

This image illustrates the concept of mocking GitHub's GraphQL API using a GraphQL API mocking tool.

Work Strategy

To achieve the objectives, we conducted the following steps :

1. Use the command below to install the packages pytest-cov and pytest-mock:

Code Snippet :

   pip install pytest-cov pytest-mock

Then update the requirements.txt file so it records the newly installed modules :

   pip freeze > requirements.txt

This sets up the packages the project needs for code coverage and mocking.

2. All test cases that rely on a function making a genuine API call (function B in the example) have been refactored so that mocking is used in place of the real call. A mock version of function B is created using the mocker.patch function provided by pytest-mock, and the mock's return value is set to the expected API response. Please refer to the code snippet below for a better understanding of this statement.

Code Snippet :

   import requests

   def function_b():
       # Makes a genuine API call.
       response = requests.get('https://api.example.com/data')
       return response.json()

   def function_a():
       data = function_b()
       # do something with the data
       return data

   def test_function_a(mocker):
       # Patch function_b where it is looked up so the real API is never hit;
       # replace '__main__' with the actual module path in a real test module.
       mock_function_b = mocker.patch('__main__.function_b')
       mock_function_b.return_value = {'foo': 'bar'}
       result = function_a()
       assert result == {'foo': 'bar'}
       mock_function_b.assert_called_once()

We have implemented API mocking in a large number of methods and functions; listing every refactored method here is impractical. In each case the pattern above was applied: the function that performs the direct API call is patched, its return value is set to a canned response, and the test asserts on that response.

3. Changes in the env file :

Updating the env file is essential for providing the required environment variables and configurations for successful test execution. This includes sensitive information like API keys, access tokens, or other values that need to be set as environment variables in the env file. Additionally, the env file may contain other necessary configuration settings, such as base URLs or timeouts, specific to the testing environment. By updating the env file, you ensure that the tests can access the necessary configurations during runtime, resulting in accurate test results.

In our project, we have configured the GitHub access token. Including the GitHub token in the env file involves adding a configuration entry that specifies the token value as the corresponding environment variable. This entry typically follows a key-value format, where the key represents the name of the environment variable and the value represents the GitHub token.
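For example, the env file would contain a line such as GITHUB_TOKEN=&lt;your-token&gt; (the key name GITHUB_TOKEN is an assumption; match it to the key the project actually uses). A small sketch of how code can read that variable and build an authorization header:

```python
import os

def github_auth_header(env=os.environ):
    """Read the GitHub token from the environment and build an auth header.

    The variable name GITHUB_TOKEN is an assumption; align it with the
    key actually stored in the project's env file.
    """
    token = env.get("GITHUB_TOKEN")
    if token is None:
        raise RuntimeError("GITHUB_TOKEN is not set; add it to the env file")
    return {"Authorization": f"bearer {token}"}
```

Accepting the environment mapping as a parameter keeps the helper easy to test: a plain dict can stand in for os.environ without touching the real environment.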

Test Plan

The pytest-cov package has been used to generate a coverage report and to pinpoint sections of the code that are not covered by the current test cases. We then introduced additional test cases to cover those areas and improve the overall test coverage.

  • Generating a coverage report: The pytest-cov package has been used to generate a coverage report, which provides information about the percentage of code that is covered by the current test cases. This report can help identify sections of the code that are not being exercised during testing, indicating potential gaps in test coverage.
  • Identifying uncovered code sections: The coverage report generated by pytest-cov has been used to pinpoint sections of the code that are not being covered by the existing test cases. These uncovered code sections are areas of the codebase that are not executed during the current test suite, and may represent potential vulnerabilities or untested functionality.
  • Introducing more test cases: In order to improve the overall test coverage and include these uncovered code sections, additional test cases have been introduced. These new test cases are designed to specifically target the areas of the code that were identified as lacking coverage in the coverage report. By adding more test cases, the goal is to thoroughly exercise the codebase and ensure that all sections of the code are tested, thereby improving the reliability and quality of the testing process.

Test Execution

We divided the work among the teammates and started working on different parts of the project.

Running the test suite with pytest-cov executes each test case and produces a report on the project's overall coverage. This report helped us identify parts of the code that the current test cases don't cover, and we developed new test cases to cover these regions once they were identified. To imitate the intended behavior of external API calls and to account for both positive and negative eventualities, we used mocking.
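The invocation we used followed this general shape (a sketch of the command line, not a verbatim transcript; --cov=. assumes the code under test sits in the repository root):

```shell
# Run the suite with pytest-cov; term-missing lists uncovered line numbers.
pytest --cov=. --cov-report=term-missing test_suit.py
```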

Conclusion

  • Refactoring of code: The code in the test_suit.py file has been refactored, which means it has been modified to improve its structure, readability, or performance. This could involve changes such as reorganizing code blocks, simplifying complex logic, removing redundant or obsolete code, or improving variable names.
  • Mocking API calls: In order to obtain the desired output from methods that were originally making API calls, the real API method calls have been replaced with mocked versions. Mocking is a technique used in testing to replace actual dependencies (such as external APIs) with simulated versions that allow controlled and predictable behavior during testing. This allows the tests to be executed in isolation without relying on external services, improving test reliability and performance.
  • Modification of test method names: The names of the test methods in the test_suit.py file have been modified to reflect the functionality being tested by those methods. This could involve updating the method names to accurately describe the purpose or behavior being tested, making it easier to understand the purpose of each test case. Descriptive test method names can also serve as documentation, providing insights into the expected behavior of the code being tested.

Overall, these changes indicate that the test_suit.py file has been updated to improve the structure and reliability of the tests, by mocking API calls and using descriptive test method names to clearly indicate the functionality being tested.