CSC/ECE 517 Spring 2023 - G2334 Develop and refactor test suite for GraphQL Query

Team

Mentor

  • Jialin Cui

Team Members

  • Aman Waoo (awaoo)
  • Girish Wangikar (gwangik)
  • Pranavi Sharma Sanganabhatla (psangan)

Problem Statement

The current Python project makes API calls to the GitHub GraphQL API using the requests package. We must rewrite the current test cases such that mocking libraries are used in place of actual API calls, and we must also add new test cases to boost the test coverage.

The need for API mocking:

API mocking empowers developers to exercise complete control over the behavior of the mocked API, encompassing responses, errors, and delays. This level of control facilitates precise testing scenarios, such as evaluating error handling, edge cases, or desired responses from the API. Furthermore, it enables the replication of specific scenarios during testing, simplifying the identification and resolution of issues.
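
To illustrate this kind of control, the sketch below (our own illustrative example, not project code) uses pytest-mock to simulate a network timeout instead of a real request; the function fetch_user_data and its query are hypothetical.

  import pytest
  import requests

  # Hypothetical function under test: it would normally hit the GitHub GraphQL API.
  def fetch_user_data():
      response = requests.post('https://api.github.com/graphql',
                               json={'query': '{ viewer { login } }'})
      response.raise_for_status()
      return response.json()

  def test_fetch_user_data_timeout(mocker):
      # Replace requests.post with a mock that raises a timeout error
      mocker.patch('requests.post', side_effect=requests.exceptions.Timeout)
      with pytest.raises(requests.exceptions.Timeout):
          fetch_user_data()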

Project Description

Feature explanation - GitHub GraphQL API queries

GraphQL is a modern and flexible API query language that provides a more efficient and developer-friendly way to request and manipulate data from APIs compared to traditional REST APIs. This project makes use of the requests library to send requests to the GitHub GraphQL API to access user public data. We have used the pytest testing framework and the pytest-mock plugin to create tests for methods that submit requests to an API using the requests library.
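
For context, a minimal sketch of how such a request is typically sent with the requests library is shown below; the query text and the way the token is read are illustrative, not the exact code used in the project.

  import os
  import requests

  GITHUB_GRAPHQL_URL = 'https://api.github.com/graphql'

  def run_query(query, variables=None):
      # GitHub's GraphQL API expects a POST with a JSON body and a bearer token
      headers = {'Authorization': f'bearer {os.environ["GITHUB_TOKEN"]}'}
      response = requests.post(GITHUB_GRAPHQL_URL,
                               json={'query': query, 'variables': variables or {}},
                               headers=headers)
      response.raise_for_status()
      return response.json()

  # Example: fetch the login of the authenticated user
  result = run_query('query { viewer { login } }')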

Tasks and goals to be accomplished:

We are tasked with increasing the test coverage by introducing new test cases, as well as restructuring the current test cases to use mocking libraries rather than actual API calls. The main goal is to refactor the existing code and place it in the corresponding files of a newly cloned repository.

  • Test functions should have names that are both descriptive and explicit so that it is evident what part of the system they are testing (a brief naming sketch follows this list).
  • Several tests in the current codebase make actual API calls instead of using mocks; refactor these tests to use mock calls.
  • Update the requirements.txt file to reflect the newly installed packages (pytest-cov and pytest-mock).
  • Update the env file with the GitHub token.
  • Get the code fully tested.
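
As a brief illustration of the first point (the names shown here are illustrative, not the actual project names), a vague test name can be replaced with one that states the behaviour under test:

  # Before: the name gives no hint of what is being verified
  def test_1(mocker):
      ...

  # After: the name states the function and the behaviour under test
  def test_user_contribution_history_returns_requested_contribution_type(mocker):
      ...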

Files Involved

  • test_suit.py
  • requirements.txt
  • env file

UML Diagram

This image illustrates the concept of mocking GitHub's GraphQL API using a GraphQL API mocking tool.

  [Image: FinalProjectUmlG2334.png]

Work Strategy

To achieve these objectives, we carried out the following steps:

1. Use the command below to install the packages pytest-cov and pytest-mock:

Code Snippet :

   pip install pytest-cov pytest-mock

We then updated the requirements.txt file to record the newly required modules for the project:

   pip install pytest-cov pytest-mock
   pip freeze > requirements.txt

This will set up the packages your project needs for code coverage and mocking.
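
After running pip freeze, requirements.txt contains pinned entries similar to the following (the version numbers shown here are only illustrative):

   pytest==7.2.2
   pytest-cov==4.0.0
   pytest-mock==3.10.0
   requests==2.28.2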

2. All test cases that exercise a function making a genuine API call (function B in the example below) have been refactored so that mocking is used in place of the real call. A mock version of function B is created with the mocker.patch function offered by pytest-mock, and the mock's return value is set to the expected API response. Please refer to the code snippet below for a better understanding.

Code Snippet :

  import requests

  def function_b():
      # Makes a real HTTP call; this is what the tests must avoid
      response = requests.get('https://api.example.com/data')
      return response.json()

  def function_a():
      data = function_b()
      # do something with the data
      return data

  def test_function_a(mocker):
      # Patch function_b in the module where function_a looks it up (here __main__)
      mock_function_b = mocker.patch('__main__.function_b')
      mock_function_b.return_value = {'foo': 'bar'}
      result = function_a()
      assert result == {'foo': 'bar'}
      mock_function_b.assert_called_once()

We have implemented API mocking in a large number of methods and functions. Because it would be impractical to list every method we changed, here is a representative sample of tests that previously made direct API calls and were adjusted to use API mocking:

Code snippet :

  from my_package import gist_issue_project_pr

  def test_end_cursor_gists(mocker):
      contribution_type = 'gists'
      init_query = {'user': {contribution_type: {'pageInfo': {'endCursor': 'mock_end_cursor'}}}}
      # Replace the real GraphQL call with a canned response
      mocker.patch('my_package.gist_issue_project_pr.get_data', return_value=init_query)
      dic = gist_issue_project_pr.user_contribution_type_history_page('JialinC', contribution_type, 'mock_end_cursor')
      assert list(dic['user'].keys())[1] == contribution_type

  def test_end_cursor_issues(mocker):
      contribution_type = 'issues'
      init_query = {'user': {contribution_type: {'pageInfo': {'endCursor': 'mock_end_cursor'}}}}
      # Replace the real GraphQL call with a canned response
      mocker.patch('my_package.gist_issue_project_pr.get_data', return_value=init_query)
      dic = gist_issue_project_pr.user_contribution_type_history_page('JialinC', contribution_type, 'mock_end_cursor')
      assert list(dic['user'].keys())[1] == contribution_type

In the above code,

test_end_cursor_gists: Tests the user_contribution_type_history_page function with the contribution type set to 'gists'. It uses the mocker fixture to patch my_package.gist_issue_project_pr.get_data so that it returns a fixed query result (init_query), then calls user_contribution_type_history_page with the mocked data and asserts that the contribution type in the returned dictionary (dic) matches the expected value.

test_end_cursor_issues: Identical in structure, but exercises user_contribution_type_history_page with the contribution type set to 'issues', again patching get_data to return init_query and asserting on the contribution type in the returned dictionary.
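
Because the two tests differ only in the contribution type, they could also be collapsed into a single parametrized test. The sketch below is one possible refactoring (the test name is our own), not necessarily the form used in the project:

  import pytest
  from my_package import gist_issue_project_pr

  @pytest.mark.parametrize('contribution_type', ['gists', 'issues'])
  def test_end_cursor_by_contribution_type(mocker, contribution_type):
      init_query = {'user': {contribution_type: {'pageInfo': {'endCursor': 'mock_end_cursor'}}}}
      mocker.patch('my_package.gist_issue_project_pr.get_data', return_value=init_query)
      dic = gist_issue_project_pr.user_contribution_type_history_page('JialinC', contribution_type, 'mock_end_cursor')
      assert list(dic['user'].keys())[1] == contribution_type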

3. Changes in the env file:

Updating the env file is essential for providing the required environment variables and configurations for successful test execution. This includes sensitive information like API keys, access tokens, or other values that need to be set as environment variables in the env file. Additionally, the env file may contain other necessary configuration settings, such as base URLs or timeouts, specific to the testing environment. By updating the env file, you ensure that the tests can access the necessary configurations during runtime, resulting in accurate test results.

In our project, we have configured the GitHub access token. Including the GitHub token in the env file involves adding a configuration entry that specifies the token value as the corresponding environment variable. This entry typically follows a key-value format, where the key represents the name of the environment variable and the value represents the GitHub token.
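
As an illustration (the actual variable name and loading mechanism in the project may differ), the entry could look like the line below, and the tests could read it as shown, assuming the python-dotenv package is used to load the env file:

   GITHUB_TOKEN=<your-personal-access-token>

  import os
  from dotenv import load_dotenv

  load_dotenv('env')                       # read variables from the project's env file
  token = os.environ.get('GITHUB_TOKEN')   # used to authorize GraphQL API requests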

Test Plan

The pytest-cov package has been used to generate a coverage report and to pinpoint sections of the code that are not covered by the current test cases. We then introduced more test cases to cover these areas and improve the overall coverage; an example coverage command is shown after the list below.

  • Generating a coverage report: The pytest-cov package has been used to generate a coverage report, which provides information about the percentage of code that is covered by the current test cases. This report can help identify sections of the code that are not being exercised during testing, indicating potential gaps in test coverage.
  • Identifying uncovered code sections: The coverage report generated by pytest-cov has been used to pinpoint sections of the code that are not being covered by the existing test cases. These uncovered code sections are areas of the codebase that are not executed during the current test suite, and may represent potential vulnerabilities or untested functionality.
  • Introducing more test cases: In order to improve the overall test coverage and include these uncovered code sections, additional test cases have been introduced. These new test cases are designed to specifically target the areas of the code that were identified as lacking coverage in the coverage report. By adding more test cases, the goal is to thoroughly exercise the codebase and ensure that all sections of the code are tested, thereby improving the reliability and quality of the testing process.
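
The coverage report mentioned above can be produced with a command along these lines; the module name my_package is taken from the test snippets above and may differ in the actual repository:

   pytest --cov=my_package --cov-report=term-missing --cov-report=html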

Test Execution

We divided the work among the teammates and started working on different parts of the test coverage. Additional test coverage is yet to be done, so the corresponding code snippets and results will be added as future work.

Conclusion

  • Refactoring of code: The code in the test_suit.py file has been refactored, which means it has been modified to improve its structure, readability, or performance. This could involve changes such as reorganizing code blocks, simplifying complex logic, removing redundant or obsolete code, or improving variable names.
  • Mocking API calls: In order to obtain the desired output from methods that were originally making API calls, the real API method calls have been replaced with mocked versions. Mocking is a technique used in testing to replace actual dependencies (such as external APIs) with simulated versions that allow controlled and predictable behavior during testing. This allows the tests to be executed in isolation without relying on external services, improving test reliability and performance.
  • Modification of test method names: The names of the test methods in the test_suit.py file have been modified to reflect the functionality being tested by those methods. This could involve updating the method names to accurately describe the purpose or behavior being tested, making it easier to understand the purpose of each test case. Descriptive test method names can also serve as documentation, providing insights into the expected behavior of the code being tested.

Overall, the test_suit.py file has been updated to improve the structure and reliability of the tests by mocking API calls and by using descriptive test method names that clearly indicate the functionality being tested.