CSC/ECE 517 Spring 2024 - E2405 Refactor review_mapping_helper.rb



== Test Plan ==
=== Phase 2 ===
We intend to expand Project 4's test coverage by introducing more tests.

Throughout our Test-Driven Development (TDD) efforts, our group systematically used the test skeletons constructed by Vyshnavi Adusumelli and Mustafa Olmez. These carefully crafted skeletons serve as blueprints that outline the fundamental tests required for each unit. By incorporating them into our workflow, we can examine the behavior of the helper in detail and make sure its actions are thoroughly reviewed and verified. These test skeletons improve the general quality and reliability of our codebase while providing a solid foundation for our unit tests.


1. Test <code>create_report_table_header</code> method:<br/>
  - Test case: when headers are provided<br/>
  - Test case: when no headers are provided<br/>

2. Test <code>review_report_data</code> method:<br/>
  - Test case: when valid parameters are provided<br/>
    - Test scenario 1: when there are response maps for the given reviewed object, reviewer, and type<br/>
    - Test scenario 2: when there are no response maps for the given reviewed object, reviewer, and type<br/>
  - Test case: when invalid parameters are provided<br/>
    - Test scenario 1: when the reviewed object does not exist<br/>
    - Test scenario 2: when the reviewer does not exist<br/>
    - Test scenario 3: when the type is invalid<br/>

3. Test <code>calculate_response_counts</code> method:<br/>
  - Test case: when given an empty response_maps array<br/>
  - Test case: when given response_maps with no responses for any round<br/>
  - Test case: when given response_maps with responses for all rounds<br/>
  - Test case: when given response_maps with responses for some rounds<br/>

4. Test <code>get_team_color</code> method:<br/>
  - Test case: when a response exists for the response map<br/>
  - Test case: when no response exists for the response map<br/>
  - Test case: calls obtain_team_color method with the correct arguments<br/>

5. Test <code>obtain_team_color</code> method:<br/>
  - Test case: when there is only one round of review<br/>
  - Test case: when there are multiple rounds of review<br/>
  - Test case: when there are no rounds of review<br/>

6. Test <code>check_submission_state</code> method:<br/>
  - Test case: when the submission is within the round<br/>
  - Test case: when the submission is not within the round<br/>
    - Test case: when the link is not provided or does not start with 'https://wiki'<br/>
    - Test case: when the link is provided and starts with 'https://wiki'<br/>
      - Test case: when the link has been updated since the last round<br/>
      - Test case: when the link has not been updated since the last round<br/>

7. Test <code>response_for_each_round?</code> method:<br/>
  - Test case: when all rounds have a response<br/>
  - Test case: when not all rounds have a response<br/>
  - Test case: when no rounds have a response<br/>


8. Test <code>submitted_within_round?</code> method:<br/>
  - Test case: when the round is greater than 1<br/>
    - Test case: returns true if a submission exists within the previous round's deadline and the current round's deadline<br/>
    - Test case: returns false if no submission exists within the previous round's deadline and the current round's deadline<br/>
  - Test case: when the round is 1<br/>
    - Test case: returns true if a submission exists within the assignment creation date and the current round's deadline<br/>
    - Test case: returns false if no submission exists within the assignment creation date and the current round's deadline<br/>


9. Test <code>submitted_hyperlink</code> method:<br/>
  - Test case: when there is a submission due date and a submission record<br/>
    - Test scenario 1: returns the content of the last submitted hyperlink within the assignment creation and due dates<br/>
    - Test scenario 2: returns nil if there is no submission due date or submission record for the reviewee team<br/>
    - Test scenario 3: returns nil if there is a submission due date but no submission record for the reviewee team<br/>
    - Test scenario 4: returns nil if there is a submission record but no submission due date for the reviewee team<br/>

10. Test <code>last_modified_date_for_link</code> method:<br/>
  - Test case: when given a valid link<br/>
  - Test case: when given an invalid link<br/>

11. Test <code>link_updated_since_last?</code> method:<br/>
  - Test case: when the link was updated before the submission due date for the current round and after the submission due date for the previous round<br/>
  - Test case: when the link was updated after the submission due date for the current round<br/>
  - Test case: when the link was updated before the submission due date for the previous round<br/>

12. Test <code>reviewed_link_name_for_team</code> method:<br/>
  - Test case: when max_team_size is 1<br/>
  - Test case: when max_team_size is not 1<br/>

13. Test <code>awarded_review_score</code> method:<br/>
  - Test case: when reviewer_id and team_id are valid<br/>
    - Test scenario 1: sets the correct awarded review score for each round<br/>
    - Test scenario 2: sets the correct awarded review score for each round<br/>
    - Test scenario 3: sets the correct awarded review score for each round<br/>
    - Test scenario 4: sets the correct awarded review score for each round<br/>
  - Test case: when team_id is nil or -1.0<br/>
    - Test scenario 1: does not update any instance variables<br/>
    - Test scenario 2: does not update any instance variables<br/>

14. Test <code>review_metrics</code> method:<br/>
  - Test case: when given a round and team_id<br/>
    - Test case: sets max, min, and avg to '-----' as default values<br/>
    - Test case: sets max, min, and avg to the corresponding values from avg_and_ranges if present<br/>

==== Refactor the <code>check_submission_state</code> method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/024533131a4ba11719449419a768122bbddbe497) ====

In the refactored code, changes were made to limit the number of arguments passed to the check_submission_state method to a maximum of 4. This was achieved by grouping related arguments into a single hash argument.

Here are the specific changes:

- In the <code>obtain_team_color</code> method, instead of passing round and color as separate arguments to <code>check_submission_state</code>, a hash round_info is created with the keys :round and :color. This hash is then passed as a single argument to <code>check_submission_state</code>.<br/>
- In the <code>check_submission_state</code> method, the signature has been updated to accept the round_info hash as a single argument instead of separate round and color arguments.<br/>
- Inside the <code>check_submission_state</code> method, the round and color variables are extracted from the round_info hash using the <code>values_at</code> method. This allows the method to access the values of :round and :color from the hash.<br/>
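The round_info hash refactor of check_submission_state can be illustrated with a short, self-contained sketch. The method bodies below are simplified stand-ins (not the actual Expertiza code); only the argument-grouping pattern and the use of <code>values_at</code> follow the description.

```ruby
# Hypothetical, simplified stand-ins illustrating the hash-argument refactor.
# Before: check_submission_state received round and color separately;
# after: related values travel together in a single round_info hash.

def check_submission_state(team_id, round_info)
  # Extract :round and :color from the hash with values_at,
  # keeping the method signature within the 4-argument limit.
  round, color = round_info.values_at(:round, :color)
  "team #{team_id}: round #{round} -> #{color}"
end

def obtain_team_color(team_id, round, color)
  # Build the hash once and pass it as a single argument.
  round_info = { round: round, color: color }
  check_submission_state(team_id, round_info)
end

obtain_team_color(7, 2, 'green')
```

Grouping the values also means a future field (e.g. a deadline) can be added to the hash without changing the method signature again.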


15. Test <code>sort_reviewers_by_average_volume</code> method:<br/>
  - Test case: sorts the reviewers by the average volume of reviews in each round in descending order<br/>

16. Test <code>sort_reviewer_by_review_volume_desc</code> method:<br/>
  - Test case: when there are reviewers and review volumes available<br/>
    - Test case: calculates the volume of review comments for each reviewer<br/>
    - Test case: sets the overall average volume of review comments for each reviewer<br/>
    - Test case: sets the average volume of review comments per round for each reviewer<br/>
    - Test case: sorts the reviewers by their review volume in descending order<br/>
    - Test case: gets the number of review rounds for the assignment<br/>
    - Test case: sets the average volume of review comments per round for all reviewers<br/>

17. Test <code>initialize_chart_elements</code> method:<br/>
  - Test case: when reviewer has data for all rounds<br/>
  - Test case: when reviewer has no data for any round<br/>
  - Test case: when reviewer has data for some rounds<br/>

18. Test <code>display_volume_metric_chart</code> method:<br/>
  - Test case: when given a reviewer<br/>
    - Test case: initializes chart elements<br/>
    - Test case: creates the data for the volume metric chart<br/>
    - Test case: creates the options for the volume metric chart<br/>
    - Test case: displays the volume metric chart<br/>

19. Test <code>display_tagging_interval_chart</code> method:<br/>
  - Test case: when intervals are all below the threshold<br/>
  - Test case: when intervals contain some values above the threshold<br/>
    - Test case: filters out the values above the threshold<br/>
    - Test case: calculates the mean time spent for the intervals<br/>
    - Test case: displays a chart with the filtered intervals and mean time spent<br/>
  - Test case: when intervals are empty<br/>
    - Test case: does not calculate the mean time spent<br/>
    - Test case: displays a chart with no intervals<br/>

20. Test <code>calculate_key_chart_information</code> method:<br/>
  - Test case: when given intervals are all above the threshold<br/>
  - Test case: when given intervals contain values below the threshold<br/>

21. Test <code>calculate_mean</code> method:<br/>
  - Test case: when given an array of intervals and interval precision<br/>
    - Test scenario 1: returns the mean of the intervals rounded to the specified precision<br/>
    - Test scenario 2: returns the mean of the intervals rounded to the specified precision<br/>
    - Test scenario 3: returns the mean of the intervals rounded to the specified precision<br/>

22. Test <code>calculate_variance</code> method:<br/>
  - Test case: when given an array of intervals and interval precision<br/>
    - Test scenario 1: returns the variance of the intervals rounded to the specified precision<br/>
    - Test scenario 2: returns the variance of the intervals rounded to the specified precision<br/>
    - Test scenario 3: returns the variance of the intervals rounded to the specified precision<br/>
    - Test scenario 4: returns the variance of the intervals rounded to the specified precision<br/>

==== Refactor the <code>display_tagging_interval_chart</code> method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/bc83786f46e1f2e734563e9db82d820d6e086b12) ====

The refactored code makes several changes to simplify and improve the readability of the display_tagging_interval_chart method. Here are the main changes:

- The unless block that checks whether intervals is empty has been removed. Instead, the method now uses an early return to exit if intervals is empty. This simplifies the code and reduces nesting.<br/>
- The interval mean calculation has been simplified using the sum method instead of reduce(:+). This makes the code more concise and easier to understand.<br/>
- The labels array in the data hash is now created using (1..intervals.length).to_a instead of [*1..intervals.length]. This achieves the same result with more idiomatic Ruby syntax.<br/>
- The conditional block that checks whether intervals is empty within the datasets array has been removed. The refactored code always includes the "Mean time spent" dataset, even if intervals is empty. This simplifies the code and avoids conditional logic within the datasets array.<br/>
- The options hash has been reformatted to improve readability. The key-value pairs are now aligned vertically, making the structure of the options easier to understand.<br/>
- The line_chart method call has been moved outside the method definition to make it clear that it is a separate step in the chart-generation process.<br/>
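A condensed, self-contained sketch of the refactored interval-chart flow is shown below. The threshold value and the <code>line_chart</code> stand-in are assumptions for illustration; only the early return, the <code>sum</code>-based mean, and the range-based labels follow the description.

```ruby
# Simplified stand-in for the refactored display_tagging_interval_chart.
# THRESHOLD and line_chart are hypothetical placeholders, not Expertiza code.

THRESHOLD = 30 # hypothetical cutoff, in seconds

def line_chart(data)
  data # stand-in: the real helper renders a chart from this hash
end

def display_tagging_interval_chart(intervals)
  # Early return replaces the original nested `unless intervals.empty?` block.
  return if intervals.empty?

  # Values above the threshold are filtered out, and the mean is computed
  # with `sum` instead of `reduce(:+)`.
  intervals = intervals.select { |v| v < THRESHOLD }
  interval_mean = intervals.sum / intervals.length.to_f

  data = {
    labels:   (1..intervals.length).to_a, # idiomatic alternative to [*1..intervals.length]
    datasets: [
      { label: 'time intervals',  data: intervals },
      # The "Mean time spent" dataset is always included now.
      { label: 'Mean time spent', data: [interval_mean] * intervals.length }
    ]
  }
  line_chart(data) # rendering kept as a distinct final step
end
```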


23. Test <code>calculate_standard_deviation</code> method:<br/>
  - Test case: when given an array of intervals and interval precision<br/>
    - Test scenario 1: returns the standard deviation rounded to the specified precision<br/>
    - Test scenario 2: returns the standard deviation rounded to the specified precision<br/>
    - Test scenario 3: returns the standard deviation rounded to the specified precision<br/>


24. Test <code>list_review_submissions</code> method:<br/>
  - Test case: when review submissions are available<br/>
  - Test case: when review submissions are not available<br/>

25. Test <code>review_submissions_available?</code> method:<br/>
  - Test case: when both team and participant are present<br/>
  - Test case: when team is nil and participant is present<br/>
  - Test case: when team is present and participant is nil<br/>
  - Test case: when both team and participant are nil<br/>


26. Test <code>list_hyperlink_submission</code> method:<br/>
  - Test case: when the response map ID and question ID are valid<br/>
    - Test case: returns the HTML code for a hyperlink if the answer has a comment starting with 'http'<br/>
    - Test case: returns an empty string if the answer does not have a comment starting with 'http'<br/>
  - Test case: when the response map ID or question ID is invalid<br/>
    - Test case: returns an empty string if the response map ID is invalid<br/>
    - Test case: returns an empty string if the question ID is invalid<br/>

27. Test <code>calculate_review_and_feedback_responses</code> method:<br/>
  - Test case: when author is a member of a team<br/>
  - Test case: when author is not a member of a team<br/>

28. Test <code>feedback_response_map_record</code> method:<br/>
  - Test case: when author is provided<br/>
    - Test case: retrieves response records for each round<br/>
    - Test case: calculates feedback response map records for each round<br/>

29. Test <code>get_certain_review_and_feedback_response_map</code> method:<br/>
  - Test case: when author has feedback response maps<br/>
  - Test case: when author does not have feedback response maps<br/>
  - Test case: when review responses exist<br/>
  - Test case: when review responses do not exist<br/>

30. Test <code>css_class_for_calibration_report</code> method:<br/>
  - Test case: when the difference is 0<br/>
  - Test case: when the difference is 1<br/>
  - Test case: when the difference is 2<br/>
  - Test case: when the difference is 3<br/>
  - Test case: when the difference is greater than 3<br/>

31. Test <code>initialize</code> method:<br/>
  - Test case: when initializing a new instance of the class<br/>
    - Test case: sets the participants attribute to the provided value<br/>
    - Test case: sets the teams attribute to the provided value<br/>
    - Test case: sets the review_num attribute to the provided value<br/>

32. Test <code>reviews_per_team</code> method:<br/>
  - Test case: when there are 10 participants, 5 teams, and each participant needs to review 2 times<br/>
  - Test case: when there are 20 participants, 4 teams, and each participant needs to review 3 times<br/>
  - Test case: when there are 8 participants, 2 teams, and each participant needs to review 4 times<br/>

33. Test <code>reviews_needed</code> method:<br/>
  - Test case: when there are no participants<br/>
  - Test case: when there is one participant and review number is 3<br/>
  - Test case: when there are three participants and review number is 2<br/>
  - Test case: when there are five participants and review number is 4<br/>

34. Test <code>reviews_per_student</code> method:<br/>
  - Test case: when there are no reviews<br/>
  - Test case: when there is only one review<br/>
  - Test case: when there are multiple reviews<br/>
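The review-strategy calculations exercised by tests 31-34 can be sketched in a self-contained form. The class shape and formulas below are assumptions derived from the test descriptions (participants, teams, review_num), not the actual Expertiza implementation.

```ruby
# Hypothetical sketch of the review-strategy arithmetic behind tests 31-34.
class ReviewStrategy
  attr_reader :participants, :teams, :review_num

  def initialize(participants, teams, review_num)
    @participants = participants
    @teams       = teams
    @review_num  = review_num
  end

  # Total number of reviews needed across the assignment.
  def reviews_needed
    participants.size * review_num
  end

  # Reviews each team receives, spread evenly over all teams.
  def reviews_per_team
    reviews_needed / teams.size
  end

  # Reviews each individual student performs.
  def reviews_per_student
    review_num
  end
end

strategy = ReviewStrategy.new(Array.new(10), Array.new(5), 2)
strategy.reviews_per_team # 10 participants * 2 reviews / 5 teams => 4
```

For example, with 10 participants, 5 teams, and 2 reviews per participant (test 32's first case), this sketch yields 20 total reviews and 4 reviews per team.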


== Design Pattern ==
During the code refactoring process, various design patterns were leveraged to enhance readability and maintainability. The commonly applied patterns include:

1. Extract Method: Identifying lengthy and intricate methods and extracting segments of functionality into separate methods.<br/>
2. Refactoring Conditionals: Encapsulating the logic within conditional statements into distinct methods to streamline the code's flow.<br/>

These design patterns helped in making the code more comprehensible and easier to maintain.
* '''Github Repository:''' https://github.com/sahilchanglani/expertiza
* '''Pull Request:''' https://github.com/expertiza/expertiza/pull/2764
* '''Youtube Video:''' https://youtu.be/xyase3nuYxc


== Team ==

Latest revision as of 03:48, 24 April 2024

This wiki page describes changes made under the E2405 OODD assignment for Spring 2024, CSC/ECE 517.

== Expertiza Background ==

Expertiza is an open-source online application developed using Ruby on Rails framework. It is maintained by the staff and students at NC State University. This application provides instructors with comprehensive control over managing tasks and assignments in their courses. Expertiza offers a wide range of powerful features, including peer review management, group formation, and subject addition capabilities. It is a versatile platform that can handle various types of assignments. For more detailed information about the extensive features offered by Expertiza, users can refer to the Expertiza wiki.

== About Helper ==

The review_mapping_helper module in Ruby on Rails provides a set of helper methods to facilitate the peer review process in an assignment. It includes functionality for generating review reports, managing submission statuses, calculating review scores, and visualizing review metrics, but requires refactoring to improve code maintainability and readability.

== Functionality of review_mapping_helper ==

The review_mapping_helper.rb file is a Ruby module that contains various helper methods related to the review mapping process in a peer review system. Here's a brief overview of the main functionalities provided by this module:

1. Generating review reports with data such as reviewer IDs, reviewed object IDs, and response types.
2. Determining team colors based on review status and assignment submission status.
3. Checking submission states within each round and assigning team colors accordingly.
4. Retrieving and displaying submitted hyperlinks and files for review.
5. Calculating awarded review scores and determining minimum, maximum, and average grade values.
6. Sorting reviewers based on the average volume of reviews in each round.
7. Generating and displaying charts for review volume metrics and tagging time intervals.
8. Retrieving and setting up review and feedback responses for feedback reports.
9. Determining CSS styles for the calibration report based on the difference between student and instructor answers.
10. Defining and calculating review strategies for students and teams, including reviews per team, reviews needed, and reviews per student.

== Problem Statement ==

The review_mapping_helper is challenging for developers to understand and utilize effectively due to its length, complexity, and lack of comments. The helper should go through a thorough restructuring process to divide complex procedures into smaller, easier-to-manage parts. The refactoring effort should also focus on:

1. Addressing cases of code duplication
2. Combining redundant code segments into reusable functions or utility methods
3. Improving naming conventions for methods and variables
4. Tackling necessary code changes for better readability and maintainability

== Tasks ==

- Refactor the file to reduce the overall lines of code to be within the allowed limit of 250 lines.
- Refactor the `display_volume_metric_chart` method to reduce its lines of code to be within the allowed limit of 25 lines.
- Refactor the `display_tagging_interval_chart` method to reduce its lines of code to be within the allowed limit of 25 lines.
- Refactor the `check_submission_state` method to reduce its Cognitive Complexity to be within the allowed limit of 5 and reduce the number of arguments to be within the allowed limit of 4.
- Refactor the `sort_reviewer_by_review_volume_desc` method to reduce its Cognitive Complexity to be within the allowed limit of 5.
- Refactor the `review_metrics` method to reduce its Cognitive Complexity to be within the allowed limit of 5.
- Refactor the `get_team_color` method to reduce its Cognitive Complexity to be within the allowed limit of 5.
- Reduce the number of arguments for the `check_submission_state` method from 5 to 4.

=== Phase 1 ===

For Phase 1 of the project, we focused on the following issues:
- Refactoring the `display_volume_metric_chart` method.
- Refactoring the `review_metrics` method.
- Commenting the code.
- Fixing Code Climate issues.

=== Phase 2 ===

For Phase 2 of the project, we plan to work on the following issues:
- Refactor the `display_tagging_interval_chart` method.
- Refactor the `check_submission_state` method.
- Refactor the `sort_reviewer_by_review_volume_desc` method.
- Refactor the `get_team_color` method.
- Reduce the number of arguments for the `check_submission_state` method.
- Increase the test coverage.
- Increase code readability.

== Implementation ==

=== Phase 1 ===

==== Refactor the <code>display_volume_metric_chart</code> method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/8980cb03531543563a9ac023c6a5a801e3ecc709) ====

The `display_volume_metric_chart` method has been modified to address the issue of reducing its length to 25 lines. The changes made are as follows:

- The method now focuses solely on preparing the data and options for the chart and rendering the chart using the `bar_chart` method.
- The logic for preparing the chart data has been extracted into a separate method called `prepare_chart_data`. This method takes the labels, reviewer_data, and all_reviewers_data as parameters and returns a hash containing the formatted data required for the chart.
- Similarly, the logic for preparing the chart options has been moved to a separate method called `prepare_chart_options`. This method returns a hash containing the configuration options for the chart, such as legend settings, width, height, and axis properties.
- By extracting the data preparation and options configuration into separate methods, the `display_volume_metric_chart` method becomes more concise and focused on its main responsibility of displaying the chart.
- The `prepare_chart_data` method constructs the hash structure required by the charting library, including the labels and datasets. It sets the label, background color, border width, data, and yAxisID for each dataset.
- The `prepare_chart_options` method defines the options for the chart, such as the legend position and style, chart dimensions, and axis configurations. It specifies the stacking, thickness, and other properties for the y-axes and x-axis.
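A minimal, self-contained sketch of this extraction is shown below. The dataset labels, colors, and option values are illustrative assumptions; only the split into `prepare_chart_data` and `prepare_chart_options` follows the commit description.

```ruby
# Simplified stand-ins for the extracted chart-preparation methods.
# Labels, colors, and dimensions here are hypothetical sample values.

def prepare_chart_data(labels, reviewer_data, all_reviewers_data)
  {
    labels: labels,
    datasets: [
      { label: 'Volume of comments by the reviewer', data: reviewer_data,
        backgroundColor: 'rgba(255,99,132,0.8)', borderWidth: 1, yAxisID: 'bar-y-axis1' },
      { label: 'Avg. volume by all reviewers', data: all_reviewers_data,
        backgroundColor: 'rgba(255,206,86,0.8)', borderWidth: 1, yAxisID: 'bar-y-axis2' }
    ]
  }
end

def prepare_chart_options
  {
    legend: { position: 'top', labels: { usePointStyle: true } },
    width:  '200',
    height: '125',
    scales: { yAxes: [{ stacked: true, barThickness: 10 }], xAxes: [{ stacked: false }] }
  }
end

def display_volume_metric_chart(labels, reviewer_data, all_reviewers_data)
  data    = prepare_chart_data(labels, reviewer_data, all_reviewers_data)
  options = prepare_chart_options
  [data, options] # stand-in for rendering via bar_chart(data, options)
end
```

With the data and option hashes built elsewhere, the remaining method body stays well under the 25-line limit.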

==== Refactor the <code>review_metrics</code> method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/6afd790fdc8765a2e60641b7edff388692fd069f) ====

Several changes have been made to reduce the cyclomatic complexity of the review_metrics method from 6 to 5. Let's go through the changes:

- The array of metrics %i[max min avg] is assigned to a variable metrics at the beginning of the method for better readability and reusability.
- The code for initializing the metrics with the default value '-----' has been extracted into a separate private method called `initialize_metrics`. This method iterates over the metrics array and sets the corresponding instance variables using string interpolation.
- The condition for checking if the team data is available has been moved to a separate private method called `team_data_available?`. This method takes team_id, round, and metrics as parameters and returns a boolean value indicating whether the data is available for all metrics in the given round for the specified team.
- The code for updating the metrics based on the available data has been moved to a separate private method called `update_metrics`. This method iterates over the metrics array and updates the corresponding instance variables with the metric values fetched from the @avg_and_ranges hash.
- The logic for fetching the metric value has been extracted into a separate private method called `fetch_metric_value`. This method takes team_id, round, and metric as parameters and returns the formatted metric value. If the value is nil, it returns '-----'; otherwise, it rounds the value to 0 decimal places and appends a '%' symbol.
- The `review_metrics` method now has a more linear flow. It initializes the metrics using `initialize_metrics`, checks if the team data is available using `team_data_available?`, and updates the metrics using `update_metrics` if the data is available. If the data is not available, the method returns early without updating the metrics.
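The resulting flow can be sketched in a standalone form. Here `@avg_and_ranges` is replaced by an explicit argument with stubbed sample data, and the helper bodies are simplified stand-ins; only the method names and the linear initialize / check / update flow follow the description above.

```ruby
# Standalone sketch of the refactored review_metrics flow.
# The data shape { team_id => { round => { metric => value } } } is assumed.

METRICS = %i[max min avg].freeze

def initialize_metrics(metrics)
  # Default every metric to '-----' via string interpolation.
  metrics.each { |metric| instance_variable_set("@#{metric}", '-----') }
end

def team_data_available?(avg_and_ranges, team_id, round, metrics)
  team = avg_and_ranges[team_id]
  !!(team && team[round] && metrics.all? { |m| team[round].key?(m) })
end

def fetch_metric_value(avg_and_ranges, team_id, round, metric)
  value = avg_and_ranges[team_id][round][metric]
  value.nil? ? '-----' : "#{value.round(0)}%"
end

def update_metrics(avg_and_ranges, team_id, round, metrics)
  metrics.each do |metric|
    instance_variable_set("@#{metric}",
                          fetch_metric_value(avg_and_ranges, team_id, round, metric))
  end
end

def review_metrics(avg_and_ranges, round, team_id)
  initialize_metrics(METRICS)
  return unless team_data_available?(avg_and_ranges, team_id, round, METRICS)

  update_metrics(avg_and_ranges, team_id, round, METRICS)
end

avg_and_ranges = { 42 => { 1 => { max: 95.0, min: 60.0, avg: 80.5 } } }
review_metrics(avg_and_ranges, 1, 42)
```

Each branch of the old conditional now lives in a named helper, which is what brings the complexity of `review_metrics` itself down.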

Phase 2

Refactor the `check_submission_state` method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/c53a1b0db0c959f02d5fdf56d7f8490f73598333)

The main changes made to reduce the cyclomatic complexity of the `check_submission_state` method are:

- The if-else conditional statements in the original code have been replaced with a case statement in the refactored code. The case statement uses the return value of the `submission_status` method to determine the appropriate action.

- The logic for determining the submission status has been extracted into a separate method called `submission_status`. This method encapsulates the logic for checking if a submission was made within the round, if a link was submitted, and if the link format is invalid.

- The code for retrieving the submitted hyperlink has been moved into a separate method called `submission_link`. This method is called when needed instead of being inlined in the check_submission_state method.

- The condition for checking the link format validity has been moved into a separate method called `invalid_link_format?`. This method is called within the `submission_status` method to determine if the link format is invalid.
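
A minimal sketch of the shape this refactoring produces. The status symbols, colors, and constructor arguments below are invented stand-ins; the real method works with Expertiza's submission records and team colors.

```ruby
# Illustrative only: statuses and colors are assumptions standing in for
# the real submission checks.
class SubmissionChecker
  def initialize(in_round:, link:)
    @in_round = in_round
    @link = link
  end

  # A single case statement replaces the original nested if-else chain.
  def check_submission_state
    case submission_status
    when :not_within_round then 'purple'
    when :no_link, :invalid_link then 'green'
    else 'blue'
    end
  end

  private

  # Encapsulates the checks: submitted in round? link present? format valid?
  def submission_status
    return :not_within_round unless @in_round
    return :no_link if submission_link.nil?
    return :invalid_link if invalid_link_format?
    :valid
  end

  # Retrieves the submitted hyperlink (trivial in this sketch).
  def submission_link
    @link
  end

  def invalid_link_format?
    !@link.start_with?('https://')
  end
end
```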

Refactor the `sort_reviewer_by_review_volume_desc` method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/2c853789b1dbf20c97907effbd9ac7593da1c4a7)

In the refactored code, several changes were made to reduce the cognitive complexity of the `sort_reviewer_by_review_volume_desc` method:

The method has been split into smaller, more focused methods:

- `calculate_review_volumes`: Calculates the review volumes for each reviewer.

- `calculate_overall_averages`: Calculates the overall average review volume across all reviewers.

- `calculate_round_averages`: Calculates the average review volume for each round across all reviewers.

- `sort_reviewers_by_overall_average`: Sorts the reviewers in descending order based on their overall average review volume.
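
The decomposition above can be sketched with the four helpers as free-standing methods. Here reviewers are modeled as plain hashes with per-round comment volumes; the real helper derives these figures from review response maps, so the data shape is an assumption.

```ruby
# Compute each reviewer's overall average review volume.
def calculate_review_volumes(reviewers)
  reviewers.each do |reviewer|
    reviewer[:overall_avg] =
      reviewer[:volumes].sum.to_f / reviewer[:volumes].length
  end
end

# Overall average review volume across all reviewers.
def calculate_overall_averages(reviewers)
  reviewers.sum { |r| r[:overall_avg] } / reviewers.length
end

# Average review volume per round across all reviewers.
def calculate_round_averages(reviewers, num_rounds)
  (0...num_rounds).map do |round|
    reviewers.sum { |r| r[:volumes][round] }.to_f / reviewers.length
  end
end

# Descending sort on the precomputed overall average.
def sort_reviewers_by_overall_average(reviewers)
  reviewers.sort_by { |r| -r[:overall_avg] }
end
```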

Refactor the `get_team_color` method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/5f80916bb97cba9e54771c6799d6a71c854c8dda)

Refactor the `check_submission_state` method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/024533131a4ba11719449419a768122bbddbe497)

In the refactored code, the changes were made to limit the number of arguments passed to the `check_submission_state` method to a maximum of 4. This was achieved by grouping related arguments into a single hash argument.

Here are the specific changes:

- In the `obtain_team_color` method, instead of passing `round` and `color` as separate arguments to `check_submission_state`, a hash `round_info` is created with keys :round and :color. This hash is then passed as a single argument to `check_submission_state`.

- In the `check_submission_state` method, the method signature has been updated to accept the `round_info` hash as a single argument instead of separate `round` and `color` arguments.

- Inside the `check_submission_state` method, the `round` and `color` variables are extracted from the `round_info` hash using the `values_at` method. This allows the method to access the values of :round and :color from the hash.
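
A minimal sketch of the hash-grouping change, assuming an invented method body and parameters (the real method does far more than build a string):

```ruby
# The grouped hash replaces two separate positional arguments.
def check_submission_state(team_id, round_info)
  # Extract the grouped values with Hash#values_at.
  round, color = round_info.values_at(:round, :color)
  "team #{team_id}: round #{round}, color #{color}"
end

# The caller (as in obtain_team_color) builds one hash instead of
# passing round and color separately.
round_info = { round: 2, color: 'green' }
check_submission_state(7, round_info)
```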

Refactor the `display_tagging_interval_chart` method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/bc83786f46e1f2e734563e9db82d820d6e086b12)

The refactored code makes several changes to simplify and improve the readability of the `display_tagging_interval_chart` method. Here are the main changes:

- The unless block that checks if intervals is empty has been removed. Instead, the method now uses an early return statement to exit the method if intervals is empty. This simplifies the code and reduces nesting.

- The interval mean calculation has been simplified using the `sum` method instead of `reduce(:+)`. This makes the code more concise and easier to understand.

- The labels array in the data hash is now created using (1..intervals.length).to_a instead of [*1..intervals.length]. This achieves the same result but uses a more idiomatic Ruby syntax.

- The conditional block that checks if intervals is empty within the datasets array has been removed. The refactored code always includes the "Mean time spent" dataset, even if intervals is empty. This simplifies the code and avoids the need for conditional logic within the datasets array.

- The options hash has been reformatted to improve readability. The key-value pairs are now aligned vertically, making it easier to understand the structure of the options.

- The `line_chart` method call has been moved outside the method definition to make it clear that it is a separate step in the chart generation process.
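
The data-building step described above can be sketched as follows. The hash keys and dataset labels are placeholders for the real Chartkick `line_chart` inputs.

```ruby
# Sketch of the simplified flow: early return on empty input, mean via sum.
def tagging_interval_chart_data(intervals)
  return if intervals.empty? # early return replaces the unless wrapper

  mean = intervals.sum / intervals.length.to_f # sum instead of reduce(:+)
  {
    labels: (1..intervals.length).to_a, # instead of [*1..intervals.length]
    datasets: [
      { label: 'Intervals',       data: intervals },
      { label: 'Mean time spent', data: Array.new(intervals.length, mean) }
    ]
  }
end
```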

Test Plan

We intend to expand Project 4's test coverage by introducing more tests.

Throughout our Test-Driven Development (TDD) efforts, our group systematically built on the test skeletons constructed by Vyshnavi Adusumelli and Mustafa Olmez. These skeletons serve as blueprints that outline the fundamental tests required for each unit. By incorporating them into our workflow, we thoroughly test our controller module, investigating its behavior in detail and making sure each action is carefully reviewed and verified. Filling in these test skeletons improves the general quality and reliability of the codebase while providing a solid foundation for our unit tests.

1. Test `review_metrics` method:

   - Test case: when given a round and team_id
   - Test case: sets max, min, and avg to '-----' as default values
   - Test case: sets max, min, and avg to the corresponding values from avg_and_ranges if present

2. Test `check_submission_state` method:

   - Test case: when the submission is within the round
   - Test case: when the submission is not within the round
   - Test case: when the link is not provided or does not start with 'https://wiki'
   - Test case: when the link is provided and starts with 'https://wiki'
   - Test case: when the link has been updated since the last round
   - Test case: when the link has not been updated since the last round

3. Test `display_volume_metric_chart` method:

   - Test case: when given a reviewer
   - Test case: initializes chart elements
   - Test case: creates the data for the volume metric chart
   - Test case: creates the options for the volume metric chart
   - Test case: displays the volume metric chart

4. Test `sort_reviewer_by_review_volume_desc` method:

   - Test case: when there are reviewers and review volumes available
   - Test case: calculates the volume of review comments for each reviewer
   - Test case: sets the overall average volume of review comments for each reviewer
   - Test case: sets the average volume of review comments per round for each reviewer
   - Test case: sorts the reviewers by their review volume in descending order
   - Test case: gets the number of review rounds for the assignment
   - Test case: sets the average volume of review comments per round for all reviewers

5. Test `get_certain_review_and_feedback_response_map` method:

   - Test case: when author has feedback response maps
   - Test case: when author does not have feedback response maps
   - Test case: when review response maps exist for the given reviewed object and reviewee
   - Test case: when review response maps do not exist for the given reviewed object and reviewee
   - Test case: when review responses exist for the given review response map ids
   - Test case: when review responses do not exist for the given review response map ids
   - Test case: when review responses exist
   - Test case: when review responses do not exist

Design Pattern

During the code refactoring process, various design patterns were leveraged to enhance readability and maintainability. The commonly applied patterns include:

1. Extract Method: Identifying lengthy and intricate methods and extracting segments of functionality into separate methods.
2. Refactoring Conditionals: Encapsulating the logic within conditional statements into distinct methods to streamline the code's flow.

These design patterns helped in making the code more comprehensible and easier to maintain.
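
As a toy illustration of the Extract Method pattern (the code below is invented for this page, not taken from the helper): pulling a conditional out into a well-named predicate gives each piece a single responsibility.

```ruby
# After extraction: formatting and validation are separated.
def display_score(score)
  return 'N/A' unless valid_score?(score)
  "#{score.round(0)}%"
end

# Extracted predicate: the conditional logic now reads as a sentence.
def valid_score?(score)
  !score.nil? && score.between?(0, 100)
end
```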

Relevant Links

Team

Mentor

  • Ananya Mantravadi (amantra)

Team Members

  • Sahil Changlani (schangl)
  • Rushil Vegada (rvegada)
  • Romil Shah (rmshah3)