CSC/ECE 517 Spring 2024 - E2405 Refactor review mapping helper.rb
Revision as of 17:40, 27 March 2024

This wiki page describes changes made under the E2405 OODD assignment for Spring 2024, CSC/ECE 517.

Expertiza Background

Expertiza is an open-source online application developed with the Ruby on Rails framework. It is maintained by the staff and students at NC State University. The application gives instructors comprehensive control over managing tasks and assignments in their courses. Expertiza offers a wide range of powerful features, including peer review management, group formation, and topic management, and is a versatile platform that can handle many types of assignments. For more detailed information about the features Expertiza offers, readers can refer to the Expertiza wiki.

About Helper

The review_mapping_helper module in Ruby on Rails provides a set of helper methods to facilitate the peer review process in an assignment. It includes functionality for generating review reports, managing submission statuses, calculating review scores, and visualizing review metrics, but requires refactoring to improve code maintainability and readability.

Functionality of review_mapping_helper

The review_mapping_helper.rb file is a Ruby module that contains various helper methods related to the review mapping process in a peer review system. Here's a brief overview of the main functionalities provided by this module:

1. Generating review reports with data such as reviewer IDs, reviewed object IDs, and response types.
2. Determining team colors based on review status and assignment submission status.
3. Checking submission states within each round and assigning team colors accordingly.
4. Retrieving and displaying submitted hyperlinks and files for review.
5. Calculating awarded review scores and determining minimum, maximum, and average grade values.
6. Sorting reviewers based on the average volume of reviews in each round.
7. Generating and displaying charts for review volume metrics and tagging time intervals.
8. Retrieving and setting up review and feedback responses for feedback reports.
9. Determining CSS styles for the calibration report based on the difference between student and instructor answers.
10. Defining and calculating review strategies for students and teams, including reviews per team, reviews needed, and reviews per student.

Problem Statement

Because of its length, complexity, and lack of comments, review_mapping_helper is challenging for developers to understand and use effectively. To remedy this, the helper should undergo a thorough restructuring that divides complex procedures into smaller, easier-to-manage parts: complex logic should be broken into modular units, each responsible for a particular task or subtask that contributes to the overall functionality. To improve maintainability and lower the chance of errors, the refactoring should also target code duplication, consolidating redundant segments into reusable functions or utility methods. By methodically reorganizing the codebase and improving its documentation, developers can better grasp its inner workings, making maintenance, debugging, and future additions easier.

Tasks

- Refactor the file to reduce the overall lines of code to be within the allowed limit of 250 lines.
- Refactor the `display_volume_metric_chart` method to reduce its lines of code to be within the allowed limit of 25 lines.
- Refactor the `display_tagging_interval_chart` method to reduce its lines of code to be within the allowed limit of 25 lines.
- Refactor the `check_submission_state` method to reduce its Cognitive Complexity to be within the allowed limit of 5 and reduce the number of arguments to be within the allowed limit of 4.
- Refactor the `sort_reviewer_by_review_volume_desc` method to reduce its Cognitive Complexity to be within the allowed limit of 5.
- Refactor the `review_metrics` method to reduce its Cognitive Complexity to be within the allowed limit of 5.
- Refactor the `get_team_color` method to reduce its Cognitive Complexity to be within the allowed limit of 5.
- Reduce the number of arguments for the `check_submission_state` method from 5 to 4.
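Reducing a method from five arguments to four is commonly done with a parameter object. The sketch below illustrates that technique only; the `SubmissionContext` fields, the assumed original signature, and the color logic are hypothetical placeholders, not the project's actual code.

```ruby
# Hypothetical parameter object grouping three related values that would
# otherwise be separate arguments.
SubmissionContext = Struct.new(:assignment_id, :team_id, :round)

# Assumed original shape: check_submission_state(assignment_id, team_id,
# round, link, color) -- five arguments. Grouping the first three into a
# context object brings the count down to four.
def check_submission_state(context, link, submission_state, color)
  # Placeholder logic; the real helper's checks are more involved and
  # would consult fields such as context.round.
  if link.nil? || !link.start_with?('https://wiki')
    color << 'brown'   # no usable wiki link submitted
  elsif submission_state == :submitted_within_round
    color << 'green'   # submitted in time for this round
  else
    color << 'red'     # link exists but submission is late
  end
end
```

The color names here are illustrative; the point is only that bundling related parameters satisfies the argument-count limit without losing information.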

Phase 1

For Phase 1 of the project, we focused on the following issues:
- Refactored the `display_volume_metric_chart` method.
- Refactored the `review_metrics` method.
- Added comments to the code.
- Fixed Code Climate issues.

Phase 2

For Phase 2 of the project, we plan to work on the following issues:
- Refactor the `display_tagging_interval_chart` method.
- Refactor the `check_submission_state` method.
- Refactor the `sort_reviewer_by_review_volume_desc` method.
- Refactor the `get_team_color` method.
- Reduce the number of arguments for the `check_submission_state` method.
- Increase the test coverage.
- Increase code readability.

Implementation

Phase 1

Refactor the `display_volume_metric_chart` method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/8980cb03531543563a9ac023c6a5a801e3ecc709)

The `display_volume_metric_chart` method has been modified to address the issue of reducing its length to 25 lines. The changes made are as follows:

- The method now focuses solely on preparing the data and options for the chart and rendering the chart using the `bar_chart` method.
- The logic for preparing the chart data has been extracted into a separate method called `prepare_chart_data`. This method takes the labels, reviewer_data, and all_reviewers_data as parameters and returns a hash containing the formatted data required for the chart.
- Similarly, the logic for preparing the chart options has been moved to a separate method called `prepare_chart_options`. This method returns a hash containing the configuration options for the chart, such as legend settings, width, height, and axis properties.
- By extracting the data preparation and options configuration into separate methods, the `display_volume_metric_chart` method becomes more concise and focused on its main responsibility of displaying the chart.
- The `prepare_chart_data` method constructs the hash structure required by the charting library, including the labels and datasets. It sets the label, background color, border width, data, and yAxisID for each dataset.
- The `prepare_chart_options` method defines the options for the chart, such as the legend position and style, chart dimensions, and axis configurations. It specifies the stacking, thickness, and other properties for the y-axes and x-axis.
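The extraction described above can be sketched as follows. The method names come from the commit, but the hash keys and option values shown here are illustrative assumptions, and the final `bar_chart` render call is stubbed out so the sketch stays self-contained.

```ruby
# Builds the charting-library data hash from precomputed labels and datasets.
def prepare_chart_data(labels, reviewer_data, all_reviewers_data)
  {
    labels: labels,
    datasets: [
      { label: 'vol.', backgroundColor: 'rgba(255,99,132,0.8)',
        borderWidth: 1, data: reviewer_data, yAxisID: 'bar-y-axis1' },
      { label: 'avg. vol.', backgroundColor: 'rgba(255,206,86,0.8)',
        borderWidth: 1, data: all_reviewers_data, yAxisID: 'bar-y-axis2' }
    ]
  }
end

# Builds the chart configuration options (legend, size, axes). The concrete
# values below are placeholders, not the commit's exact settings.
def prepare_chart_options
  {
    legend: { position: 'top', labels: { usePointStyle: true } },
    width: '200', height: '125',
    scales: {
      yAxes: [{ stacked: true, id: 'bar-y-axis1', barThickness: 10 },
              { display: false, stacked: true, id: 'bar-y-axis2', barThickness: 15 }],
      xAxes: [{ stacked: false, ticks: { beginAtZero: true } }]
    }
  }
end

# The public method now only assembles the pieces and renders the chart.
def display_volume_metric_chart(labels, reviewer_data, all_reviewers_data)
  data    = prepare_chart_data(labels, reviewer_data, all_reviewers_data)
  options = prepare_chart_options
  # In the real helper this would render via the charting view helper:
  # bar_chart(data, options)
  [data, options]
end
```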

Refactor the `review_metrics` method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/6afd790fdc8765a2e60641b7edff388692fd069f)

Several changes have been made to reduce the cyclomatic complexity of the review_metrics method from 6 to 5. Let's go through the changes:

- The array of metrics `%i[max min avg]` is assigned to a variable `metrics` at the beginning of the method for better readability and reuse.
- The code for initializing the metrics with the default value '-----' has been extracted into a separate private method called `initialize_metrics`. This method iterates over the metrics array and sets the corresponding instance variables using string interpolation.
- The condition for checking if the team data is available has been moved to a separate private method called `team_data_available?`. This method takes team_id, round, and metrics as parameters and returns a boolean value indicating whether the data is available for all metrics in the given round for the specified team.
- The code for updating the metrics based on the available data has been moved to a separate private method called `update_metrics`. This method iterates over the metrics array and updates the corresponding instance variables with the metric values fetched from the @avg_and_ranges hash.
- The logic for fetching the metric value has been extracted into a separate private method called `fetch_metric_value`. This method takes team_id, round, and metric as parameters and returns the formatted metric value. If the value is nil, it returns '-----'; otherwise, it rounds the value to 0 decimal places and appends a '%' symbol.
- The `review_metrics` method now has a more linear flow. It initializes the metrics using `initialize_metrics`, checks if the team data is available using `team_data_available?`, and updates the metrics using `update_metrics` if the data is available. If the data is not available, the method returns early without updating the metrics.
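The decomposition described above can be sketched as a small self-contained class. The method names follow the description, while the class wrapper and the shape of the `@avg_and_ranges` hash are assumptions made so the sketch runs on its own.

```ruby
class ReviewMetrics
  attr_reader :max, :min, :avg

  # @avg_and_ranges is assumed to map team_id => round => metric => value.
  def initialize(avg_and_ranges)
    @avg_and_ranges = avg_and_ranges
  end

  # Linear flow: set defaults, bail out early if data is missing, else update.
  def review_metrics(round, team_id)
    metrics = %i[max min avg]
    initialize_metrics(metrics)
    return unless team_data_available?(team_id, round, metrics)

    update_metrics(team_id, round, metrics)
  end

  private

  # Default every metric to the '-----' placeholder.
  def initialize_metrics(metrics)
    metrics.each { |metric| instance_variable_set("@#{metric}", '-----') }
  end

  # True only when every metric is present for this team and round.
  def team_data_available?(team_id, round, metrics)
    team = @avg_and_ranges[team_id]
    !team.nil? && !team[round].nil? && metrics.all? { |m| team[round].key?(m) }
  end

  def update_metrics(team_id, round, metrics)
    metrics.each do |metric|
      instance_variable_set("@#{metric}", fetch_metric_value(team_id, round, metric))
    end
  end

  # Round to a whole percentage, or fall back to the placeholder.
  def fetch_metric_value(team_id, round, metric)
    value = @avg_and_ranges[team_id][round][metric]
    value.nil? ? '-----' : "#{value.round(0)}%"
  end
end
```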

Test Plan

We intend to expand Project 4's test coverage by introducing more tests.

Throughout our Test-Driven Development (TDD) efforts, our group systematically used the test skeletons written by Vyshnavi Adusumelli and Mustafa Olmez. These skeletons serve as blueprints, outlining the fundamental tests required for our unit. By incorporating them into our workflow, we can test the helper module thoroughly, examining its behavior in detail and verifying that each action works as intended. Filling in these test skeletons improves the overall quality and reliability of the codebase while providing a solid foundation for our unit tests.

Test Cases

1. Test `create_report_table_header` method:

  - Test case: when headers are provided
  - Test case: when no headers are provided

2. Test `review_report_data` method:

  - Test case: when valid parameters are provided
    - Test scenario 1: when there are response maps for the given reviewed object, reviewer, and type
    - Test scenario 2: when there are no response maps for the given reviewed object, reviewer, and type
  - Test case: when invalid parameters are provided
    - Test scenario 1: when the reviewed object does not exist
    - Test scenario 2: when the reviewer does not exist
    - Test scenario 3: when the type is invalid

3. Test `calculate_response_counts` method:

  - Test case: when given an empty response_maps array
  - Test case: when given response_maps with no responses for any round
  - Test case: when given response_maps with responses for all rounds
  - Test case: when given response_maps with responses for some rounds

4. Test `get_team_color` method:

  - Test case: when a response exists for the response map
  - Test case: when no response exists for the response map
  - Test case: calls obtain_team_color method with the correct arguments

5. Test `obtain_team_color` method:

  - Test case: when there is only one round of review
  - Test case: when there are multiple rounds of review
  - Test case: when there are no rounds of review

6. Test `check_submission_state` method:

  - Test case: when the submission is within the round
  - Test case: when the submission is not within the round
  - Test case: when the link is not provided or does not start with 'https://wiki'
  - Test case: when the link is provided and starts with 'https://wiki'
  - Test case: when the link has been updated since the last round
  - Test case: when the link has not been updated since the last round

7. Test `response_for_each_round?` method:

  - Test case: when all rounds have a response
  - Test case: when not all rounds have a response
  - Test case: when no rounds have a response

8. Test `submitted_within_round?` method:

  - Test case: when the round is greater than 1
    - returns true if a submission exists within the previous round's deadline and the current round's deadline
    - returns false if no submission exists within the previous round's deadline and the current round's deadline
  - Test case: when the round is 1
    - returns true if a submission exists within the assignment creation date and the current round's deadline
    - returns false if no submission exists within the assignment creation date and the current round's deadline

9. Test `submitted_hyperlink` method:

  - Test case: when there is a submission due date and a submission record
    - Test scenario 1: returns the content of the last submitted hyperlink within the assignment creation and due dates
    - Test scenario 2: returns nil if there is no submission due date or submission record for the reviewee team
    - Test scenario 3: returns nil if there is a submission due date but no submission record for the reviewee team
    - Test scenario 4: returns nil if there is a submission record but no submission due date for the reviewee team

10. Test `last_modified_date_for_link` method:

  - Test case: when given a valid link
  - Test case: when given an invalid link

11. Test `link_updated_since_last?` method:

  - Test case: when the link was updated before the submission due date for the current round and after the submission due date for the previous round
  - Test case: when the link was updated after the submission due date for the current round
  - Test case: when the link was updated before the submission due date for the previous round

12. Test `reviewed_link_name_for_team` method:

  - Test case: when max_team_size is 1
  - Test case: when max_team_size is not 1

13. Test `awarded_review_score` method:

  - Test case: when reviewer_id and team_id are valid
    - Test scenarios 1-4: sets the correct awarded review score for each round
  - Test case: when team_id is nil or -1.0
    - Test scenarios 1-2: does not update any instance variables

14. Test `review_metrics` method:

  - Test case: when given a round and team_id
    - sets max, min, and avg to '-----' as default values
    - sets max, min, and avg to the corresponding values from avg_and_ranges if present

15. Test `sort_reviewers_by_average_volume` method:

  - Test case: sorts the reviewers by the average volume of reviews in each round in descending order

16. Test `sort_reviewer_by_review_volume_desc` method:

  - Test case: when there are reviewers and review volumes available
    - calculates the volume of review comments for each reviewer
    - sets the overall average volume of review comments for each reviewer
    - sets the average volume of review comments per round for each reviewer
    - sorts the reviewers by their review volume in descending order
    - gets the number of review rounds for the assignment
    - sets the average volume of review comments per round for all reviewers

17. Test `initialize_chart_elements` method:

  - Test case: when reviewer has data for all rounds
  - Test case: when reviewer has no data for any round
  - Test case: when reviewer has data for some rounds

18. Test `display_volume_metric_chart` method:

  - Test case: when given a reviewer
    - initializes chart elements
    - creates the data for the volume metric chart
    - creates the options for the volume metric chart
    - displays the volume metric chart

19. Test `display_tagging_interval_chart` method:

  - Test case: when intervals are all below the threshold
  - Test case: when intervals contain some values above the threshold
    - filters out the values above the threshold
    - calculates the mean time spent for the intervals
    - displays a chart with the filtered intervals and mean time spent
  - Test case: when intervals are empty
    - does not calculate the mean time spent
    - displays a chart with no intervals
20. Test `calculate_key_chart_information` method:

  - Test case: when given intervals are all above the threshold
  - Test case: when given intervals contain values below the threshold

21. Test `calculate_mean` method:

  - Test case: when given an array of intervals and interval precision
    - Test scenarios 1-3: returns the mean of the intervals rounded to the specified precision

22. Test `calculate_variance` method:

  - Test case: when given an array of intervals and interval precision
    - Test scenarios 1-4: returns the variance of the intervals rounded to the specified precision

23. Test `calculate_standard_deviation` method:

  - Test case: when given an array of intervals and interval precision
    - Test scenarios 1-3: returns the standard deviation rounded to the specified precision

24. Test `list_review_submissions` method:

  - Test case: when review submissions are available
  - Test case: when review submissions are not available

25. Test `review_submissions_available?` method:

  - Test case: when both team and participant are present
  - Test case: when team is nil and participant is present
  - Test case: when team is present and participant is nil
  - Test case: when both team and participant are nil

26. Test `list_hyperlink_submission` method:

  - Test case: when the response map ID and question ID are valid
    - returns the HTML code for a hyperlink if the answer has a comment starting with 'http'
    - returns an empty string if the answer does not have a comment starting with 'http'
  - Test case: when the response map ID or question ID is invalid
    - returns an empty string if the response map ID is invalid
    - returns an empty string if the question ID is invalid

27. Test `calculate_review_and_feedback_responses` method:

  - Test case: when author is a member of a team
  - Test case: when author is not a member of a team

28. Test `feedback_response_map_record` method:

  - Test case: when author is provided
    - retrieves response records for each round
    - calculates feedback response map records for each round

29. Test `get_certain_review_and_feedback_response_map` method:

  - Test case: when author has feedback response maps
  - Test case: when author does not have feedback response maps
  - Test case: when review response maps exist for the given reviewed object and reviewee
  - Test case: when review response maps do not exist for the given reviewed object and reviewee
  - Test case: when review responses exist for the given review response map ids
  - Test case: when review responses do not exist for the given review response map ids
  - Test case: when review responses exist
  - Test case: when review responses do not exist

30. Test `css_class_for_calibration_report` method:

  - Test case: when the difference is 0
  - Test case: when the difference is 1
  - Test case: when the difference is 2
  - Test case: when the difference is 3
  - Test case: when the difference is greater than 3

31. Test `initialize` method:

  - Test case: when initializing a new instance of the class
    - sets the participants attribute to the provided value
    - sets the teams attribute to the provided value
    - sets the review_num attribute to the provided value

32. Test `reviews_per_team` method:

  - Test case: when there are 10 participants, 5 teams, and each participant needs to review 2 times
  - Test case: when there are 20 participants, 4 teams, and each participant needs to review 3 times
  - Test case: when there are 8 participants, 2 teams, and each participant needs to review 4 times

33. Test `reviews_needed` method:

  - Test case: when there are no participants
  - Test case: when there is one participant and review number is 3
  - Test case: when there are three participants and review number is 2
  - Test case: when there are five participants and review number is 4

34. Test `reviews_per_student` method:

  - Test case: when there are no reviews
  - Test case: when there is only one review
  - Test case: when there are multiple reviews
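As one concrete example of fleshing out these skeletons, the review-strategy expectations in items 31-34 can be exercised with a plain-Ruby model. The class and method names follow the list above, but the arithmetic (participants times reviews-per-student, divided across teams) is an assumption made for illustration, not the verified Expertiza implementation.

```ruby
# Minimal stand-in for a review strategy: holds participants, teams, and
# the number of reviews each student must write.
class StudentReviewStrategy
  attr_reader :participants, :teams, :review_num

  def initialize(participants, teams, review_num)
    @participants = participants
    @teams = teams
    @review_num = review_num
  end

  # Assumed formula: total reviews written, spread evenly across teams.
  def reviews_per_team
    (participants.size * review_num * 1.0 / teams.size).round
  end

  # Assumed formula: total number of reviews the class must produce.
  def reviews_needed
    participants.size * review_num
  end

  def reviews_per_student
    review_num
  end
end

# Scenario from item 32: 10 participants, 5 teams, 2 reviews each.
strategy = StudentReviewStrategy.new(Array.new(10), Array.new(5), 2)
strategy.reviews_per_team # => 4
```

With this shape, each bullet in items 32-34 becomes a one-line assertion against a differently-parameterized strategy.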

Design Pattern

During our code refactoring we applied several well-known techniques to improve readability and maintainability. The most commonly applied one was "Extract Method": we identified lengthy, intricate methods and extracted segments of their functionality into separate methods. Isolating each specific task within a dedicated, well-named method makes the code easier to comprehend.

Additionally, we addressed excessive conditional statements by refactoring conditionals. Instead of cluttering a method with nested conditionals, we encapsulated the logic inside those conditionals in distinct methods. This streamlined the code's flow and made the purpose and execution of each segment clearer.
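A generic before/after illustration of Extract Method (not code taken from the helper itself): a method that both filters and formats scores is split so each extracted method has a single job.

```ruby
# Before: one method mixes two concerns, filtering and formatting.
def report_before(scores)
  valid = scores.reject { |s| s.nil? || s.negative? }
  valid.map { |s| "#{s.round(1)}%" }.join(', ')
end

# After: each concern lives in its own named method.
def valid_scores(scores)
  scores.reject { |s| s.nil? || s.negative? }
end

def format_scores(scores)
  scores.map { |s| "#{s.round(1)}%" }.join(', ')
end

# The public method now just composes the two extracted steps.
def report_after(scores)
  format_scores(valid_scores(scores))
end
```

Both versions behave identically; the refactored one is easier to read, test, and reuse piecewise.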

Relevant Links

Team

Mentor

  • Ananya Mantravadi (amantra)

Team Members

  • Sahil Changlani (schangl)
  • Rushil Vegada (rvegada)
  • Romil Shah (rmshah3)