CSC/ECE 517 Spring 2024 - E2405 Refactor review_mapping_helper.rb
Revision as of 19:55, 9 April 2024

This wiki page describes changes made under the E2405 OODD assignment for Spring 2024, CSC/ECE 517.

Expertiza Background

Expertiza is an open-source online application developed using the Ruby on Rails framework. It is maintained by the staff and students at NC State University. This application provides instructors with comprehensive control over managing tasks and assignments in their courses. Expertiza offers a wide range of powerful features, including peer review management, group formation, and subject addition capabilities. It is a versatile platform that can handle various types of assignments. For more detailed information about the extensive features offered by Expertiza, users can refer to the Expertiza wiki.

About Helper

The review_mapping_helper module in Ruby on Rails provides a set of helper methods to facilitate the peer review process in an assignment. It includes functionality for generating review reports, managing submission statuses, calculating review scores, and visualizing review metrics, but requires refactoring to improve code maintainability and readability.

Functionality of review_mapping_helper

The review_mapping_helper.rb file is a Ruby module that contains various helper methods related to the review mapping process in a peer review system. Here's a brief overview of the main functionalities provided by this module:

1. Generating review reports with data such as reviewer IDs, reviewed object IDs, and response types.
2. Determining team colors based on review status and assignment submission status.
3. Checking submission states within each round and assigning team colors accordingly.
4. Retrieving and displaying submitted hyperlinks and files for review.
5. Calculating awarded review scores and determining minimum, maximum, and average grade values.
6. Sorting reviewers based on the average volume of reviews in each round.
7. Generating and displaying charts for review volume metrics and tagging time intervals.
8. Retrieving and setting up review and feedback responses for feedback reports.
9. Determining CSS styles for the calibration report based on the difference between student and instructor answers.
10. Defining and calculating review strategies for students and teams, including reviews per team, reviews needed, and reviews per student.

Problem Statement

The review_mapping_helper is challenging for developers to understand and use effectively due to its length, complexity, and lack of comments. The helper should undergo a thorough restructuring process to divide complex procedures into smaller, easier-to-manage parts. The refactoring effort should also focus on:

1. Addressing cases of code duplication
2. Combining redundant code segments into reusable functions or utility methods
3. Improving naming conventions for methods and variables
4. Tackling necessary code changes for better readability and maintainability

Tasks

- Refactor the file to reduce the overall lines of code to be within the allowed limit of 250 lines.
- Refactor the `display_volume_metric_chart` method to reduce its lines of code to be within the allowed limit of 25 lines.
- Refactor the `display_tagging_interval_chart` method to reduce its lines of code to be within the allowed limit of 25 lines.
- Refactor the `check_submission_state` method to reduce its Cognitive Complexity to be within the allowed limit of 5 and reduce the number of arguments to be within the allowed limit of 4.
- Refactor the `sort_reviewer_by_review_volume_desc` method to reduce its Cognitive Complexity to be within the allowed limit of 5.
- Refactor the `review_metrics` method to reduce its Cognitive Complexity to be within the allowed limit of 5.
- Refactor the `get_team_color` method to reduce its Cognitive Complexity to be within the allowed limit of 5.
- Reduce the number of arguments for the `check_submission_state` method from 5 to 4.

Phase 1

For Phase 1 of the project, we focused on the following issues:
- Refactor the `display_volume_metric_chart` method.
- Refactor the `review_metrics` method.
- Comment the code.
- Fix Code Climate issues.

Phase 2

For Phase 2 of the project, we plan to work on the following issues:
- Refactor the `display_tagging_interval_chart` method.
- Refactor the `check_submission_state` method.
- Refactor the `sort_reviewer_by_review_volume_desc` method.
- Refactor the `get_team_color` method.
- Reduce the number of arguments for the `check_submission_state` method.
- Increase the test coverage.
- Increase code readability.

Implementation

Phase 1

Refactor the `display_volume_metric_chart` method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/8980cb03531543563a9ac023c6a5a801e3ecc709)

The `display_volume_metric_chart` method has been modified to address the issue of reducing its length to 25 lines. The changes made are as follows:

- The method now focuses solely on preparing the data and options for the chart and rendering the chart using the `bar_chart` method.
- The logic for preparing the chart data has been extracted into a separate method called `prepare_chart_data`. This method takes the labels, reviewer_data, and all_reviewers_data as parameters and returns a hash containing the formatted data required for the chart.
- Similarly, the logic for preparing the chart options has been moved to a separate method called `prepare_chart_options`. This method returns a hash containing the configuration options for the chart, such as legend settings, width, height, and axis properties.
- By extracting the data preparation and options configuration into separate methods, the `display_volume_metric_chart` method becomes more concise and focused on its main responsibility of displaying the chart.
- The `prepare_chart_data` method constructs the hash structure required by the charting library, including the labels and datasets. It sets the label, background color, border width, data, and yAxisID for each dataset.
- The `prepare_chart_options` method defines the options for the chart, such as the legend position and style, chart dimensions, and axis configurations. It specifies the stacking, thickness, and other properties for the y-axes and x-axis.
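The extraction described above can be sketched as follows. The hash keys, colors, and option values are illustrative assumptions modeled on the description, not the exact committed code, and `bar_chart` stands in for the charting helper available in the app's views:

```ruby
# Hypothetical sketch of the refactored display_volume_metric_chart.
# Dataset labels, colors, and option values are illustrative assumptions.
def display_volume_metric_chart(labels, reviewer_data, all_reviewers_data)
  data = prepare_chart_data(labels, reviewer_data, all_reviewers_data)
  options = prepare_chart_options
  bar_chart(data, options) # charting helper provided by the app's views
end

# Builds the hash structure required by the charting library.
def prepare_chart_data(labels, reviewer_data, all_reviewers_data)
  {
    labels: labels,
    datasets: [
      { label: 'vol.', backgroundColor: 'rgba(255,99,132,0.8)',
        borderWidth: 1, data: reviewer_data, yAxisID: 'bar-y-axis1' },
      { label: 'avg. vol.', backgroundColor: 'rgba(255,206,86,0.8)',
        borderWidth: 1, data: all_reviewers_data, yAxisID: 'bar-y-axis2' }
    ]
  }
end

# Defines legend, dimensions, and axis configuration for the chart.
def prepare_chart_options
  {
    legend: { position: 'top', labels: { usePointStyle: true } },
    width: '200', height: '125',
    scales: {
      yAxes: [{ stacked: true, barThickness: 10 }],
      xAxes: [{ stacked: false, ticks: { beginAtZero: true } }]
    }
  }
end
```

With the data and options extracted, the top-level method stays well under the 25-line limit and reads as a single statement of intent.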

Refactor the `review_metrics` method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/6afd790fdc8765a2e60641b7edff388692fd069f)

Several changes have been made to reduce the cognitive complexity of the `review_metrics` method from 6 to 5. Let's go through the changes:

- The array of metrics `%i[max min avg]` is assigned to a variable `metrics` at the beginning of the method for better readability and reusability.
- The code for initializing the metrics with the default value '-----' has been extracted into a separate private method called `initialize_metrics`. This method iterates over the metrics array and sets the corresponding instance variables using string interpolation.
- The condition for checking if the team data is available has been moved to a separate private method called `team_data_available?`. This method takes team_id, round, and metrics as parameters and returns a boolean value indicating whether the data is available for all metrics in the given round for the specified team.
- The code for updating the metrics based on the available data has been moved to a separate private method called `update_metrics`. This method iterates over the metrics array and updates the corresponding instance variables with the metric values fetched from the @avg_and_ranges hash.
- The logic for fetching the metric value has been extracted into a separate private method called `fetch_metric_value`. This method takes team_id, round, and metric as parameters and returns the formatted metric value. If the value is nil, it returns '-----'; otherwise, it rounds the value to 0 decimal places and appends a '%' symbol.
- The `review_metrics` method now has a more linear flow. It initializes the metrics using `initialize_metrics`, checks if the team data is available using `team_data_available?`, and updates the metrics using `update_metrics` if the data is available. If the data is not available, the method returns early without updating the metrics.
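Putting these pieces together, the refactored flow might look like the sketch below. The instance-variable naming scheme and the shape of the `@avg_and_ranges` hash are assumptions made for illustration, not the exact committed code:

```ruby
# Hypothetical sketch of the refactored review_metrics flow.
# Assumes @avg_and_ranges is a nested hash: team_id => round => metric => value.
def review_metrics(round, team_id)
  metrics = %i[max min avg]
  initialize_metrics(metrics, round)
  return unless team_data_available?(team_id, round, metrics)

  update_metrics(team_id, round, metrics)
end

private

# Sets each metric's instance variable to the '-----' default.
def initialize_metrics(metrics, round)
  metrics.each { |metric| instance_variable_set("@#{metric}_#{round}", '-----') }
end

# True only if every metric is present for this team and round.
def team_data_available?(team_id, round, metrics)
  @avg_and_ranges&.dig(team_id, round) &&
    metrics.all? { |metric| @avg_and_ranges[team_id][round].key?(metric) }
end

# Overwrites the defaults with formatted values from @avg_and_ranges.
def update_metrics(team_id, round, metrics)
  metrics.each do |metric|
    instance_variable_set("@#{metric}_#{round}",
                          fetch_metric_value(team_id, round, metric))
  end
end

# Rounds the stored value and appends '%', or returns the default when nil.
def fetch_metric_value(team_id, round, metric)
  value = @avg_and_ranges[team_id][round][metric]
  value.nil? ? '-----' : "#{value.round(0)}%"
end
```

The early `return unless team_data_available?` gives the method the linear flow described above: one guard clause instead of nested conditionals.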

Phase 2

Refactor the `check_submission_state` method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/c53a1b0db0c959f02d5fdf56d7f8490f73598333)

The main changes made to reduce the cognitive complexity of the `check_submission_state` method are:

- The if-else conditional statements in the original code have been replaced with a case statement in the refactored code. The case statement uses the return value of the `submission_status` method to determine the appropriate action.

- The logic for determining the submission status has been extracted into a separate method called `submission_status`. This method encapsulates the logic for checking if a submission was made within the round, if a link was submitted, and if the link format is invalid.

- The code for retrieving the submitted hyperlink has been moved into a separate method called `submission_link`. This method is called when needed instead of being inlined in the `check_submission_state` method.

- The condition for checking the link format validity has been moved into a separate method called `invalid_link_format?`. This method is called within the `submission_status` method to determine if the link format is invalid.
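A self-contained sketch of that control flow is below. The status symbols, color values, and simplified parameters are illustrative assumptions (the real method works with Expertiza's assignment and team objects); the 'https://wiki' prefix check mirrors the behavior listed in the test plan on this page:

```ruby
# Hypothetical sketch of the refactored check_submission_state flow.
# Status symbols, colors, and parameters are simplified assumptions.
def check_submission_state(submitted_in_round, link, color)
  case submission_status(submitted_in_round, link)
  when :not_submitted then color.push('purple')
  when :invalid_link  then color.push('red')
  else color.push('green')
  end
end

# Encapsulates the status logic extracted from the original conditionals.
def submission_status(submitted_in_round, link)
  return :not_submitted unless submitted_in_round
  return :invalid_link if link.nil? || invalid_link_format?(link)

  :valid_submission
end

# Extracted link-format check (prefix taken from the test plan).
def invalid_link_format?(link)
  !link.start_with?('https://wiki')
end
```

Because each branch of the case maps one status symbol to one action, the top-level method contributes almost nothing to the complexity score; all the decision logic lives in `submission_status`.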

Refactor the `sort_reviewer_by_review_volume_desc` method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/2c853789b1dbf20c97907effbd9ac7593da1c4a7)
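The commit is not described in detail here; based on the behavior listed in the test plan (compute each reviewer's average review volume, then sort in descending order), a minimal sketch might look like this, with the `Reviewer` struct as a stand-in for the app's reviewer objects:

```ruby
# Hypothetical stand-in for a reviewer with per-round average volumes.
Reviewer = Struct.new(:name, :avg_vol_per_round) do
  # Overall average volume of review comments across all rounds.
  def overall_avg_volume
    return 0 if avg_vol_per_round.empty?

    avg_vol_per_round.sum.to_f / avg_vol_per_round.size
  end
end

# Sorts reviewers by overall average review volume, largest first.
def sort_reviewer_by_review_volume_desc(reviewers)
  reviewers.sort_by { |reviewer| -reviewer.overall_avg_volume }
end
```

Moving the per-reviewer averaging into its own method leaves the sort itself as a single expression, which is the same Extract Method strategy used elsewhere in this refactoring.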

Refactor the `get_team_color` method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/5f80916bb97cba9e54771c6799d6a71c854c8dda)

Refactor the `check_submission_state` method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/024533131a4ba11719449419a768122bbddbe497)

Refactor the `display_tagging_interval_chart` method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/bc83786f46e1f2e734563e9db82d820d6e086b12)

Test Plan

We intend to expand Project 4's test coverage by introducing more tests.

Throughout our Test-Driven Development (TDD) efforts, our group systematically used the test skeletons constructed by Vyshnavi Adusumelli and Mustafa Olmez. These skeletons served as blueprints outlining the fundamental tests required for our unit. By incorporating them into our workflow, we thoroughly tested the helper module, examining its behavior in detail and ensuring that its actions were carefully reviewed and verified. Building on these test skeletons improved the overall quality and reliability of the codebase while providing a solid foundation for our unit tests.

Test Cases

1. Test `review_metrics` method:
   - Test case: when given a round and team_id
   - Test case: sets max, min, and avg to '-----' as default values
   - Test case: sets max, min, and avg to the corresponding values from avg_and_ranges if present

2. Test `check_submission_state` method:
   - Test case: when the submission is within the round
   - Test case: when the submission is not within the round
   - Test case: when the link is not provided or does not start with 'https://wiki'
   - Test case: when the link is provided and starts with 'https://wiki'
   - Test case: when the link has been updated since the last round
   - Test case: when the link has not been updated since the last round

3. Test `display_volume_metric_chart` method:
   - Test case: when given a reviewer
   - Test case: initializes chart elements
   - Test case: creates the data for the volume metric chart
   - Test case: creates the options for the volume metric chart
   - Test case: displays the volume metric chart

4. Test `sort_reviewer_by_review_volume_desc` method:
   - Test case: when there are reviewers and review volumes available
   - Test case: calculates the volume of review comments for each reviewer
   - Test case: sets the overall average volume of review comments for each reviewer
   - Test case: sets the average volume of review comments per round for each reviewer
   - Test case: sorts the reviewers by their review volume in descending order
   - Test case: gets the number of review rounds for the assignment
   - Test case: sets the average volume of review comments per round for all reviewers

5. Test `get_certain_review_and_feedback_response_map` method:
   - Test case: when author has feedback response maps
   - Test case: when author does not have feedback response maps
   - Test case: when review response maps exist for the given reviewed object and reviewee
   - Test case: when review response maps do not exist for the given reviewed object and reviewee
   - Test case: when review responses exist for the given review response map ids
   - Test case: when review responses do not exist for the given review response map ids
   - Test case: when review responses exist
   - Test case: when review responses do not exist

Design Pattern

During the code refactoring process, various design patterns were leveraged to enhance readability and maintainability. The commonly applied patterns include:

1. Extract Method: Identifying lengthy and intricate methods and extracting segments of functionality into separate methods.
2. Refactoring Conditionals: Encapsulating the logic within conditional statements into distinct methods to streamline the code's flow.

These design patterns helped in making the code more comprehensible and easier to maintain.
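As a small illustration of the Extract Method pattern (example code only, not from the Expertiza codebase), a method that mixes computation and formatting can be split into focused helpers:

```ruby
# Before: one method mixes computation and presentation concerns.
def report_before(scores)
  avg = scores.sum.to_f / scores.size
  "Average: #{avg.round(2)}%"
end

# After: each concern extracted into its own small, testable method.
def average(scores)
  scores.sum.to_f / scores.size
end

def format_average(avg)
  "Average: #{avg.round(2)}%"
end

def report_after(scores)
  format_average(average(scores))
end
```

The behavior is unchanged, but each extracted method can now be named, reused, and tested on its own, which is the effect the refactorings above aim for.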

Relevant Links

Team

Mentor

  • Ananya Mantravadi (amantra)

Team Members

  • Sahil Changlani (schangl)
  • Rushil Vegada (rvegada)
  • Romil Shah (rmshah3)