CSC/ECE 517 Spring 2024 - E2405 Refactor review_mapping_helper.rb

This wiki page describes changes made under the E2405 OODD assignment for Spring 2024, CSC/ECE 517.

__TOC__

== Expertiza Background ==
Expertiza is an open-source online application developed using the Ruby on Rails framework. It is maintained by the staff and students at NC State University. This application provides instructors with comprehensive control over managing tasks and assignments in their courses. Expertiza offers a wide range of powerful features, including peer review management, group formation, and subject addition capabilities. It is a versatile platform that can handle various types of assignments. For more detailed information about the extensive features offered by Expertiza, users can refer to the Expertiza wiki.
 


== About Helper ==
The '''review_mapping_helper''' module in Ruby on Rails provides a set of helper methods that support the peer review process for an assignment: it maps reviewers to the work they review, covering situations such as peer reviews and self-evaluations. The module includes functionality for generating review reports, managing submission statuses, calculating review scores, and visualizing review metrics, but it required refactoring to improve code maintainability and readability.
 


== Functionality of review_mapping_helper ==
The review_mapping_helper.rb file is a Ruby module that contains various helper methods related to the review mapping process in a peer review system. Here is a brief overview of the main functionalities provided by this module:
1. Generating review reports with data such as reviewer IDs, reviewed object IDs, and response types. <br/>
2. Determining team colors based on review status and assignment submission status. <br/>
3. Checking submission states within each round and assigning team colors accordingly. <br/>
4. Retrieving and displaying submitted hyperlinks and files for review.<br/>
5. Calculating awarded review scores and determining minimum, maximum, and average grade values.<br/>
6. Sorting reviewers based on the average volume of reviews in each round.<br/>
7. Generating and displaying charts for review volume metrics and tagging time intervals.<br/>
8. Retrieving and setting up review and feedback responses for feedback reports.<br/>
9. Determining CSS styles for the calibration report based on the difference between student and instructor answers.<br/>
10. Defining and calculating review strategies for students and teams, including reviews per team, reviews needed, and reviews per student.<br/>


== Problem Statement ==
The review_mapping_helper is challenging for developers to understand and use effectively because of its length, complexity, and lack of comments. The module should go through a thorough restructuring process that divides complex procedures into smaller, easier-to-manage parts. The refactoring effort should also focus on:

1. Addressing cases of code duplication <br/>
2. Combining redundant code segments into reusable functions or utility methods <br/>
3. Improving naming conventions for methods and variables <br/>
4. Making any further code changes needed for better readability and maintainability <br/>


== Tasks ==
- Refactor the file to reduce the overall lines of code to within the allowed limit of 250 lines.<br/>
- Refactor the <code>`display_volume_metric_chart`</code> method to reduce its lines of code to within the allowed limit of 25 lines.<br/>
- Refactor the <code>`display_tagging_interval_chart`</code> method to reduce its lines of code to within the allowed limit of 25 lines.<br/>
- Refactor the <code>`check_submission_state`</code> method to reduce its Cognitive Complexity to within the allowed limit of 5.<br/>
- Refactor the <code>`sort_reviewer_by_review_volume_desc`</code> method to reduce its Cognitive Complexity to within the allowed limit of 5.<br/>
- Refactor the <code>`review_metrics`</code> method to reduce its Cognitive Complexity to within the allowed limit of 5.<br/>
- Refactor the <code>`get_team_color`</code> method to reduce its Cognitive Complexity to within the allowed limit of 5.<br/>
- Reduce the number of arguments for the <code>`check_submission_state`</code> method from 5 to 4.<br/>


=== Phase 1 ===
For Phase 1 of the project we focused on the issues below: <br/>
- Refactor the <code>`display_volume_metric_chart`</code> method.<br/>
- Refactor the <code>`review_metrics`</code> method.<br/>
- Comment the code.<br/>
- Fix Code Climate issues.<br/>


=== Phase 2 ===
For Phase 2 of the project we plan to work on the issues below: <br/>
- Refactor the <code>`display_tagging_interval_chart`</code> method.<br/>
- Refactor the <code>`check_submission_state`</code> method.<br/>
- Refactor the <code>`sort_reviewer_by_review_volume_desc`</code> method.<br/>
- Refactor the <code>`get_team_color`</code> method.<br/>
- Reduce the number of arguments for the <code>`check_submission_state`</code> method.<br/>
- Increase the test coverage.<br/>
- Increase code readability.<br/>


== Implementation ==

=== Phase 1 ===


==== Refactor the <code>`display_volume_metric_chart`</code> method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/8980cb03531543563a9ac023c6a5a801e3ecc709) ====
The <code>`display_volume_metric_chart`</code> method was refactored to bring it within the allowed limit of 25 lines. The changes made are as follows:

- The method now focuses solely on preparing the data and options for the chart and rendering the chart using the <code>`bar_chart`</code> method.<br/>
- The logic for preparing the chart data has been extracted into a separate method called <code>`prepare_chart_data`</code>. This method takes the labels, reviewer_data, and all_reviewers_data as parameters and returns a hash containing the formatted data required for the chart.<br/>
- Similarly, the logic for preparing the chart options has been moved to a separate method called <code>`prepare_chart_options`</code>. This method returns a hash containing the configuration options for the chart, such as legend settings, width, height, and axis properties.<br/>
- The <code>`prepare_chart_data`</code> method constructs the hash structure required by the charting library, including the labels and datasets. It sets the label, background color, border width, data, and yAxisID for each dataset.<br/>
- The <code>`prepare_chart_options`</code> method defines the options for the chart, such as the legend position and style, chart dimensions, and axis configurations. It specifies the stacking, thickness, and other properties for the y-axes and x-axis.<br/>
- By extracting the data preparation and options configuration into separate methods, the <code>`display_volume_metric_chart`</code> method becomes more concise and focused on its main responsibility of displaying the chart.<br/>
- Descriptive comments were added to each method to explain its purpose and functionality clearly.<br/>
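The extraction described above can be sketched as follows. This is a minimal, standalone illustration: the dataset labels, colors, and option values are placeholders rather than the production ones, and the chart call is stubbed with a block since the real helper renders through the charting library.

```ruby
# Sketch of the display_volume_metric_chart extraction (illustrative values).
module VolumeMetricChart
  module_function

  # Builds the hash the charting library expects: labels plus one dataset
  # for this reviewer and one for the average across all reviewers.
  def prepare_chart_data(labels, reviewer_data, all_reviewers_data)
    {
      labels: labels,
      datasets: [
        { label: 'Reviewer volume', backgroundColor: 'rgba(33,150,243,0.8)',
          borderWidth: 1, data: reviewer_data, yAxisID: 'bar-y-axis1' },
        { label: 'Avg. all reviewers', backgroundColor: 'rgba(255,99,132,0.8)',
          borderWidth: 1, data: all_reviewers_data, yAxisID: 'bar-y-axis2' }
      ]
    }
  end

  # Static configuration: legend, chart size, and axis stacking.
  def prepare_chart_options
    {
      legend: { position: 'top', labels: { usePointStyle: true } },
      width: '200', height: '125',
      scales: { yAxes: [{ stacked: true, barThickness: 10 }],
                xAxes: [{ stacked: false }] }
    }
  end

  # The helper itself now only wires data and options into the chart call.
  def display_volume_metric_chart(labels, reviewer_data, all_reviewers_data, &render)
    render.call(prepare_chart_data(labels, reviewer_data, all_reviewers_data),
                prepare_chart_options)
  end
end
```

With this split, each private method stays well under the 25-line limit and can be read (and tested) in isolation.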


==== Refactor the <code>`review_metrics`</code> method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/6afd790fdc8765a2e60641b7edff388692fd069f) ====
Several changes were made to reduce the Cognitive Complexity of the <code>`review_metrics`</code> method from 6 to 5:

- The array of metrics <code>%i[max min avg]</code> is assigned to a variable named metrics at the beginning of the method for better readability and reusability.<br/>
- The code for initializing the metrics with the default value '-----' has been extracted into a separate private method called <code>`initialize_metrics`</code>. This method iterates over the metrics array and sets the corresponding instance variables using string interpolation.<br/>
- The condition for checking whether the team data is available has been moved to a separate private method called <code>`team_data_available?`</code>. This method takes team_id, round, and metrics as parameters and returns a boolean indicating whether data is available for all metrics in the given round for the specified team.<br/>
- The code for updating the metrics based on the available data has been moved to a separate private method called <code>`update_metrics`</code>. This method iterates over the metrics array and updates the corresponding instance variables with the metric values fetched from the @avg_and_ranges hash.<br/>
- The logic for fetching a metric value has been extracted into a separate private method called <code>`fetch_metric_value`</code>. This method takes team_id, round, and metric as parameters and returns the formatted metric value. If the value is nil, it returns '-----'; otherwise, it rounds the value to 0 decimal places and appends a '%' symbol.<br/>
- The <code>`review_metrics`</code> method now has a more linear flow. It initializes the metrics using <code>`initialize_metrics`</code>, checks whether the team data is available using <code>`team_data_available?`</code>, and updates the metrics using <code>`update_metrics`</code> if the data is available. If the data is not available, the method returns early without updating the metrics.<br/>
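The flow above can be sketched as plain Ruby. To keep the snippet runnable outside Rails, the view-facing instance variables are collected in a `values` hash on a small class, and `@avg_and_ranges` is passed in directly; method names follow the description above, but the class wrapper is an assumption of this sketch.

```ruby
# Minimal sketch of the refactored review_metrics flow.
class ReviewMetrics
  METRICS = %i[max min avg].freeze

  def initialize(avg_and_ranges)
    @avg_and_ranges = avg_and_ranges
    @values = {}
  end
  attr_reader :values

  # Linear flow: set defaults, bail out early if no data, else update.
  def review_metrics(round, team_id)
    initialize_metrics(round)
    return unless team_data_available?(team_id, round)
    update_metrics(team_id, round)
  end

  private

  # Default every metric to the '-----' placeholder for this round.
  def initialize_metrics(round)
    METRICS.each { |m| @values["#{m}_#{round}"] = '-----' }
  end

  # Data counts as available only when every metric is present for the round.
  def team_data_available?(team_id, round)
    team = @avg_and_ranges[team_id]
    team && team[round] && METRICS.all? { |m| team[round].key?(m) }
  end

  def update_metrics(team_id, round)
    METRICS.each { |m| @values["#{m}_#{round}"] = fetch_metric_value(team_id, round, m) }
  end

  # Round to a whole number and append '%'; nil falls back to the placeholder.
  def fetch_metric_value(team_id, round, metric)
    value = @avg_and_ranges.dig(team_id, round, metric)
    value.nil? ? '-----' : "#{value.round(0)}%"
  end
end
```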


=== Phase 2 ===

==== Refactor the <code>`check_submission_state`</code> method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/c53a1b0db0c959f02d5fdf56d7f8490f73598333) ====


The main changes made to reduce the Cognitive Complexity of the <code>`check_submission_state`</code> method are:

- The if-else conditional statements in the original code have been replaced with a case statement. The case statement uses the return value of the <code>`submission_status`</code> method to determine the appropriate action.<br/>
- The logic for determining the submission status has been extracted into a separate method called <code>`submission_status`</code>. This method encapsulates the checks for whether a submission was made within the round, whether a link was submitted, and whether the link format is invalid.<br/>
- The code for retrieving the submitted hyperlink has been moved into a separate method called <code>`submission_link`</code>. This method is called when needed instead of being inlined in the <code>`check_submission_state`</code> method.<br/>
- The condition for checking the link format's validity has been moved into a separate method called <code>`invalid_link_format?`</code>. This method is called within the <code>`submission_status`</code> method to determine whether the link format is invalid.<br/>
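The shape of this refactor can be sketched as below. The snippet is simplified and hypothetical: a submission is modeled as a plain hash instead of the real records, and the returned color strings are placeholders, but it shows a single-classification `submission_status` feeding a flat case statement.

```ruby
# Hypothetical, simplified sketch of the check_submission_state refactor.
module SubmissionState
  module_function

  # Classify the submission once; callers branch on the returned symbol.
  def submission_status(submission, round)
    return :not_submitted unless submission && submission[:round] == round
    link = submission[:link]
    return :no_link if link.nil? || link.empty?
    return :invalid_link if invalid_link_format?(link)
    :submitted
  end

  # Link-format validity check, extracted into its own predicate.
  def invalid_link_format?(link)
    !link.start_with?('http://', 'https://')
  end

  # Hyperlink retrieval, no longer inlined in check_submission_state.
  def submission_link(submission)
    submission[:link]
  end

  # The refactored check: a flat case replaces the nested if/else chain.
  def check_submission_state(submission, round)
    case submission_status(submission, round)
    when :not_submitted then 'red'
    when :no_link, :invalid_link then 'yellow'
    else 'green'
    end
  end
end
```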
 
==== Refactor the <code>`sort_reviewer_by_review_volume_desc`</code> method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/2c853789b1dbf20c97907effbd9ac7593da1c4a7) ====


In the refactored code, the <code>`sort_reviewer_by_review_volume_desc`</code> method has been split into smaller, more focused methods to reduce its Cognitive Complexity:


- <code>`calculate_review_volumes`</code>: Calculates the review volumes for each reviewer.<br/>
- <code>`calculate_overall_averages`</code>: Calculates the overall average review volume across all reviewers.<br/>
- <code>`calculate_round_averages`</code>: Calculates the average review volume for each round across all reviewers.<br/>
- <code>`sort_reviewers_by_overall_average`</code>: Sorts the reviewers in descending order based on their overall average review volume.<br/>
==== Refactor the <code>`get_team_color`</code> method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/5f80916bb97cba9e54771c6799d6a71c854c8dda) ====
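The body of this subsection is not preserved in the page history above, so the snippet below is only a generic, hypothetical illustration of how such a complexity reduction typically looks: replacing a nested conditional chain with early-returning guard clauses. The attribute names and colors are placeholders, not the helper's real logic.

```ruby
# Hypothetical illustration: guard clauses instead of nested conditionals.
def get_team_color(response_map)
  return 'brown' if response_map[:reviewed_by_me]  # reviewer already responded
  return 'blue'  if response_map[:response_exists] # some response exists
  return 'green' if response_map[:submitted]       # work submitted, no response
  'red'                                            # nothing submitted
end
```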
==== Refactor the <code>`check_submission_state`</code> method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/024533131a4ba11719449419a768122bbddbe497) ====
In the refactored code, the changes were made to limit the number of arguments passed to the check_submission_state method to a maximum of 4. This was achieved by grouping related arguments into a single hash argument.
Here are the specific changes:

- In the <code>`obtain_team_color`</code> method, instead of passing round and color as separate arguments to <code>`check_submission_state`</code>, a hash round_info is created with keys :round and :color. This hash is then passed as a single argument to <code>`check_submission_state`</code>.<br/>
- In the <code>`check_submission_state`</code> method, the method signature has been updated to accept the round_info hash as a single argument instead of separate round and color arguments.<br/>
- Inside the <code>`check_submission_state`</code> method, the round and color variables are extracted from the round_info hash using the <code>`values_at`</code> method. This allows the method to access the values of :round and :color from the hash.<br/>
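The argument-grouping change can be sketched as follows. The first three parameters are placeholders standing in for the method's other arguments; the point is that `round` and `color` now travel together in one `round_info` hash and are unpacked with `values_at`.

```ruby
# Sketch of the reduced-arity signature: 4 arguments instead of 5.
def check_submission_state(assignment, reviewer, team, round_info)
  round, color = round_info.values_at(:round, :color)
  # ... the original body then uses round and color exactly as before;
  # here we just return what we received so the wiring is visible.
  { assignment: assignment, reviewer: reviewer, team: team,
    round: round, color: color }
end

# Caller side (as in obtain_team_color): build the hash, pass it as one arg.
result = check_submission_state('a1', 'r1', 't1', { round: 2, color: 'green' })
```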


==== Refactor the <code>`display_tagging_interval_chart`</code> method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/bc83786f46e1f2e734563e9db82d820d6e086b12) ====

The refactored code makes several changes to simplify and improve the readability of the <code>`display_tagging_interval_chart`</code> method. Here are the main changes:

- The unless block that checks whether intervals is empty has been removed. Instead, the method now uses an early return statement to exit if intervals is empty. This simplifies the code and reduces nesting.<br/>
- The interval mean calculation has been simplified by using the sum method instead of reduce(:+). This makes the code more concise and easier to understand.<br/>
- The labels array in the data hash is now created using (1..intervals.length).to_a instead of [*1..intervals.length]. This achieves the same result with more idiomatic Ruby syntax.<br/>
- The conditional block that checks whether intervals is empty within the datasets array has been removed. The refactored code always includes the "Mean time spent" dataset. This simplifies the code and avoids conditional logic within the datasets array.<br/>
- The options hash has been reformatted to improve readability. The key-value pairs are now aligned vertically, making it easier to understand the structure of the options.<br/>
- The line_chart method call has been moved outside the method definition to make it clear that it is a separate step in the chart generation process.<br/>
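The changes above can be sketched in one runnable snippet. The chart rendering is stubbed with an optional block (the real helper calls the charting library's `line_chart`), and the option values and second dataset label are illustrative; "Mean time spent" is the dataset named in the text.

```ruby
# Sketch of the simplified display_tagging_interval_chart control flow.
def display_tagging_interval_chart(intervals)
  return if intervals.empty?                     # early return replaces the unless block

  interval_mean = intervals.sum / intervals.length.to_f  # sum instead of reduce(:+)
  data = {
    labels: (1..intervals.length).to_a,          # idiomatic range-to-array
    datasets: [
      { label: 'Mean time spent', data: Array.new(intervals.length, interval_mean) },
      { label: 'Time spent',      data: intervals }
    ]
  }
  options = {
    width:  '200',
    height: '125',
    scales: { yAxes: [{ stacked: false }], xAxes: [{ stacked: false }] }
  }
  yield(data, options) if block_given?           # stand-in for the line_chart call
  data
end
```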
== Test Plan ==
We intend to expand the test coverage in Project 4 by introducing more tests.


Throughout our Test-Driven Development (TDD) efforts, our group systematically used the test skeletons constructed by Vyshnavi Adusumelli and Mustafa Olmez. These skeletons serve as blueprints that outline the fundamental tests required for our unit. By incorporating them into our workflow, we can test the helper module thoroughly, investigate the workings of its methods in detail, and make sure its behavior is carefully reviewed and verified. Adding these test skeletons improves the general quality and reliability of the codebase while also providing a solid foundation for our unit tests.


'''Test Plan'''


1. Test <code>`review_metrics`</code> method:<br/>
    - Test case: when given a round and team_id<br/>
      - Test case: sets max, min, and avg to '-----' as default values<br/>
      - Test case: sets max, min, and avg to the corresponding values from avg_and_ranges if present<br/>


2. Test <code>`check_submission_state`</code> method:<br/>
  - Test case: when the submission is within the round<br/>
  - Test case: when the submission is not within the round<br/>
    - Test case: when the link is not provided or does not start with 'https://wiki'<br/>
    - Test case: when the link is provided and starts with 'https://wiki'<br/>
      - Test case: when the link has been updated since the last round<br/>
      - Test case: when the link has not been updated since the last round<br/>


3. Test <code>`display_volume_metric_chart`</code> method:<br/>
    - Test case: when given a reviewer<br/>
      - Test case: initializes chart elements<br/>
      - Test case: creates the data for the volume metric chart<br/>
      - Test case: creates the options for the volume metric chart<br/>
      - Test case: displays the volume metric chart<br/>
4. Test <code>`sort_reviewer_by_review_volume_desc`</code> method:<br/>
    - Test case: when there are reviewers and review volumes available<br/>
      - Test case: calculates the volume of review comments for each reviewer<br/>
      - Test case: sets the overall average volume of review comments for each reviewer<br/>
      - Test case: sets the average volume of review comments per round for each reviewer<br/>
      - Test case: sorts the reviewers by their review volume in descending order<br/>
      - Test case: gets the number of review rounds for the assignment<br/>
      - Test case: sets the average volume of review comments per round for all reviewers<br/>


5. Test <code>`get_certain_review_and_feedback_response_map`</code> method:<br/>
    - Test case: when author has feedback response maps<br/>
    - Test case: when author does not have feedback response maps<br/>
    - Test case: when review response maps exist for the given reviewed object and reviewee<br/>
    - Test case: when review response maps do not exist for the given reviewed object and reviewee<br/>
    - Test case: when review responses exist for the given review response map ids<br/>
    - Test case: when review responses do not exist for the given review response map ids<br/>
    - Test case: when review responses exist<br/>
    - Test case: when review responses do not exist<br/>
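The first group of test cases above can be sketched in plain Ruby (the real suite uses RSpec). The helper is stubbed here as a pure function taking the same `avg_and_ranges` shape the helper reads, so the two cases — defaults when no data is present, formatted values when it is — can be checked standalone; the stub's name and signature are assumptions of this sketch.

```ruby
# Plain-Ruby sketch of test case 1 for review_metrics.
def review_metrics_values(avg_and_ranges, round, team_id)
  defaults = { max: '-----', min: '-----', avg: '-----' }
  data = avg_and_ranges.dig(team_id, round)
  return defaults unless data && %i[max min avg].all? { |m| data.key?(m) }
  data.transform_values { |v| "#{v.round(0)}%" }   # same '%'-formatting as the helper
end

# Case: sets max, min, and avg to '-----' as default values.
empty = review_metrics_values({}, 1, 42)

# Case: sets max, min, and avg from avg_and_ranges when present.
full = review_metrics_values({ 42 => { 1 => { max: 80.0, min: 50.0, avg: 65.0 } } }, 1, 42)
```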


== Design Pattern ==
During the code refactoring process, various design patterns were leveraged to enhance readability and maintainability. The commonly applied patterns include:

1. Extract Method: Identifying lengthy and intricate methods and extracting segments of their functionality into separate, dedicated methods.<br/>
2. Refactoring Conditionals: Encapsulating the logic within conditional statements into distinct methods to streamline the code's flow and clarify the purpose of each branch.<br/>


These design patterns helped in making the code more comprehensible and easier to maintain.


== Relevant Links ==
* '''Github Repository:''' https://github.com/sahilchanglani/expertiza
* '''Pull Request:''' https://github.com/expertiza/expertiza/pull/2764
* '''Youtube Video:''' https://youtu.be/xyase3nuYxc


Latest revision as of 03:48, 24 April 2024

This wiki page describes changes made under the E2405 OODD assignment for Spring 2024, CSC/ECE 517.

Expertiza Background

Expertiza is an open-source online application developed using Ruby on Rails framework. It is maintained by the staff and students at NC State University. This application provides instructors with comprehensive control over managing tasks and assignments in their courses. Expertiza offers a wide range of powerful features, including peer review management, group formation, and subject addition capabilities. It is a versatile platform that can handle various types of assignments. For more detailed information about the extensive features offered by Expertiza, users can refer to the Expertiza wiki.

About Helper

The review_mapping_helper module in Ruby on Rails provides a set of helper methods to facilitate the peer review process in an assignment. It includes functionality for generating review reports, managing submission statuses, calculating review scores, and visualizing review metrics, but requires refactoring to improve code maintainability and readability.

Functionality of review_mapping_controller

The review_mapping_helper.rb file is a Ruby module that contains various helper methods related to the review mapping process in a peer review system. Here's a brief overview of the main functionalities provided by this module:

1. Generating review reports with data such as reviewer IDs, reviewed object IDs, and response types.
2. Determining team colors based on review status and assignment submission status.
3. Checking submission states within each round and assigning team colors accordingly.
4. Retrieving and displaying submitted hyperlinks and files for review.
5. Calculating awarded review scores and determining minimum, maximum, and average grade values.
6. Sorting reviewers based on the average volume of reviews in each round.
7. Generating and displaying charts for review volume metrics and tagging time intervals.
8. Retrieving and setting up review and feedback responses for feedback reports.
9. Determining CSS styles for the calibration report based on the difference between student and instructor answers.
10. Defining and calculating review strategies for students and teams, including reviews per team, reviews needed, and reviews per student.
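The review-strategy quantities in item 10 reduce to simple arithmetic. Below is a hypothetical plain-Ruby sketch; the method names, parameters, and ceiling rounding are illustrative assumptions rather than the exact Expertiza implementation.

```ruby
# Hypothetical review-strategy arithmetic (not the exact Expertiza code).

# reviews_needed: total number of reviews the assignment requires.
def reviews_needed(num_participants, reviews_per_student)
  num_participants * reviews_per_student
end

# reviews_per_team: the required reviews spread across all teams, rounded
# up with integer ceiling division so no team is left short.
def reviews_per_team(num_participants, reviews_per_student, num_teams)
  (reviews_needed(num_participants, reviews_per_student) + num_teams - 1) / num_teams
end
```

For example, 20 participants each writing 3 reviews across 8 teams gives 60 required reviews, i.e. 8 reviews per team after rounding up.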

Problem Statement

The review_mapping_helper is challenging for developers to understand and utilize effectively due to its length, complexity, and lack of comments. The helper should go through a thorough restructuring process that divides complex procedures into smaller, easier-to-manage parts. The refactoring effort should also focus on:

1. Addressing cases of code duplication
2. Combining redundant code segments into reusable functions or utility methods
3. Improving naming conventions for methods and variables
4. Tackling necessary code changes for better readability and maintainability

Tasks

- Refactor the file to reduce the overall lines of code to be within the allowed limit of 250 lines.
- Refactor the `display_volume_metric_chart` method to reduce its lines of code to be within the allowed limit of 25 lines.
- Refactor the `display_tagging_interval_chart` method to reduce its lines of code to be within the allowed limit of 25 lines.
- Refactor the `check_submission_state` method to reduce its Cognitive Complexity to be within the allowed limit of 5 and reduce the number of arguments to be within the allowed limit of 4.
- Refactor the `sort_reviewer_by_review_volume_desc` method to reduce its Cognitive Complexity to be within the allowed limit of 5.
- Refactor the `review_metrics` method to reduce its Cognitive Complexity to be within the allowed limit of 5.
- Refactor the `get_team_color` method to reduce its Cognitive Complexity to be within the allowed limit of 5.
- Reduce the number of arguments for the `check_submission_state` method from 5 to 4.

Phase 1

For Phase 1 of the project, we focused on the following issues:
- Refactor the `display_volume_metric_chart` method.
- Refactor the `review_metrics` method.
- Comment the code.
- Fix Code Climate issues.

Phase 2

For Phase 2 of the project, we plan to work on the following issues:
- Refactor the `display_tagging_interval_chart` method.
- Refactor the `check_submission_state` method.
- Refactor the `sort_reviewer_by_review_volume_desc` method.
- Refactor the `get_team_color` method.
- Reduce the number of arguments for the `check_submission_state` method.
- Increase the test coverage.
- Increase code readability.

Implementation

Phase 1

Refactor the `display_volume_metric_chart` method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/8980cb03531543563a9ac023c6a5a801e3ecc709)

The `display_volume_metric_chart` method has been modified to address the issue of reducing its length to 25 lines. The changes made are as follows:

- The method now focuses solely on preparing the data and options for the chart and rendering the chart using the `bar_chart` method.
- The logic for preparing the chart data has been extracted into a separate method called `prepare_chart_data`. This method takes the labels, reviewer_data, and all_reviewers_data as parameters and returns a hash containing the formatted data required for the chart.
- Similarly, the logic for preparing the chart options has been moved to a separate method called `prepare_chart_options`. This method returns a hash containing the configuration options for the chart, such as legend settings, width, height, and axis properties.
- By extracting the data preparation and options configuration into separate methods, the `display_volume_metric_chart` method becomes more concise and focused on its main responsibility of displaying the chart.
- The `prepare_chart_data` method constructs the hash structure required by the charting library, including the labels and datasets. It sets the label, background color, border width, data, and yAxisID for each dataset.
- The `prepare_chart_options` method defines the options for the chart, such as the legend position and style, chart dimensions, and axis configurations. It specifies the stacking, thickness, and other properties for the y-axes and x-axis.
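Under the assumptions above, the extraction can be sketched in plain Ruby. The `prepare_chart_data` and `prepare_chart_options` names mirror the refactor, but the dataset and option hash keys below are guesses at what a Chart.js-style charting helper expects; this is not the actual Expertiza code.

```ruby
# Sketch of the extracted helpers (hash keys and styling are assumptions).
def prepare_chart_data(labels, reviewer_data, all_reviewers_data)
  {
    labels: labels,
    datasets: [
      { label: 'vol.', data: reviewer_data, borderWidth: 1, yAxisID: 'bar-y-axis1' },
      { label: 'avg. vol.', data: all_reviewers_data, borderWidth: 1, yAxisID: 'bar-y-axis2' }
    ]
  }
end

def prepare_chart_options
  {
    legend: { position: 'top', labels: { usePointStyle: true } },
    width: '200',
    height: '125',
    scales: {
      yAxes: [{ stacked: true, barThickness: 10 }],
      xAxes: [{ stacked: false }]
    }
  }
end

# The display method now only wires the two together.
def display_volume_metric_chart(labels, reviewer_data, all_reviewers_data)
  data    = prepare_chart_data(labels, reviewer_data, all_reviewers_data)
  options = prepare_chart_options
  [data, options] # in the helper this would feed `bar_chart data, options`
end
```

The point of the split is that `display_volume_metric_chart` reads as two declarative steps, while the hash-building details live in the two helpers.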

Refactor the `review_metrics` method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/6afd790fdc8765a2e60641b7edff388692fd069f)

Several changes have been made to reduce the cognitive complexity of the `review_metrics` method from 6 to 5. Let's go through the changes:

- The array of metrics %i[max min avg] is assigned to a variable metrics at the beginning of the method for better readability and reusability.
- The code for initializing the metrics with the default value '-----' has been extracted into a separate private method called `initialize_metrics`. This method iterates over the metrics array and sets the corresponding instance variables using string interpolation.
- The condition for checking if the team data is available has been moved to a separate private method called `team_data_available?`. This method takes team_id, round, and metrics as parameters and returns a boolean value indicating whether the data is available for all metrics in the given round for the specified team.
- The code for updating the metrics based on the available data has been moved to a separate private method called `update_metrics`. This method iterates over the metrics array and updates the corresponding instance variables with the metric values fetched from the @avg_and_ranges hash.
- The logic for fetching the metric value has been extracted into a separate private method called `fetch_metric_value`. This method takes team_id, round, and metric as parameters and returns the formatted metric value. If the value is nil, it returns '-----'; otherwise, it rounds the value to 0 decimal places and appends a '%' symbol.
- The `review_metrics` method now has a more linear flow. It initializes the metrics using `initialize_metrics`, checks if the team data is available using `team_data_available?`, and updates the metrics using `update_metrics` if the data is available. If the data is not available, the method returns early without updating the metrics.
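A hedged sketch of the resulting flow is shown below. To keep the example runnable outside Rails, the `@avg_and_ranges` instance variable is modeled as a plain nested hash argument; the exact data shape in Expertiza is an assumption.

```ruby
# Sketch of the refactored review_metrics flow (data shape is an assumption).
METRICS = %i[max min avg].freeze

# fetch_metric_value: '-----' when missing, otherwise rounded percentage.
def fetch_metric_value(avg_and_ranges, team_id, round, metric)
  value = avg_and_ranges.dig(team_id, round, metric)
  value.nil? ? '-----' : "#{value.round(0)}%"
end

# team_data_available?: true only if every metric is present for the round.
def team_data_available?(avg_and_ranges, team_id, round, metrics)
  data = avg_and_ranges.dig(team_id, round)
  data.is_a?(Hash) && metrics.all? { |m| data.key?(m) }
end

def review_metrics(avg_and_ranges, team_id, round)
  # initialize_metrics: default every metric to '-----'
  results = METRICS.to_h { |m| [m, '-----'] }
  return results unless team_data_available?(avg_and_ranges, team_id, round, METRICS)
  # update_metrics: overwrite defaults with the fetched values
  METRICS.each { |m| results[m] = fetch_metric_value(avg_and_ranges, team_id, round, m) }
  results
end
```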

Phase 2

Refactor the `check_submission_state` method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/c53a1b0db0c959f02d5fdf56d7f8490f73598333)

The main changes made to reduce the cognitive complexity of the `check_submission_state` method are:

- The if-else conditional statements in the original code have been replaced with a case statement in the refactored code. The case statement uses the return value of the `submission_status` method to determine the appropriate action.

- The logic for determining the submission status has been extracted into a separate method called `submission_status`. This method encapsulates the logic for checking if a submission was made within the round, if a link was submitted, and if the link format is invalid.

- The code for retrieving the submitted hyperlink has been moved into a separate method called `submission_link`. This method is called when needed instead of being inlined in the check_submission_state method.

- The condition for checking the link format validity has been moved into a separate method called `invalid_link_format?`. This method is called within the `submission_status` method to determine if the link format is invalid.
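A simplified, self-contained sketch of this dispatch structure is shown below. The status symbols and the returned colors are illustrative assumptions; only the shape (a case statement over `submission_status` plus small predicate helpers) mirrors the refactor.

```ruby
# Sketch of the case-based dispatch (symbols and colors are assumptions).
def invalid_link_format?(link)
  link.nil? || !link.start_with?('https://wiki')
end

# submission_status: encapsulates the checks formerly done inline.
def submission_status(submitted_within_round, link)
  return :no_submission unless submitted_within_round
  return :invalid_link if invalid_link_format?(link)
  :valid_link
end

# check_submission_state: dispatches on the status instead of nesting ifs.
def check_submission_state(submitted_within_round, link)
  case submission_status(submitted_within_round, link)
  when :no_submission then 'red'
  when :invalid_link  then 'brown'
  when :valid_link    then 'green'
  end
end
```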

Refactor the `sort_reviewer_by_review_volume_desc` method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/2c853789b1dbf20c97907effbd9ac7593da1c4a7)

In the refactored code, several changes were made to reduce the cognitive complexity of the sort_reviewer_by_review_volume_desc method:

The method has been split into smaller, more focused methods:

- `calculate_review_volumes`: Calculates the review volumes for each reviewer.

- `calculate_overall_averages`: Calculates the overall average review volume across all reviewers.

- `calculate_round_averages`: Calculates the average review volume for each round across all reviewers.

- `sort_reviewers_by_overall_average`: Sorts the reviewers in descending order based on their overall average review volume.
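The decomposition can be illustrated with plain Ruby operating on a simple `reviewer => [comment volume per round]` hash; the real methods work on Expertiza model objects, so the data shape here is an assumption.

```ruby
# Illustrative decomposition matching the four helper names above.

# Average volume per reviewer across all rounds.
def calculate_review_volumes(reviews_by_reviewer)
  reviews_by_reviewer.transform_values { |vols| vols.sum.to_f / vols.length }
end

# Overall average across all reviewers.
def calculate_overall_averages(volumes)
  volumes.values.sum / volumes.length
end

# Average volume per round across all reviewers.
def calculate_round_averages(reviews_by_reviewer)
  rounds = reviews_by_reviewer.values.first.length
  (0...rounds).map do |r|
    reviews_by_reviewer.values.sum { |vols| vols[r] }.to_f / reviews_by_reviewer.length
  end
end

# Reviewers sorted by their overall average volume, descending.
def sort_reviewers_by_overall_average(volumes)
  volumes.sort_by { |_reviewer, avg| -avg }.map(&:first)
end
```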

Refactor the `get_team_color` method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/5f80916bb97cba9e54771c6799d6a71c854c8dda)

Refactor the `check_submission_state` method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/024533131a4ba11719449419a768122bbddbe497)

In the refactored code, the changes were made to limit the number of arguments passed to the check_submission_state method to a maximum of 4. This was achieved by grouping related arguments into a single hash argument.

Here are the specific changes:

- In the `obtain_team_color` method, instead of passing round and color as separate arguments to `check_submission_state`, a hash round_info is created with keys :round and :color. This hash is then passed as a single argument to `check_submission_state`.

- In the `check_submission_state` method, the method signature has been updated to accept the round_info hash as a single argument instead of separate round and color arguments.

- Inside the `check_submission_state` method, the round and color variables are extracted from the round_info hash using the `values_at` method. This allows the method to access the values of :round and :color from the hash.
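A minimal sketch of the grouping, assuming placeholder names for the unrelated parameters (only the `round_info` handling mirrors the refactor):

```ruby
# Sketch: round and color travel in one hash, capping the arity at 4.
# The first three parameter names are placeholders, not the real signature.
def check_submission_state(assignment, team_id, due_dates, round_info)
  round, color = round_info.values_at(:round, :color)
  # ...the original body would continue to use `round` and `color` here...
  { round: round, color: color }
end

# Caller side (obtain_team_color) bundles the two values into one hash:
check_submission_state(nil, 42, [], { round: 2, color: 'green' })
```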

Refactor the `display_tagging_interval_chart` method. (Commit Link: https://github.com/sahilchanglani/expertiza/commit/bc83786f46e1f2e734563e9db82d820d6e086b12)

The refactored code makes several changes to simplify and improve the readability of the display_tagging_interval_chart method. Here are the main changes:

- The unless block that checks if intervals is empty has been removed. Instead, the method now uses an early return statement to exit the method if intervals is empty. This simplifies the code and reduces nesting.

- The interval mean calculation has been simplified using the sum method instead of reduce(:+). This makes the code more concise and easier to understand.

- The labels array in the data hash is now created using (1..intervals.length).to_a instead of [*1..intervals.length]. This achieves the same result but uses a more idiomatic Ruby syntax.

- The conditional block that checks if intervals is empty within the datasets array has been removed. The refactored code always includes the "Mean time spent" dataset, even if intervals is empty. This simplifies the code and avoids the need for conditional logic within the datasets array.

- The options hash has been reformatted to improve readability. The key-value pairs are now aligned vertically, making it easier to understand the structure of the options.

- The line_chart method call has been moved outside the method definition to make it clear that it is a separate step in the chart generation process.
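The first two simplifications can be sketched in isolation; `interval_mean` and `chart_labels` are hypothetical helper names used here purely for illustration.

```ruby
# Early return replaces the wrapping unless block; mean uses Array#sum.
def interval_mean(intervals)
  return nil if intervals.empty?
  intervals.sum.to_f / intervals.length
end

# (1..n).to_a is the more idiomatic spelling of [*1..n].
def chart_labels(intervals)
  (1..intervals.length).to_a
end
```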

Test Plan

We intend to expand Project 4's test coverage by introducing more tests.

Throughout our Test-Driven Development (TDD) efforts, our group systematically used the test skeletons constructed by Vyshnavi Adusumelli and Mustafa Olmez. These skeletons serve as blueprints outlining the fundamental tests required for each unit. By incorporating them into our workflow, we thoroughly exercise the helper module and make sure that its behavior is carefully reviewed and verified. These test skeletons improve the overall quality and reliability of the codebase while also providing a solid foundation for our unit tests.

1. Test `review_metrics` method:
   - Test case: when given a round and team_id
   - Test case: sets max, min, and avg to '-----' as default values
   - Test case: sets max, min, and avg to the corresponding values from avg_and_ranges if present

2. Test `check_submission_state` method:
   - Test case: when the submission is within the round
   - Test case: when the submission is not within the round
   - Test case: when the link is not provided or does not start with 'https://wiki'
   - Test case: when the link is provided and starts with 'https://wiki'
   - Test case: when the link has been updated since the last round
   - Test case: when the link has not been updated since the last round

3. Test `display_volume_metric_chart` method:
   - Test case: when given a reviewer
   - Test case: initializes chart elements
   - Test case: creates the data for the volume metric chart
   - Test case: creates the options for the volume metric chart
   - Test case: displays the volume metric chart

4. Test `sort_reviewer_by_review_volume_desc` method:
   - Test case: when there are reviewers and review volumes available
   - Test case: calculates the volume of review comments for each reviewer
   - Test case: sets the overall average volume of review comments for each reviewer
   - Test case: sets the average volume of review comments per round for each reviewer
   - Test case: sorts the reviewers by their review volume in descending order
   - Test case: gets the number of review rounds for the assignment
   - Test case: sets the average volume of review comments per round for all reviewers

5. Test `get_certain_review_and_feedback_response_map` method:
   - Test case: when author has feedback response maps
   - Test case: when author does not have feedback response maps
   - Test case: when review response maps exist for the given reviewed object and reviewee
   - Test case: when review response maps do not exist for the given reviewed object and reviewee
   - Test case: when review responses exist for the given review response map ids
   - Test case: when review responses do not exist for the given review response map ids
   - Test case: when review responses exist
   - Test case: when review responses do not exist

Design Pattern

During the code refactoring process, various design patterns were leveraged to enhance readability and maintainability. The commonly applied patterns include:

1. Extract Method: Identifying lengthy and intricate methods and extracting segments of functionality into separate methods.
2. Refactoring Conditionals: Encapsulating the logic within conditional statements into distinct methods to streamline the code's flow.

These design patterns helped in making the code more comprehensible and easier to maintain.


Team

Mentor

  • Ananya Mantravadi (amantra)

Team Members

  • Sahil Changlani (schangl)
  • Rushil Vegada (rvegada)
  • Romil Shah (rmshah3)