CSC/ECE 517 Spring 2024 - E2405 Refactor review_mapping_helper.rb

This wiki page describes changes made under the E2405 OODD assignment for Spring 2024, CSC/ECE 517.

__TOC__

== Expertiza Background ==
Expertiza is an open-source online application developed using the Ruby on Rails framework. It is maintained by the staff and students at NC State University. This application provides instructors with comprehensive control over managing tasks and assignments in their courses. Expertiza offers a wide range of powerful features, including peer review management, group formation, and subject addition capabilities. It is a versatile platform that can handle various types of assignments. For more detailed information about the extensive features offered by Expertiza, users can refer to the Expertiza wiki.


== About Helper ==
The '''review_mapping_helper''' module in Ruby on Rails provides a set of helper methods to facilitate the peer review process for an assignment. It includes functionality for generating review reports, managing submission statuses, calculating review scores, and visualizing review metrics, but requires refactoring to improve code maintainability and readability.
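To show the shape of the code being refactored, the sketch below is a minimal, hypothetical skeleton of a Rails view-helper module. The method name and body are illustrative assumptions for this page, not the actual Expertiza implementation.

```ruby
# Hypothetical skeleton of a Rails view-helper module; the method shown is
# illustrative only, not taken from the Expertiza source.
module ReviewMappingHelper
  # Helpers are plain instance methods; Rails mixes the module into views.
  def review_score_summary(scores)
    return '-----' if scores.empty?               # placeholder when no reviews exist
    format('%.1f', scores.sum.to_f / scores.size) # average score, one decimal place
  end
end

# Outside Rails, the module can be exercised by including it directly:
class HelperDemo
  include ReviewMappingHelper
end
```

Because helper methods are ordinary module methods, they can be unit-tested in isolation, which is what makes the refactoring in this project testable.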
 


== Functionality of review_mapping_helper ==
The review_mapping_helper.rb file is a Ruby module that contains various helper methods related to the review mapping process in a peer review system. Here's a brief overview of the main functionalities provided by this module:


1. Generating review reports with data such as reviewer IDs, reviewed object IDs, and response types. <br/>
2. Determining team colors based on review status and assignment submission status. <br/>
3. Checking submission states within each round and assigning team colors accordingly. <br/>
4. Retrieving and displaying submitted hyperlinks and files for review.<br/>
5. Calculating awarded review scores and determining minimum, maximum, and average grade values.<br/>
6. Sorting reviewers based on the average volume of reviews in each round.<br/>
7. Generating and displaying charts for review volume metrics and tagging time intervals.<br/>
8. Retrieving and setting up review and feedback responses for feedback reports.<br/>
9. Determining CSS styles for the calibration report based on the difference between student and instructor answers.<br/>
10. Defining and calculating review strategies for students and teams, including reviews per team, reviews needed, and reviews per student.<br/>
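As an illustration of the kind of decision logic behind points 2 and 3 above, the sketch below reduces the team-color determination to two booleans. The method name, parameters, and color values are assumptions for illustration; the real helper inspects response maps and round deadlines.

```ruby
# Hypothetical, simplified sketch of a team-color decision: the real
# get_team_color works from response maps and submission state, which are
# collapsed here into two flags. Colors are illustrative only.
def team_color_for(submitted:, response_exists:)
  return 'red' unless submitted        # nothing was submitted for review
  response_exists ? 'blue' : 'brown'   # reviewed vs. still awaiting a review
end
```

Collapsing the inputs to named keyword arguments like this is also the direction the refactoring takes: each branch condition becomes a single readable expression.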


== Problem Statement ==
Because of its length, complexity, and lack of comments, review_mapping_helper is challenging for developers to understand and use effectively. To remedy this, the module should go through a thorough refactoring that divides complex methods into smaller, easier-to-manage parts, each responsible for a particular task or subtask within the overall functionality. To improve maintainability and lower the chance of errors, the effort should also target code duplication, consolidating redundant code segments into reusable functions or utility methods. By methodically restructuring the codebase and improving its documentation, developers can better grasp its inner workings, making maintenance, debugging, and future additions easier.


== Tasks ==
- Refactor the file to reduce the overall lines of code to be within the allowed limit of 250 lines.<br/>
- Refactor the <code>display_volume_metric_chart</code> method to reduce its lines of code to be within the allowed limit of 25 lines.<br/>
- Refactor the <code>display_tagging_interval_chart</code> method to reduce its lines of code to be within the allowed limit of 25 lines.<br/>
- Refactor the <code>check_submission_state</code> method to reduce its Cognitive Complexity to be within the allowed limit of 5, and reduce its number of arguments from 5 to be within the allowed limit of 4.<br/>
- Refactor the <code>sort_reviewer_by_review_volume_desc</code> method to reduce its Cognitive Complexity to be within the allowed limit of 5.<br/>
- Refactor the <code>review_metrics</code> method to reduce its Cognitive Complexity to be within the allowed limit of 5.<br/>
- Refactor the <code>get_team_color</code> method to reduce its Cognitive Complexity to be within the allowed limit of 5.<br/>
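The Cognitive Complexity reductions above all follow the same recipe: replace nested conditionals with guard clauses and pull each decision into a small, intention-named method. The sketch below illustrates that style on a hypothetical submission-window check; the names, signatures, and numeric timestamps are assumptions, not the Expertiza source.

```ruby
# Illustrative sketch (not the actual Expertiza code) of the refactoring
# style applied in this project: nesting is flattened and the inner
# condition is extracted into a named predicate. Timestamps are plain
# numbers for simplicity; round 1's window starts at assignment creation.
def submitted_within_round?(round, deadline, prior_deadline, submission_times)
  window_start = round > 1 ? prior_deadline : 0
  submission_times.any? { |t| within_window?(t, window_start, deadline) }
end

def within_window?(time, window_start, deadline)
  time > window_start && time < deadline
end
```

Each extracted predicate contributes no nesting of its own, so tools like Code Climate score the decomposed version well below the original.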


=== Phase 1 ===
For Phase 1 of the project we focused on the issues mentioned below: <br/>
- Refactored the <code>display_volume_metric_chart</code> method.<br/>
- Refactored the <code>review_metrics</code> method.<br/>
- Commented the code.<br/>
- Fixed Code Climate issues.<br/>


=== Phase 2 ===
For Phase 2 of the project we plan to work on the issues mentioned below: <br/>
- Refactor the <code>display_tagging_interval_chart</code> method.<br/>
- Refactor the <code>check_submission_state</code> method.<br/>
- Refactor the <code>sort_reviewer_by_review_volume_desc</code> method.<br/>
- Refactor the <code>get_team_color</code> method.<br/>
- Reduce the number of arguments for the <code>check_submission_state</code> method.<br/>
- Increase the test coverage.<br/>
- Increase code readability.<br/>


== Implementation ==


== Test Plan ==
We intend to expand the test coverage in Project 4 by introducing more tests.

In our Test-Driven Development (TDD) efforts, our team will build on the test skeletons written by Vyshnavi Adusumelli and Mustafa Olmez. These skeletons serve as blueprints that outline the essential tests required for our unit. By incorporating them into our workflow, we can test the helper module thoroughly, ensuring that its behavior is carefully reviewed and verified. Adopting these skeletons provides a solid foundation for our unit tests and improves the overall quality and reliability of the codebase.

'''Test Plan'''


Our Test Plan includes tests for the <code>review_mapping_helper.rb</code> file for the following functions:<br/>
1. Test <code>create_report_table_header</code> method:<br/>
  - Test case: when headers are provided<br/>
  - Test case: when no headers are provided<br/>

2. Test <code>review_report_data</code> method:<br/>
  - Test case: when valid parameters are provided<br/>
    - Test scenario 1: when there are response maps for the given reviewed object, reviewer, and type<br/>
    - Test scenario 2: when there are no response maps for the given reviewed object, reviewer, and type<br/>
  - Test case: when invalid parameters are provided<br/>
    - Test scenario 1: when the reviewed object does not exist<br/>
    - Test scenario 2: when the reviewer does not exist<br/>
    - Test scenario 3: when the type is invalid<br/>

3. Test <code>calculate_response_counts</code> method:<br/>
  - Test case: when given an empty response_maps array<br/>
  - Test case: when given response_maps with no responses for any round<br/>
  - Test case: when given response_maps with responses for all rounds<br/>
  - Test case: when given response_maps with responses for some rounds<br/>

4. Test <code>get_team_color</code> method:<br/>
  - Test case: when a response exists for the response map<br/>
  - Test case: when no response exists for the response map<br/>
  - Test case: calls obtain_team_color method with the correct arguments<br/>

5. Test <code>obtain_team_color</code> method:<br/>
  - Test case: when there is only one round of review<br/>
  - Test case: when there are multiple rounds of review<br/>
  - Test case: when there are no rounds of review<br/>


6. Test <code>check_submission_state</code> method:<br/>
  - Test case: when the submission is within the round<br/>
  - Test case: when the submission is not within the round<br/>
    - Test case: when the link is not provided or does not start with 'https://wiki'<br/>
    - Test case: when the link is provided and starts with 'https://wiki'<br/>
      - Test case: when the link has been updated since the last round<br/>
      - Test case: when the link has not been updated since the last round<br/>

7. Test <code>response_for_each_round?</code> method:<br/>
  - Test case: when all rounds have a response<br/>
  - Test case: when not all rounds have a response<br/>
  - Test case: when no rounds have a response<br/>

8. Test <code>submitted_within_round?</code> method:<br/>
  - Test case: when the round is greater than 1<br/>
    - Test case: returns true if a submission exists within the previous round's deadline and the current round's deadline<br/>
    - Test case: returns false if no submission exists within the previous round's deadline and the current round's deadline<br/>
  - Test case: when the round is 1<br/>
    - Test case: returns true if a submission exists within the assignment creation date and the current round's deadline<br/>
    - Test case: returns false if no submission exists within the assignment creation date and the current round's deadline<br/>

9. Test <code>submitted_hyperlink</code> method:<br/>
  - Test case: when there is a submission due date and a submission record<br/>
    - Test scenario 1: returns the content of the last submitted hyperlink within the assignment creation and due dates<br/>
    - Test scenario 2: returns nil if there is no submission due date or submission record for the reviewee team<br/>
    - Test scenario 3: returns nil if there is a submission due date but no submission record for the reviewee team<br/>
    - Test scenario 4: returns nil if there is a submission record but no submission due date for the reviewee team<br/>

10. Test <code>last_modified_date_for_link</code> method:<br/>
  - Test case: when given a valid link<br/>
  - Test case: when given an invalid link<br/>

11. Test <code>link_updated_since_last?</code> method:<br/>
  - Test case: when the link was updated before the submission due date for the current round and after the submission due date for the previous round<br/>
  - Test case: when the link was updated after the submission due date for the current round<br/>
  - Test case: when the link was updated before the submission due date for the previous round<br/>


12. Test <code>reviewed_link_name_for_team</code> method:<br/>
  - Test case: when max_team_size is 1<br/>
  - Test case: when max_team_size is not 1<br/>

13. Test <code>awarded_review_score</code> method:<br/>
  - Test case: when reviewer_id and team_id are valid<br/>
    - Test scenario 1: sets the correct awarded review score for each round<br/>
    - Test scenario 2: sets the correct awarded review score for each round<br/>
    - Test scenario 3: sets the correct awarded review score for each round<br/>
    - Test scenario 4: sets the correct awarded review score for each round<br/>
  - Test case: when team_id is nil or -1.0<br/>
    - Test scenario 1: does not update any instance variables<br/>
    - Test scenario 2: does not update any instance variables<br/>

14. Test <code>review_metrics</code> method:<br/>
  - Test case: when given a round and team_id<br/>
    - Test case: sets max, min, and avg to '-----' as default values<br/>
    - Test case: sets max, min, and avg to the corresponding values from avg_and_ranges if present<br/>

15. Test <code>sort_reviewers_by_average_volume</code> method:<br/>
  - Test case: sorts the reviewers by the average volume of reviews in each round in descending order<br/>

16. Test <code>sort_reviewer_by_review_volume_desc</code> method:<br/>
  - Test case: when there are reviewers and review volumes available<br/>
    - Test case: calculates the volume of review comments for each reviewer<br/>
    - Test case: sets the overall average volume of review comments for each reviewer<br/>
    - Test case: sets the average volume of review comments per round for each reviewer<br/>
    - Test case: sorts the reviewers by their review volume in descending order<br/>
    - Test case: gets the number of review rounds for the assignment<br/>
    - Test case: sets the average volume of review comments per round for all reviewers<br/>


17. Test <code>initialize_chart_elements</code> method:<br/>
  - Test case: when reviewer has data for all rounds<br/>
  - Test case: when reviewer has no data for any round<br/>
  - Test case: when reviewer has data for some rounds<br/>

18. Test <code>display_volume_metric_chart</code> method:<br/>
  - Test case: when given a reviewer<br/>
    - Test case: initializes chart elements<br/>
    - Test case: creates the data for the volume metric chart<br/>
    - Test case: creates the options for the volume metric chart<br/>
    - Test case: displays the volume metric chart<br/>

19. Test <code>display_tagging_interval_chart</code> method:<br/>
  - Test case: when intervals are all below the threshold<br/>
  - Test case: when intervals contain some values above the threshold<br/>
    - Test case: filters out the values above the threshold<br/>
    - Test case: calculates the mean time spent for the intervals<br/>
    - Test case: displays a chart with the filtered intervals and mean time spent<br/>
  - Test case: when intervals are empty<br/>
    - Test case: does not calculate the mean time spent<br/>
    - Test case: displays a chart with no intervals<br/>


20. Test <code>calculate_key_chart_information</code> method:<br/>
  - Test case: when given intervals are all above the threshold<br/>
  - Test case: when given intervals contain values below the threshold<br/>

21. Test <code>calculate_mean</code> method:<br/>
  - Test case: when given an array of intervals and interval precision<br/>
    - Test scenario 1: returns the mean of the intervals rounded to the specified precision<br/>
    - Test scenario 2: returns the mean of the intervals rounded to the specified precision<br/>
    - Test scenario 3: returns the mean of the intervals rounded to the specified precision<br/>

22. Test <code>calculate_variance</code> method:<br/>
  - Test case: when given an array of intervals and interval precision<br/>
    - Test scenario 1: returns the variance of the intervals rounded to the specified precision<br/>
    - Test scenario 2: returns the variance of the intervals rounded to the specified precision<br/>
    - Test scenario 3: returns the variance of the intervals rounded to the specified precision<br/>
    - Test scenario 4: returns the variance of the intervals rounded to the specified precision<br/>

23. Test <code>calculate_standard_deviation</code> method:<br/>
  - Test case: when given an array of intervals and interval precision<br/>
    - Test scenario 1: returns the standard deviation rounded to the specified precision<br/>
    - Test scenario 2: returns the standard deviation rounded to the specified precision<br/>
    - Test scenario 3: returns the standard deviation rounded to the specified precision<br/>


24. Test <code>list_review_submissions</code> method:<br/>
  - Test case: when review submissions are available<br/>
  - Test case: when review submissions are not available<br/>

25. Test <code>review_submissions_available?</code> method:<br/>
  - Test case: when both team and participant are present<br/>
  - Test case: when team is nil and participant is present<br/>
  - Test case: when team is present and participant is nil<br/>
  - Test case: when both team and participant are nil<br/>

26. Test <code>list_hyperlink_submission</code> method:<br/>
  - Test case: when the response map ID and question ID are valid<br/>
    - Test case: returns the HTML code for a hyperlink if the answer has a comment starting with 'http'<br/>
    - Test case: returns an empty string if the answer does not have a comment starting with 'http'<br/>
  - Test case: when the response map ID or question ID is invalid<br/>
    - Test case: returns an empty string if the response map ID is invalid<br/>
    - Test case: returns an empty string if the question ID is invalid<br/>

27. Test <code>calculate_review_and_feedback_responses</code> method:<br/>
  - Test case: when author is a member of a team<br/>
  - Test case: when author is not a member of a team<br/>

28. Test <code>feedback_response_map_record</code> method:<br/>
  - Test case: when author is provided<br/>
    - Test case: retrieves response records for each round<br/>
    - Test case: calculates feedback response map records for each round<br/>

29. Test <code>get_certain_review_and_feedback_response_map</code> method:<br/>
  - Test case: when author has feedback response maps<br/>
  - Test case: when author does not have feedback response maps<br/>
  - Test case: when review response maps exist for the given reviewed object and reviewee<br/>
  - Test case: when review response maps do not exist for the given reviewed object and reviewee<br/>
  - Test case: when review responses exist for the given review response map ids<br/>
  - Test case: when review responses do not exist for the given review response map ids<br/>
  - Test case: when review responses exist<br/>
  - Test case: when review responses do not exist<br/>

30. Test <code>css_class_for_calibration_report</code> method:<br/>
  - Test case: when the difference is 0<br/>
  - Test case: when the difference is 1<br/>
  - Test case: when the difference is 2<br/>
  - Test case: when the difference is 3<br/>
  - Test case: when the difference is greater than 3<br/>

31. Test <code>initialize</code> method:<br/>
  - Test case: when initializing a new instance of the class<br/>
    - Test case: sets the participants attribute to the provided value<br/>
    - Test case: sets the teams attribute to the provided value<br/>
    - Test case: sets the review_num attribute to the provided value<br/>
 
32. Test <code>reviews_per_team</code> method:<br/>
  - Test case: when there are 10 participants, 5 teams, and each participant needs to review 2 times<br/>
  - Test case: when there are 20 participants, 4 teams, and each participant needs to review 3 times<br/>
  - Test case: when there are 8 participants, 2 teams, and each participant needs to review 4 times<br/>

33. Test <code>reviews_needed</code> method:<br/>
  - Test case: when there are no participants<br/>
  - Test case: when there is one participant and review number is 3<br/>
  - Test case: when there are three participants and review number is 2<br/>
  - Test case: when there are five participants and review number is 4<br/>

34. Test <code>reviews_per_student</code> method:<br/>
  - Test case: when there are no reviews<br/>
  - Test case: when there is only one review<br/>
  - Test case: when there are multiple reviews<br/>
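Tests 21–23 above exercise the interval statistics. As a concrete illustration of what such helpers compute, the sketch below implements mean, variance, and standard deviation over an array of intervals with a rounding precision; the signatures are assumed from the test names and are not the Expertiza source.

```ruby
# Plain-Ruby sketch of the interval statistics exercised by tests 21-23.
# Signatures (array of intervals plus rounding precision) are assumptions
# inferred from the test names, not the actual implementation.
def calculate_mean(intervals, precision)
  (intervals.sum.to_f / intervals.size).round(precision)
end

def calculate_variance(intervals, precision)
  mean = intervals.sum.to_f / intervals.size
  (intervals.sum { |i| (i - mean)**2 } / intervals.size).round(precision)
end

def calculate_standard_deviation(intervals, precision)
  # Compute the variance at extra precision so the final rounding is accurate.
  Math.sqrt(calculate_variance(intervals, precision + 2)).round(precision)
end
```

Pure functions like these are the easiest part of the helper to cover, which is why the test plan gives them several scenarios each.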


== Design Pattern ==
Additionally, we addressed the issue of excessive conditional statements by employing the "Refactoring Conditionals" design pattern. Instead of cluttering the code with numerous conditionals, we refactored by encapsulating the logic within these conditionals into distinct methods. By doing so, we streamlined the code's flow and improved its readability, making it clearer to understand the purpose and execution of each segment.
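A minimal sketch of this "decompose conditional" move, with hypothetical names (not taken from the Expertiza source): the compound condition moves into an intention-revealing predicate, and the caller reads as a single sentence.

```ruby
# Illustrative "decompose conditional" refactoring. Before, callers had an
# inline compound test:
#   if !response.nil? && response[:is_submitted] then 'blue' else 'brown' end
# After, the condition lives in a named predicate:
def review_color(response)
  submitted_review?(response) ? 'blue' : 'brown'
end

def submitted_review?(response)
  !response.nil? && response[:is_submitted]
end
```

The predicate can now be reused by any helper that needs the same check, which is also how the duplication noted in the Problem Statement gets consolidated.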


== Relevant Links ==

This wiki page describes changes made under the E2405 OODD assignment for Spring 2024, CSC/ECE 517.

Expertiza Background

Expertiza is an open-source online application developed using Ruby on Rails framework. It is maintained by the staff and students at NC State University. This application provides instructors with comprehensive control over managing tasks and assignments in their courses. Expertiza offers a wide range of powerful features, including peer review management, group formation, and subject addition capabilities. It is a versatile platform that can handle various types of assignments. For more detailed information about the extensive features offered by Expertiza, users can refer to the Expertiza wiki.

About Helper

The review_mapping_helper module in Ruby on Rails provides a set of helper methods to facilitate the peer review process in an assignment. It includes functionality for generating review reports, managing submission statuses, calculating review scores, and visualizing review metrics, but requires refactoring to improve code maintainability and readability.

Functionality of review_mapping_controller

The review_mapping_helper.rb file is a Ruby module that contains various helper methods related to the review mapping process in a peer review system. Here's a brief overview of the main functionalities provided by this module:

1. Generating review reports with data such as reviewer IDs, reviewed object IDs, and response types.
2. Determining team colors based on review status and assignment submission status.
3. Checking submission states within each round and assigning team colors accordingly.
4. Retrieving and displaying submitted hyperlinks and files for review.
5. Calculating awarded review scores and determining minimum, maximum, and average grade values.
6. Sorting reviewers based on the average volume of reviews in each round.
7. Generating and displaying charts for review volume metrics and tagging time intervals.
8. Retrieving and setting up review and feedback responses for feedback reports.
9. Determining CSS styles for the calibration report based on the difference between student and instructor answers.
10. Defining and calculating review strategies for students and teams, including reviews per team, reviews needed, and reviews per student.
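As a concrete illustration of item 9, here is a minimal, framework-free sketch of how a calibration-report helper can map the difference between a student's and the instructor's answer to a CSS class. The method name comes from the list above; the returned class names are invented for this example and are not Expertiza's actual stylesheet.

```ruby
# Hypothetical sketch: classify the student/instructor answer difference.
# The CSS class names below are assumptions for illustration only.
def css_class_for_calibration_report(difference)
  case difference.abs
  when 0 then 'exact-match'  # student agreed with the instructor
  when 1 then 'close-match'
  when 2 then 'fair-match'
  when 3 then 'weak-match'
  else 'no-match'            # difference greater than 3
  end
end
```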

Problem Statement

Because of its length, complexity, and lack of comments, review_mapping_helper.rb is challenging for developers to understand and use effectively. To remedy this, the helper should undergo a thorough restructuring that divides complex methods into smaller, easier-to-manage parts: intricate logic should be broken into modular methods, each responsible for a single task that contributes to the overall functionality. The refactoring should also eliminate code duplication by consolidating redundant segments into reusable functions or utility methods, improving maintainability and lowering the chance of errors. By methodically reorganizing the codebase and improving its documentation, developers can better grasp the helper's inner workings, making maintenance, debugging, and future extensions easier.

Tasks

- Refactor the file to reduce the overall lines of code to be within the allowed limit of 250 lines.
- Refactor the `display_volume_metric_chart` method to reduce its lines of code to be within the allowed limit of 25 lines.
- Refactor the `display_tagging_interval_chart` method to reduce its lines of code to be within the allowed limit of 25 lines.
- Refactor the `check_submission_state` method to reduce its Cognitive Complexity to be within the allowed limit of 5 and reduce the number of arguments to be within the allowed limit of 4.
- Refactor the `sort_reviewer_by_review_volume_desc` method to reduce its Cognitive Complexity to be within the allowed limit of 5.
- Refactor the `review_metrics` method to reduce its Cognitive Complexity to be within the allowed limit of 5.
- Refactor the `get_team_color` method to reduce its Cognitive Complexity to be within the allowed limit of 5.
- Reduce the number of arguments for the `check_submission_state` method from 5 to 4.
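The last two tasks both target `check_submission_state`'s parameter list. One common way to drop an argument is to introduce a parameter object: bundle the round's time window into a small value object so related values travel together. The sketch below is a hedged illustration with invented names and a stand-in body, not the actual Expertiza signature.

```ruby
# Hypothetical parameter object: the round's submission window.
RoundWindow = Struct.new(:opens_at, :closes_at) do
  def covers?(time)
    time > opens_at && time <= closes_at
  end
end

# Stand-in for the real helper: two related timestamps arrive as one object,
# cutting the argument count while keeping the on-time/late decision intact.
def check_submission_state(submission_time, window)
  window.covers?(submission_time) ? 'green' : 'red'
end
```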

Phase 1

For Phase 1 of the project, we focused on the following issues:
- Refactoring the `display_volume_metric_chart` method.
- Refactoring the `review_metrics` method.
- Commenting the code.
- Fixing Code Climate issues.

Phase 2

For Phase 2 of the project, we plan to work on the following issues:
- Refactor the `display_tagging_interval_chart` method.
- Refactor the `check_submission_state` method.
- Refactor the `sort_reviewer_by_review_volume_desc` method.
- Refactor the `get_team_color` method.
- Reduce the number of arguments for the `check_submission_state` method.
- Increase the test coverage.
- Increase code readability.

Implementation

Phase 1

add_reviewer

The changes made to the `add_reviewer` method can be seen in commit https://github.com/NidhayPancholi/expertiza/commit/b325810d67da2a03d3ccb15458926d0049fdb9eb and are described below:

- Refactored the `add_reviewer` method to focus on single tasks per method, enhancing code readability and maintainability.
- Extracted the functionality to find a user's ID by name into a separate method named `find_user_id_by_name`.
- Separated the logic to check if a user is trying to review their own artifact into its own method named `user_trying_to_review_own_artifact?`.
- Abstracted the process of assigning a reviewer to the assignment into a method named `assign_reviewer`.
- Created a method named `registration_url` to generate the registration URL for the assignment based on provided parameters.
- Divided the code to create a review response map into a separate method named `create_review_response_map`.
- Extracted the logic to redirect to the list mappings page after adding the reviewer into its own method named `redirect_to_list_mappings`.
- Added descriptive comments to each method to explain its purpose and functionality clearly.
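The decomposition above can be sketched in miniature. Everything here is a stand-in (a plain hash lookup and an array instead of Expertiza's models and params); only the shape of the extracted methods mirrors the refactor, and all names besides those listed above are illustrative.

```ruby
# Stand-in: a name => id lookup instead of a User model query.
def find_user_id_by_name(users, name)
  users.fetch(name)
end

# Self-review guard extracted into its own predicate.
def user_trying_to_review_own_artifact?(user_id, team_member_ids)
  team_member_ids.include?(user_id)
end

# The slimmed-down entry point reads as a sequence of named steps.
def add_reviewer(users, name, team_member_ids)
  user_id = find_user_id_by_name(users, name)
  return :self_review_error if user_trying_to_review_own_artifact?(user_id, team_member_ids)
  { reviewer_id: user_id } # stands in for assign_reviewer + redirect_to_list_mappings
end
```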

assign_reviewer_dynamically

The changes made to the `assign_reviewer_dynamically` method can be seen in commit https://github.com/NidhayPancholi/expertiza/commit/30a3625d4188e56a58e4b6472c52b60bbfb83df5 and are described below:

- Restructured the `assign_reviewer_dynamically` method to perform single tasks per method, improving code organization and readability.
- Extracted the functionality to find the assignment participant into a separate method called `find_participant_for_assignment`.
- Abstracted the logic to handle errors when no topic is selected into a method named `topic_selection_error?`.
- Created a method named `dynamically_assign_reviewer` to handle the process of dynamically assigning a reviewer based on the assignment type.
- Separated the logic to assign a reviewer when the assignment has topics into a method named `assign_reviewer_with_topic`.
- Developed a method called `select_topic_to_review` to handle the selection of a topic for review.
- Extracted the logic to assign a reviewer when the assignment has no topics into a method named `assign_reviewer_without_topic`.
- Created a method named `select_assignment_team_to_review` to handle the selection of an assignment team for review.
- Abstracted the process to redirect to the student review list page into a method called `redirect_to_student_review_list`.
- Added clear comments to each method to explain its purpose and functionality effectively.
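At the heart of the refactor described above is a single topic/no-topic branch that delegates to two named helpers. The sketch below mirrors that shape with invented inputs and return values; the real methods operate on Expertiza assignments and response maps.

```ruby
# Stand-in for select_topic_to_review + topic-based mapping.
def assign_reviewer_with_topic(topics)
  { topic: topics.sample }
end

# Stand-in for select_assignment_team_to_review + team-based mapping.
def assign_reviewer_without_topic(teams)
  { team: teams.sample }
end

# The dispatcher stays tiny: one readable branch, two named destinations.
def dynamically_assign_reviewer(topics, teams)
  if topics.empty?
    assign_reviewer_without_topic(teams)
  else
    assign_reviewer_with_topic(topics)
  end
end
```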

Changes to the spec file

The changes made to the test files are described below and can be found in the commit - https://github.com/expertiza/expertiza/commit/7c08070f0c2c000e64e55561b882e44fc81bc98f:

- Updated the `ReviewMappingController` spec file.
- Added a test case in the `ReviewMappingController` spec file for the `add_reviewer` method to ensure correct behavior when a team user exists and `get_reviewer` method returns a reviewer.
- Adjusted the expectation in the `assign_reviewer_dynamically` test case to match the corrected error message in the controller. Specifically, removed the extra space from the expected error message to align with the actual error message generated by the controller.
- Ensured that all test cases are descriptive and cover the relevant scenarios for each method.
- Verified that the test cases accurately reflect the behavior of the controller methods after the code changes.

Test Plan

We intend to expand Project 4's test coverage by introducing more tests.

Throughout our Test-Driven Development (TDD) efforts, our group systematically used the test skeletons written by Vyshnavi Adusumelli and Mustafa Olmez. These skeletons serve as blueprints that outline the essential tests required for our module. By incorporating them into our workflow, we can examine the helper's behavior in detail and ensure that its actions are carefully reviewed and verified. Building on these skeletons improves the overall quality and reliability of the codebase while providing a solid foundation for our unit tests.


1. Test `create_report_table_header` method:

- Test case: when headers are provided
- Test case: when no headers are provided

2. Test `review_report_data` method:

- Test case: when valid parameters are provided
  - Test scenario 1: when there are response maps for the given reviewed object, reviewer, and type
  - Test scenario 2: when there are no response maps for the given reviewed object, reviewer, and type
- Test case: when invalid parameters are provided
  - Test scenario 1: when the reviewed object does not exist
  - Test scenario 2: when the reviewer does not exist
  - Test scenario 3: when the type is invalid

3. Test `calculate_response_counts` method:

- Test case: when given an empty response_maps array
- Test case: when given response_maps with no responses for any round
- Test case: when given response_maps with responses for all rounds
- Test case: when given response_maps with responses for some rounds

4. Test `get_team_color` method:

- Test case: when a response exists for the response map
- Test case: when no response exists for the response map
- Test case: calls obtain_team_color method with the correct arguments

5. Test `obtain_team_color` method:

- Test case: when there is only one round of review
- Test case: when there are multiple rounds of review
- Test case: when there are no rounds of review

6. Test `check_submission_state` method:

- Test case: when the submission is within the round
- Test case: when the submission is not within the round
- Test case: when the link is not provided or does not start with 'https://wiki'
- Test case: when the link is provided and starts with 'https://wiki'
- Test case: when the link has been updated since the last round
- Test case: when the link has not been updated since the last round

7. Test `response_for_each_round?` method:

- Test case: when all rounds have a response
- Test case: when not all rounds have a response
- Test case: when no rounds have a response

8. Test `submitted_within_round?` method:

- Test case: when the round is greater than 1
  - Test case: returns true if a submission exists within the previous round's deadline and the current round's deadline
  - Test case: returns false if no submission exists within the previous round's deadline and the current round's deadline
- Test case: when the round is 1
  - Test case: returns true if a submission exists within the assignment creation date and the current round's deadline
  - Test case: returns false if no submission exists within the assignment creation date and the current round's deadline

9. Test `submitted_hyperlink` method:

- Test case: when there is a submission due date and a submission record
  - Test scenario 1: returns the content of the last submitted hyperlink within the assignment creation and due dates
  - Test scenario 2: returns nil if there is no submission due date or submission record for the reviewee team
  - Test scenario 3: returns nil if there is a submission due date but no submission record for the reviewee team
  - Test scenario 4: returns nil if there is a submission record but no submission due date for the reviewee team

10. Test `last_modified_date_for_link` method:

- Test case: when given a valid link
- Test case: when given an invalid link

11. Test `link_updated_since_last?` method:

- Test case: when the link was updated before the submission due date for the current round and after the submission due date for the previous round
- Test case: when the link was updated after the submission due date for the current round
- Test case: when the link was updated before the submission due date for the previous round

12. Test `reviewed_link_name_for_team` method:

- Test case: when max_team_size is 1
- Test case: when max_team_size is not 1

13. Test `awarded_review_score` method:

- Test case: when reviewer_id and team_id are valid
  - Test scenario 1: sets the correct awarded review score for each round
  - Test scenario 2: sets the correct awarded review score for each round
  - Test scenario 3: sets the correct awarded review score for each round
  - Test scenario 4: sets the correct awarded review score for each round
- Test case: when team_id is nil or -1.0
  - Test scenario 1: does not update any instance variables
  - Test scenario 2: does not update any instance variables

14. Test `review_metrics` method:

- Test case: when given a round and team_id
  - Test case: sets max, min, and avg to '-----' as default values
  - Test case: sets max, min, and avg to the corresponding values from avg_and_ranges if present

15. Test `sort_reviewers_by_average_volume` method:

- Test case: sorts the reviewers by the average volume of reviews in each round in descending order

16. Test `sort_reviewer_by_review_volume_desc` method:

- Test case: when there are reviewers and review volumes available
  - Test case: calculates the volume of review comments for each reviewer
  - Test case: sets the overall average volume of review comments for each reviewer
  - Test case: sets the average volume of review comments per round for each reviewer
  - Test case: sorts the reviewers by their review volume in descending order
  - Test case: gets the number of review rounds for the assignment
  - Test case: sets the average volume of review comments per round for all reviewers

17. Test `initialize_chart_elements` method:

- Test case: when reviewer has data for all rounds
- Test case: when reviewer has no data for any round
- Test case: when reviewer has data for some rounds

18. Test `display_volume_metric_chart` method:

- Test case: when given a reviewer
  - Test case: initializes chart elements
  - Test case: creates the data for the volume metric chart
  - Test case: creates the options for the volume metric chart
  - Test case: displays the volume metric chart

19. Test `display_tagging_interval_chart` method:

- Test case: when intervals are all below the threshold
- Test case: when intervals contain some values above the threshold
  - Test case: filters out the values above the threshold
  - Test case: calculates the mean time spent for the intervals
  - Test case: displays a chart with the filtered intervals and mean time spent
- Test case: when intervals are empty
  - Test case: does not calculate the mean time spent
  - Test case: displays a chart with no intervals

20. Test `calculate_key_chart_information` method:

- Test case: when given intervals are all above the threshold
- Test case: when given intervals contain values below the threshold

21. Test `calculate_mean` method:

- Test case: when given an array of intervals and interval precision
  - Test scenario 1: returns the mean of the intervals rounded to the specified precision
  - Test scenario 2: returns the mean of the intervals rounded to the specified precision
  - Test scenario 3: returns the mean of the intervals rounded to the specified precision

22. Test `calculate_variance` method:

- Test case: when given an array of intervals and interval precision
  - Test scenario 1: returns the variance of the intervals rounded to the specified precision
  - Test scenario 2: returns the variance of the intervals rounded to the specified precision
  - Test scenario 3: returns the variance of the intervals rounded to the specified precision
  - Test scenario 4: returns the variance of the intervals rounded to the specified precision

23. Test `calculate_standard_deviation` method:

- Test case: when given an array of intervals and interval precision
  - Test scenario 1: returns the standard deviation rounded to the specified precision
  - Test scenario 2: returns the standard deviation rounded to the specified precision
  - Test scenario 3: returns the standard deviation rounded to the specified precision

24. Test `list_review_submissions` method:

- Test case: when review submissions are available
- Test case: when review submissions are not available

25. Test `review_submissions_available?` method:

- Test case: when both team and participant are present
- Test case: when team is nil and participant is present
- Test case: when team is present and participant is nil
- Test case: when both team and participant are nil

26. Test `list_hyperlink_submission` method:

- Test case: when the response map ID and question ID are valid
  - Test case: returns the HTML code for a hyperlink if the answer has a comment starting with 'http'
  - Test case: returns an empty string if the answer does not have a comment starting with 'http'
- Test case: when the response map ID or question ID is invalid
  - Test case: returns an empty string if the response map ID is invalid
  - Test case: returns an empty string if the question ID is invalid

27. Test `calculate_review_and_feedback_responses` method:

- Test case: when author is a member of a team
- Test case: when author is not a member of a team

28. Test `feedback_response_map_record` method:

- Test case: when author is provided
  - Test case: retrieves response records for each round
  - Test case: calculates feedback response map records for each round

29. Test `get_certain_review_and_feedback_response_map` method:

- Test case: when author has feedback response maps
- Test case: when author does not have feedback response maps
- Test case: when review response maps exist for the given reviewed object and reviewee
- Test case: when review response maps do not exist for the given reviewed object and reviewee
- Test case: when review responses exist for the given review response map ids
- Test case: when review responses do not exist for the given review response map ids
- Test case: when review responses exist
- Test case: when review responses do not exist

30. Test `css_class_for_calibration_report` method:

- Test case: when the difference is 0
- Test case: when the difference is 1
- Test case: when the difference is 2
- Test case: when the difference is 3
- Test case: when the difference is greater than 3

31. Test `initialize` method:

- Test case: when initializing a new instance of the class
  - Test case: sets the participants attribute to the provided value
  - Test case: sets the teams attribute to the provided value
  - Test case: sets the review_num attribute to the provided value

32. Test `reviews_per_team` method:

- Test case: when there are 10 participants, 5 teams, and each participant needs to review 2 times
- Test case: when there are 20 participants, 4 teams, and each participant needs to review 3 times
- Test case: when there are 8 participants, 2 teams, and each participant needs to review 4 times

33. Test `reviews_needed` method:

- Test case: when there are no participants
- Test case: when there is one participant and review number is 3
- Test case: when there are three participants and review number is 2
- Test case: when there are five participants and review number is 4

34. Test `reviews_per_student` method:

- Test case: when there are no reviews
- Test case: when there is only one review
- Test case: when there are multiple reviews
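The strategy tests in items 31 to 33 boil down to simple arithmetic. Below is a framework-free sketch of a strategy class consistent with those cases; the formulas are assumptions inferred from the listed scenarios (e.g. 10 participants × 2 reviews ÷ 5 teams = 4 reviews per team), and the real classes in Expertiza operate on model objects.

```ruby
# Hypothetical stand-in for the review strategy under test.
class ReviewStrategy
  def initialize(participants, teams, review_num)
    @participants = participants
    @teams = teams
    @review_num = review_num
  end

  # Total reviews spread across teams, e.g. (10 * 2) / 5 => 4.
  def reviews_per_team
    (@participants.size * @review_num) / @teams.size
  end

  # Total number of reviews the assignment requires.
  def reviews_needed
    @participants.size * @review_num
  end
end
```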

Design Pattern

During our code refactoring process, we leveraged various design patterns to enhance readability and maintainability. One commonly applied pattern was "Extract Method," where we identified lengthy and intricate methods and extracted segments of functionality into separate methods. This restructuring made the code more comprehensible and easier to grasp by isolating specific tasks within dedicated methods.
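A before/after miniature of Extract Method: a chart helper that once built its labels and data inline now delegates each step to a named method. The names and return values below are illustrative stand-ins, not Expertiza's actual chart code.

```ruby
# Extracted step: build human-readable round labels.
def volume_chart_labels(rounds)
  rounds.map { |r| "Round #{r}" }
end

# Extracted step: package the series for the charting layer.
def volume_chart_data(volumes)
  { type: 'bar', values: volumes }
end

# The top-level method now reads as a short sequence of named steps.
def display_volume_metric_chart(rounds, volumes)
  { labels: volume_chart_labels(rounds), data: volume_chart_data(volumes) }
end
```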

Additionally, we addressed the issue of excessive conditional statements by employing the "Refactoring Conditionals" design pattern. Instead of cluttering the code with numerous conditionals, we refactored by encapsulating the logic within these conditionals into distinct methods. By doing so, we streamlined the code's flow and improved its readability, making it clearer to understand the purpose and execution of each segment.
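The conditional-encapsulation idea can be shown in a few lines: a compound condition gets a descriptive name, and the caller's branch becomes self-explanatory. `Response` here is a stand-in Struct, not the ActiveRecord model, and the color values are illustrative.

```ruby
# Stand-in for the real Response model: just a submission timestamp.
Response = Struct.new(:submitted_at)

# The compound condition now lives behind a named predicate.
def reviewed_on_time?(response, deadline)
  !response.nil? && response.submitted_at <= deadline
end

# The caller's branch reads as a sentence instead of nested checks.
def team_color(response, deadline)
  reviewed_on_time?(response, deadline) ? 'blue' : 'red'
end
```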

Relevant Links

Team

Mentor

  • Ananya Mantravadi (amantra)

Team Members

  • Sahil Changlani (schangl)
  • Rushil Vegada (rvegada)
  • Romil Shah (rmshah3)