CSC/ECE 517 Fall 2020 - E2087. Conflict notification
Introduction
Expertiza includes functionality for notifying instructors when a conflict occurs in review scores for a submission. Currently, when two reviews for the same submission differ significantly, an email is sent to the instructor of the course, but it contains no link to the review that caused the conflict. This improvement adds links to the conflicting reviews and improves the email's formatting to help the instructor understand the conflict.
Issues with previous submission
- The functionality is good, but the UI of the conflict report needs work.
- The UI needs to be cleaned up: when a chart has only one or two bars, it can be compressed, and the reviewer whose scores deviate beyond the threshold can be displayed with a differently colored bar.
- Tests need to be refactored.
- They included their debug code in their pull request.
- They have included a lot of logic in the views.
- Shallow tests: one or more of their test expectations only focus on the return value not being `nil`, `empty` or not equal to `0` without testing the `real` value.
Proposed Solution
The previous team implemented new logic to determine if a review is in conflict with another and created a new page to link the instructors to in the email. Because their functionality was good, we will mainly be focused on improving the UI of the conflict reports. Furthermore, there is a lot of logic that lives in the views that can be refactored and moved to controllers. Additionally, there is debug code that can be removed and tests that can be fleshed out.
UI Improvements
File: app/views/reports/_review_conflict_metric.html.erb
The UI can be improved for the conflict metric to give more information on the conflicted reviews.
Logic in Views
File: app/views/reports/_review_conflict_metric.html.erb
The logic to determine if answers are within or outside of the tolerance limits can be moved to functions outside of the .erb file.
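As a sketch of that refactoring, the tolerance-limit check could live in a small helper module so the view only asks a yes/no question. The names below are illustrative, not the final implementation:

```ruby
# Illustrative sketch of extracting the tolerance logic from the .erb file.
# Module and method names are hypothetical.
module ConflictToleranceHelper
  # Tolerance band: mean of the scores plus/minus two standard deviations.
  def tolerance_limits(scores)
    avg = scores.sum.to_f / scores.size
    std = Math.sqrt(scores.sum { |s| (s - avg)**2 } / scores.size)
    [(avg - 2 * std).round(2), (avg + 2 * std).round(2)]
  end

  # True when a single score falls outside the tolerance band.
  def out_of_limits?(score, scores)
    lower, upper = tolerance_limits(scores)
    score < lower || score > upper
  end
end
```

The view then only calls `out_of_limits?`, keeping computation out of the .erb template.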
UML Activity Diagram
Test Plan
Manual Test Plan
To verify the conflict notification is working correctly, a mock assignment will be created and two reviews will be entered that should trigger a conflict. Successful retrieval of the email and verification of the links included in the email will provide sufficient verification that the changes were successful.
File Changes
File: spec/models/answer_spec.rb
Currently, tests only check if the values are not empty. More tests can be written to make sure the actual value is being returned correctly.
File: spec/controllers/reports_controller_spec.rb
More tests can be written to ensure the correct names and values are returned.
Implemented Solution
Refactoring
Moved the following methods from review_mapping_helper.rb to report_formatter_helper.rb:
- average_of_round
- std_of_round
- review_score_helper_for_team
- review_score_for_team
New Implementation
The controller now passes request.base_url into the notification call, since the request object is not accessible from the model:

 @response.notify_instructor_on_difference(request.base_url) if (@map.is_a? ReviewResponseMap) && @response.is_submitted && @response.significant_difference?

The full guard in the controller notifies the instructor only when a review response has just been submitted and differs significantly:

 if (@map.is_a? ReviewResponseMap) && (was_submitted == false && @response.is_submitted) && @response.significant_difference?
   @response.notify_instructor_on_difference(request.base_url)
   @response.email
 end
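With base_url available, the model can construct an absolute link to the conflict report for the email body. The snippet below only illustrates the idea; the path and parameter names are hypothetical, not the actual Expertiza route:

```ruby
# Hypothetical sketch: how a model method might build an absolute link once
# the controller hands it request.base_url. Models have no request context,
# so the host must be passed in. Path and parameter are illustrative only.
def conflict_report_link(base_url, assignment_id)
  "#{base_url}/response/conflict_report?assignment_id=#{assignment_id}"
end
```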
The following method generates statistics (average, standard deviation, tolerance limits) for each round; it was created to move logic previously contained in a view into a helper class.
 def review_statistics(id, team_name)
   res = []
   question_answers = review_score_for_team(id, team_name)
   question_answers.each do |question_answer|
     round = {}
     round[:question_answer] = question_answer
     round[:average] = average_of_round(question_answer)
     round[:std] = std_of_round(round[:average], question_answer)
     round[:upper_tolerance_limit] = (round[:average] + (2 * round[:std])).round(2)
     round[:lower_tolerance_limit] = (round[:average] - (2 * round[:std])).round(2)
     res.push(round)
   end
   res
 end
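For a single round, the statistics above reduce to the following standalone calculation. This is only a sketch: question_answer here is a plain array of scores, whereas the real helper pulls Answer records via review_score_for_team:

```ruby
# Standalone sketch of the per-round statistics computed by review_statistics.
# In the helper, the scores come from the database, not a literal array.
question_answer = [3, 4, 5, 4]

average = question_answer.sum.to_f / question_answer.size
std = Math.sqrt(question_answer.sum { |s| (s - average)**2 } / question_answer.size)

round = {
  average: average.round(2),                           # 4.0
  std: std.round(2),                                   # 0.71
  upper_tolerance_limit: (average + 2 * std).round(2), # 5.41
  lower_tolerance_limit: (average - 2 * std).round(2)  # 2.59
}
```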
This logic colors a review score's bar red when the score falls outside the tolerance limits; all in-range scores get a green bar.
 if answer > upper_tolerance_limit or answer < lower_tolerance_limit
   colors << "rgba(255,99,132,0.8)" # red
 else
   colors << "rgba(77, 175, 124, 1)" # green
 end
The colors array is then passed as the chart's bar background:
backgroundColor: colors
The chart height is computed from the number of answers so that all data is displayed clearly:
height: 4*question_answer.size() + 14
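Because the height grows linearly with the number of answers, charts with only one or two bars stay compact instead of stretching to a fixed size. As a small sketch of the formula:

```ruby
# Chart height scales linearly with the number of answers, so a chart with
# one bar stays compact while a chart with many bars gets room to breathe.
def chart_height(answer_count)
  4 * answer_count + 14
end

chart_height(1) # => 18
chart_height(5) # => 34
```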
UI Improvements
Based on the previous group's changes, a notification email is sent to the instructor whenever a new review differs significantly from other reviews of the same submission team (the threshold is specified in the "Notification limit" field on the Rubric tab of assignment creation). The image below shows the previous team's email UI.
The email notification's formatting was improved to make it more readable and to clarify which links relate to which actions and what caused the conflict to trigger.
Snapshot of the view the instructor receives, from the previous group:
On the 'review for mail check' page, more information can be shown to indicate whether a review has a conflict. We added indicators beside each review that has a conflict, as well as a link to review_conflict_report_url, which lets the instructor reach the review conflict report page directly from the email.
Chart before improvements.
Chart after improvements.
The improved chart has the student name on the y-axis, scaled chart sizes, and more intuitive highlighting, with red bars indicating out-of-limits scores. This is a significant improvement over the previous implementation, where only the student name was highlighted to indicate an out-of-limits score.
Improved Tests
Added additional tests to cover the average and standard deviation calculations.
File: spec/helpers/review_mapping_helper_spec.rb
 describe '#average_of_round' do
   it 'should return correct average' do
     question_answer = {'a' => 1, 'b' => 2, 'c' => 3}
     expect(helper.average_of_round(question_answer)).to eq(2)
   end
 end

 describe '#std_of_round' do
   it 'should return correct standard deviation' do
     question_answer = {'a' => 1, 'b' => 2, 'c' => 3}
     expect(helper.std_of_round(2, question_answer)).to eq(0.82)
   end
 end
File: spec/controllers/reports_controller_spec.rb
Improved tests for conflict report controller.
 describe '#review_conflict_response_map' do
   context 'when type is ReviewConflictResponseMap' do
     it 'renders response_report page with corresponding data' do
       allow(Team).to receive(:where).with(parent_id: '1').and_return([test]).ordered
       params = { id: 1, report: {type: 'ReviewConflictResponseMap'}, user: 'no one' }
       get :response_report, params
       expect(response).to render_template(:response_report)
     end
   end
 end

 describe 'reviewers_name_id_by_reviewee_and_assignment' do
   before(:each) do
     @assignment_id = 1
     @reviewee_id = 1
   end

   it 'returns reviewers name from Answer by reviewee and assignment id from db which is not empty' do
     allow(User).to receive(:where).with(@reviewee_id, @assignment_id).and_return([user1])
     expect(User.where(@reviewee_id, @assignment_id)).to eq([user1])
   end
 end
Relevant Links
GitHub - https://github.com/salmonandrew/expertiza
Pull Request - https://github.com/expertiza/expertiza/pull/1840
Demo Video - https://youtu.be/D_e80coRkLk
Team
Mentor: Sanket Pai (sgpai2@ncsu.edu)
- Yen-An Jou (yjou@ncsu.edu)
- Xiwen Chen (xchen33@ncsu.edu)
- Andrew Hirasawa (achirasa@ncsu.edu)
- Derrick Li (xli56@ncsu.edu)
References / Previous Implementation
- Previous wiki page - E1865 Conflict Notification Fall 2018
- Previous Final Video - Final Video
- Previous Pull Request - Pull Request 1445
- Previous GitHub Repo - GitHub Repo