<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.expertiza.ncsu.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Atrived</id>
	<title>Expertiza_Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.expertiza.ncsu.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Atrived"/>
	<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=Special:Contributions/Atrived"/>
	<updated>2026-05-17T20:03:00Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.41.0</generator>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131342</id>
		<title>CSC/ECE 517 Fall 2019 - E1975. Generalize Review Versioning</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131342"/>
		<updated>2019-12-17T11:12:28Z</updated>

		<summary type="html">&lt;p&gt;Atrived: /* Review Update */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== '''Introduction''' ==&lt;br /&gt;
Expertiza is a peer-review-based learning platform in which students' work products are evaluated and scored by other students. The review process occurs in stages: after the peer group submits initial feedback on a work product, students are given time to incorporate the comments and improve their work. This second version of the work product is reviewed and scored again. The process may be repeated, and the average score from the last review stage is taken as the final score for that work product. This incremental development of work products and progressive learning is the fundamental concept underlying the Expertiza system.&lt;br /&gt;
&lt;br /&gt;
=====Purpose=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove scores of reviews that are not redone. The system also triggers an email to the reviewer when the submission changes. Scores are calculated on the fly based on the rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Scope=====&lt;br /&gt;
This project will enhance the existing scoring and peer-review functionality and its underlying implementation, along with testing the existing implementation. Testing of score calculation, and of the display of weighted and non-weighted scores based on the combination of rubrics, is part of this project. The project also adds a feature allowing the author to notify reviewers of a change in the submission.&lt;br /&gt;
&lt;br /&gt;
=====Background=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove scores of reviews that are not redone. Scores are calculated based on the rubrics defined in the system. An implementation for score computation already exists, but it does not work for multipart rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Standards=====&lt;br /&gt;
The standards adhered to in this system are those given for the Rails framework, and the code should follow Rails and object-oriented design principles. Any newly added code will adhere to the guidelines below:&lt;br /&gt;
&lt;br /&gt;
● https://github.com/rubocop-hq/rails-style-guide &amp;lt;br&amp;gt;&lt;br /&gt;
● http://wiki.expertiza.ncsu.edu/index.php/Object-Oriented_Design_and_Programming&lt;br /&gt;
&lt;br /&gt;
== '''Problem Statement''' ==&lt;br /&gt;
=====Problem=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove scores of reviews that are not redone. This functionality needs to be tested; it works in some cases, but we suspect it is not entirely correct.&lt;br /&gt;
&lt;br /&gt;
'''Profile Email Preference''':&lt;br /&gt;
Users in Expertiza can control which kinds of email they receive. Reviewers will receive an email when there is a change in a submission; this can be controlled by selecting the option “When someone else submits work I am assigned to review” on the user's profile page. Because the checkboxes sit far off in the right corner of the screen, their position needs to be changed to make the page more user-friendly.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Generalize_review.jpg]]&lt;br /&gt;
&lt;br /&gt;
Scores for students in Expertiza can be based on a combination of rubrics: review rubrics for each round of review, author feedback, and teammate review. In CSC/ECE 517, we don’t really use the student-assigned scores to grade projects, but the peer-review score shown to the student is based entirely on the second round of review.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is how scores are calculated:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1. If only zero-weighted scores are available (here, in the first round of review), the average of all zero-weighted scores should be shown in a gray font. An information button should explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is:&amp;lt;br&amp;gt;&lt;br /&gt;
    80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
&lt;br /&gt;
3. If a review is submitted, and then the author(s) update the submission before the end of the current round, it will reopen the review, and the reviewer can then go in and update the submitted review. '''However, the previous review score that was given in the current round will count until the reviewer updates the submitted review'''. '''This functionality doesn’t work for multipart rubrics.''' &amp;lt;br&amp;gt;&lt;br /&gt;
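The weighted-average rule in item 2 can be illustrated with a short Ruby sketch. The method name <code>weighted_average</code> and the input layout are hypothetical, for illustration only; they are not Expertiza's actual code.

```ruby
# Illustrative sketch of the weighted-average rule described above.
# The method name and input layout are hypothetical, not Expertiza's API.
#
# Each component supplies a weight (weights sum to 100) and a list of
# percentage scores; the overall score is the sum of each component's
# average multiplied by its weight.
def weighted_average(components)
  components.sum do |c|
    avg = c[:scores].sum.to_f / c[:scores].size
    avg * c[:weight] / 100.0
  end
end

# Two Round 2 reviews of 80% at weight 90, plus one author-feedback
# score of 100% at weight 10, as in the example above:
weighted_average([
  { weight: 90, scores: [80, 80] },
  { weight: 10, scores: [100] }
]) # => 82.0
```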
&lt;br /&gt;
The system also doesn’t generate emails to reviewers when the author makes a new submission without changing submission parameters such as the submission URL. For example, a new submission is made via a new deployment, but the deployment URL doesn’t change. In this case as well, without changing the current flow, the author should have an option to notify all reviewers to revisit the submission.&lt;br /&gt;
&lt;br /&gt;
=====Interface Requirement=====&lt;br /&gt;
The overall user interface will remain the same as in the current system; only the checkboxes for email preferences on the user profile will be moved closer to their labels to be more visible.&lt;br /&gt;
&lt;br /&gt;
=====Quality Requirements=====&lt;br /&gt;
The interface changes are minimal and easy to adapt to, and the checkboxes for email preferences will be clearly visible. The proposed changes should not affect current system functionality in any way; the solution is only intended to enrich the project's features.&lt;br /&gt;
&lt;br /&gt;
=====Performance Requirements=====&lt;br /&gt;
No performance overhead is created while meeting these requirements.&lt;br /&gt;
&lt;br /&gt;
=====Assumptions=====&lt;br /&gt;
The functionality is assumed to work when the assignment rubric is not multipart.&lt;br /&gt;
&lt;br /&gt;
== '''Proposed Solution''' ==&lt;br /&gt;
=====Zero-weighted scores=====&lt;br /&gt;
If only zero-weighted scores are available in the first round of review, the average of all zero-weighted scores will be shown in a gray font. An information button will explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”)&lt;br /&gt;
&lt;br /&gt;
Inside the compute_total_score method of the on_the_fly_calc.rb module, we'll check whether a weighted score exists; if not, we'll calculate the non-weighted score. We'll also add a method to the questionnaire model to get the non-weighted score. A few UI changes are required to display the non-weighted score in gray with an information icon and the weighted score in black.&lt;br /&gt;
&lt;br /&gt;
Possible files to be modified:&lt;br /&gt;
&lt;br /&gt;
models/on_the_fly_calc.rb&amp;lt;br&amp;gt;&lt;br /&gt;
models/questionnaire.rb&amp;lt;br&amp;gt;&lt;br /&gt;
views/grades/view_team.html.erb&lt;br /&gt;
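As a sketch of the fallback logic described above, in plain Ruby. All names here, including the Score struct, the method signature, and the returned hash, are hypothetical illustrations and not the actual on_the_fly_calc.rb API.

```ruby
# Hypothetical sketch of the fallback described above: if no non-zero-weighted
# scores exist yet, fall back to the plain average of the zero-weighted scores
# (which the UI would render in gray with an information icon).
# Names are illustrative, not Expertiza's actual implementation.
Score = Struct.new(:value, :weight)

def compute_total_score(scores)
  weighted = scores.reject { |s| s.weight.zero? }
  if weighted.empty?
    # Only zero-weighted scores so far: show their plain average,
    # flagged as not counting toward the author's final grade.
    avg = scores.sum(&:value).to_f / scores.size
    { value: avg, counts_toward_grade: false }
  else
    # Otherwise, use the weighted average of the non-zero-weighted scores.
    total = weighted.sum { |s| s.value * s.weight / 100.0 }
    { value: total, counts_toward_grade: true }
  end
end
```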
&lt;br /&gt;
===== Non-zero weighted scores=====&lt;br /&gt;
If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is 80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
&lt;br /&gt;
The score calculation was initially tested manually to verify scenarios with multiple weights and rounds.&lt;br /&gt;
An existing spec for score calculation with a weighted average, covering both varying and non-varying rubrics, already exercises this scenario.&lt;br /&gt;
The spec lives in spec/models/assignment_spec.rb; the &amp;quot;scores&amp;quot; method of models/assignment.rb is called for online (on-the-fly) score calculation.&lt;br /&gt;
&lt;br /&gt;
The following file has the spec for weighted-average score calculation:&lt;br /&gt;
spec/models/assignment_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Review Update=====&lt;br /&gt;
If a review is submitted, and then the author(s) update the submission before the end of the current round, it will reopen the review, and the reviewer can then go in and update the submitted review. However, the previous review score that was given in the current round will count until the reviewer updates the submitted review.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;This feature can be broken down into the following sub-cases:&lt;br /&gt;
&amp;lt;br&amp;gt;1. A submission can only be made before the deadline&lt;br /&gt;
&amp;lt;br&amp;gt;2. A reviewer can edit a review only before the deadline&lt;br /&gt;
&amp;lt;br&amp;gt;3. Score calculation should only be done for the latest submitted review&lt;br /&gt;
&lt;br /&gt;
Specs for the sub-cases:&lt;br /&gt;
&lt;br /&gt;
1. Spec to ensure submission can't be made after the deadline&lt;br /&gt;
&lt;br /&gt;
File : spec/features/post_deadline_review_submission_spec.rb&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
  it 'should not allow submission after deadline' do&lt;br /&gt;
    user = User.find_by(name: &amp;quot;student2065&amp;quot;)&lt;br /&gt;
    stub_current_user(user, user.role.name, user.role)&lt;br /&gt;
    # goto student_task page, which has link to &amp;quot;Your work&amp;quot;&lt;br /&gt;
    visit 'student_task/view?id=1'&lt;br /&gt;
    # the page will have text &amp;quot;Your work&amp;quot; but will be grayed&lt;br /&gt;
    expect(page).to have_content &amp;quot;Your work&amp;quot;&lt;br /&gt;
    # the page will not have link to content &amp;quot;Your work&amp;quot;&lt;br /&gt;
    expect{click_link &amp;quot;Your work&amp;quot;}.to raise_error(Capybara::ElementNotFound)&lt;br /&gt;
  end&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
2. Spec to ensure review can be done only before deadline&lt;br /&gt;
&lt;br /&gt;
File : spec/features/post_deadline_review_submission_spec.rb&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
  it &amp;quot;should not be able to review work after deadline&amp;quot; do&lt;br /&gt;
    # The spec is written to reproduce following bug. &amp;quot;Others' work&amp;quot; link open after deadline passed&lt;br /&gt;
    user = User.find_by(name: &amp;quot;student2065&amp;quot;)&lt;br /&gt;
    stub_current_user(user, user.role.name, user.role)&lt;br /&gt;
    visit 'student_task/view?id=1'&lt;br /&gt;
    # the page should have content, but after deadline passes it is displayed as gray&lt;br /&gt;
    # but there should not be any link attached to it&lt;br /&gt;
    expect(page).to have_content &amp;quot;Others' work&amp;quot;&lt;br /&gt;
    # this is the bug, even after deadline has passed, the link is still present&lt;br /&gt;
    # the ui comment in file views/student_task/view.html.erb says&lt;br /&gt;
    # &amp;lt;!--Akshay: Fix Issue 1218 - this link is disabled if assignment does not require any peer reviews--&amp;gt;&lt;br /&gt;
    # But the link seems to be open even after deadline passed.&lt;br /&gt;
    # Screenshot attached as part of wiki for E1975, Fall 2019&lt;br /&gt;
    expect(page).to have_link(&amp;quot;Others' work&amp;quot;, href: &amp;quot;/student_review/list?id=1&amp;quot;)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
The test scenario is able to reproduce the following bug: at times the review link is open even after the deadline has passed. The bug is also shown in the following screenshot for Program 1.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenShotReview.jpg]]&lt;br /&gt;
&lt;br /&gt;
3. Score calculation&lt;br /&gt;
&lt;br /&gt;
Score calculation for review assignments is part of models/assignment.rb. It is handled by the &amp;quot;scores&amp;quot; method in this file, and testing is done for both multipart varying and non-varying rubrics.&lt;br /&gt;
The respective spec is written in spec/models/assignment_spec.rb.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
describe '#scores' do&lt;br /&gt;
    context 'when assignment is varying rubric by round assignment' do&lt;br /&gt;
      it 'calculates scores in each round of each team in current assignment' do&lt;br /&gt;
        allow(participant).to receive(:scores).with(review1: [question]).and_return(98)&lt;br /&gt;
        allow(assignment).to receive(:varying_rubrics_by_round?).and_return(true)&lt;br /&gt;
        allow(assignment).to receive(:num_review_rounds).and_return(1)&lt;br /&gt;
        allow(ReviewResponseMap).to receive(:get_responses_for_team_round).with(team, 1).and_return([response])&lt;br /&gt;
        allow(Answer).to receive(:compute_scores).with([response], [question]).and_return(max: 95, min: 88, avg: 90)&lt;br /&gt;
        expect(assignment.scores(review1: [question]).inspect).to eq(&amp;quot;{:participants=&amp;gt;{:\&amp;quot;1\&amp;quot;=&amp;gt;98}, :teams=&amp;gt;{:\&amp;quot;0\&amp;quot;=&amp;gt;{:team=&amp;gt;#&amp;lt;AssignmentTeam id: 1, &amp;quot;\&lt;br /&gt;
          &amp;quot;name: \&amp;quot;no team\&amp;quot;, parent_id: 1, type: \&amp;quot;AssignmentTeam\&amp;quot;, comments_for_advertisement: nil, advertise_for_partner: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;submitted_hyperlinks: \&amp;quot;---\\n- https://www.expertiza.ncsu.edu\&amp;quot;, directory_num: 0, grade_for_submission: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;comment_for_submission: nil&amp;gt;, :scores=&amp;gt;{:max=&amp;gt;95, :min=&amp;gt;88, :avg=&amp;gt;90.0}}}}&amp;quot;)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    context 'when assignment is not varying rubric by round assignment' do&lt;br /&gt;
      it 'calculates scores of each team in current assignment' do&lt;br /&gt;
        allow(participant).to receive(:scores).with(review: [question]).and_return(98)&lt;br /&gt;
        allow(assignment).to receive(:varying_rubrics_by_round?).and_return(false)&lt;br /&gt;
        allow(ReviewResponseMap).to receive(:get_assessments_for).with(team).and_return([response])&lt;br /&gt;
        allow(Answer).to receive(:compute_scores).with([response], [question]).and_return(max: 95, min: 88, avg: 90)&lt;br /&gt;
        expect(assignment.scores(review: [question]).inspect).to eq(&amp;quot;{:participants=&amp;gt;{:\&amp;quot;1\&amp;quot;=&amp;gt;98}, :teams=&amp;gt;{:\&amp;quot;0\&amp;quot;=&amp;gt;{:team=&amp;gt;#&amp;lt;AssignmentTeam id: 1, &amp;quot;\&lt;br /&gt;
          &amp;quot;name: \&amp;quot;no team\&amp;quot;, parent_id: 1, type: \&amp;quot;AssignmentTeam\&amp;quot;, comments_for_advertisement: nil, advertise_for_partner: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;submitted_hyperlinks: \&amp;quot;---\\n- https://www.expertiza.ncsu.edu\&amp;quot;, directory_num: 0, grade_for_submission: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;comment_for_submission: nil&amp;gt;, :scores=&amp;gt;{:max=&amp;gt;95, :min=&amp;gt;88, :avg=&amp;gt;90}}}}&amp;quot;)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
The tests for &amp;quot;submission after deadline&amp;quot; and &amp;quot;review after deadline&amp;quot; were added because they were not present. A new spec file, spec/features/post_deadline_review_submission_spec.rb, was created to accommodate the two cases.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Usecasenew.png]]&lt;br /&gt;
&lt;br /&gt;
=='''Test Plan'''==&lt;br /&gt;
&lt;br /&gt;
=====Rspec=====&lt;br /&gt;
&lt;br /&gt;
New test cases will be added to validate the functionalities that are not working as expected, as well as for the features newly added to the system.&lt;br /&gt;
Test cases will be added to these files:&lt;br /&gt;
&lt;br /&gt;
spec/features/post_deadline_review_submission_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
spec/models/assignment_participant_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The scenarios covered by the newly introduced test cases are as follows: &amp;lt;br&amp;gt;&lt;br /&gt;
1. Zero-weighted scores with gray mark: rspec spec/models/assignment_participant_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
2. Non-zero weighted scores &amp;lt;br&amp;gt;&lt;br /&gt;
3. Review update: rspec spec/features/post_deadline_review_submission_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Manual Testing=====&lt;br /&gt;
&lt;br /&gt;
'''Step 1'''. The instructor will create an assignment and assign the zero and non-zero weights to the rubrics.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 2'''. The author submits the work in Expertiza.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 3'''. Peers check the submitted work and give reviews for it.&amp;lt;br&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
'''Test 1: Zero weighted scores shown in gray.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Login as an author in expertiza and go to scores page to view the feedback.&lt;br /&gt;
Zero weighted scores should be displayed in gray to the logged in user.&lt;br /&gt;
 &lt;br /&gt;
'''Test 2: Information button.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Login as an author in expertiza and go to scores page to view the feedback.&lt;br /&gt;
The information button should explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”)&lt;br /&gt;
 &lt;br /&gt;
'''Step 4'''. The mentor evaluates the final submission and gives grades.&lt;br /&gt;
 &lt;br /&gt;
'''Test 3: Display non-zero weighted scores'''&amp;lt;br&amp;gt;&lt;br /&gt;
Login as an author in expertiza and go to scores page to view the feedback.&lt;br /&gt;
The scores shown for the assignment should be the weighted average of the scores.&lt;br /&gt;
The zero weighted scores have to be replaced with non-zero weighted scores.&lt;br /&gt;
 &lt;br /&gt;
'''Step 5''': The author updates the work and re-submits it.&lt;br /&gt;
 &lt;br /&gt;
'''Test 4: Review Update'''&amp;lt;br&amp;gt;&lt;br /&gt;
The review should be reopened, and the reviewer should be able to go in and update the submitted review.&lt;br /&gt;
If the review is not updated, the previous review score has to be retained.&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131341</id>
		<title>CSC/ECE 517 Fall 2019 - E1975. Generalize Review Versioning</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131341"/>
		<updated>2019-12-17T11:10:08Z</updated>

		<summary type="html">&lt;p&gt;Atrived: /* Performance Requirements */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== '''Introduction''' ==&lt;br /&gt;
Expertiza is a peer-review based learning platform in which work-products by students are evaluated and scored by peer-review of other students. Review process occurs in stages. After submission of initial feedback for a work-product by the peer-group, students are allowed time to incorporate the comments and improve their work. This second version of the work product is reviewed again and scores are given. This process might be repeated again and the average score from the last review stage is considered as the final score for that work-product. This incremental development of work-products and progressive learning is the fundamental concept underlying the Expertiza system.&lt;br /&gt;
&lt;br /&gt;
=====Purpose=====&lt;br /&gt;
The system currently is designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and automatically remove scores of reviews that are not redone. The system also triggers an email to reviewer when there is a change in the submission. The scores are calculated on the fly based on the rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Scope=====&lt;br /&gt;
This project will enhance the existing scoring and peer review functionality and the underlying implementation along with the testing of the existing implementation. Testing of the score calculation, display for the weighted and non-weighted scores based on the combination of the rubrics are part of this project. This project also includes an additional feature for the author to notify the reviewers for the change in the submission.&lt;br /&gt;
&lt;br /&gt;
=====Background=====&lt;br /&gt;
The system currently is designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and automatically remove scores of reviews that are not redone. The scores are calculated based on the rubrics defined in the system. There’s already an existing implementation for the computation of score but it doesn’t work for multipart rubric&lt;br /&gt;
&lt;br /&gt;
=====Standards=====&lt;br /&gt;
The new standards that will be adhered to in this system are those given for the rails framework, and the code should follow rails and object oriented design principles. Any newly added code will adhere to the guidelines mentioned below:&lt;br /&gt;
&lt;br /&gt;
● https://github.com/rubocop-hq/rails-style-guide &amp;lt;br&amp;gt;&lt;br /&gt;
● http://wiki.expertiza.ncsu.edu/index.php/Object-Oriented_Design_and_Programming​.&lt;br /&gt;
&lt;br /&gt;
== '''Problem Statement''' ==&lt;br /&gt;
=====Problem=====&lt;br /&gt;
The system currently is designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and automatically remove scores of reviews that are not redone. The functionality needs to be tested. It works in some cases, but we are suspicious that it is not entirely correct.&lt;br /&gt;
&lt;br /&gt;
'''Profile Email Preference''':&lt;br /&gt;
User in expertiza has liberty to control what kind of emails they should receive. As the reviewers will receive an email for the change in submission. This can be controlled by selecting an option (When someone else submits work I am assigned to review) on the profile page of the user. As the checkboxes are far away on the right corner of the screen, the position of checkboxes need to be changed to make it more user friendly. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Generalize_review.jpg]]&lt;br /&gt;
&lt;br /&gt;
Scores for students in Expertiza can be based on a combination of rubrics: review rubrics for each round of review, author feedback, and teammate review. In CSC/ECE 517, we don’t really use the student-assigned scores to grade projects, but the peer-review score shown to the student is based entirely on the second round of review&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is how scores are calculated:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1. If only zero-weighted scores are available (here, in the first round of review), the average of all zero-weighted scores should be shown in grey font. An information button should explain why the scores are shown in grey (“The reviews submitted so far do not count toward the author’s final grade.”):&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is::&amp;lt;br&amp;gt;&lt;br /&gt;
    80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
&lt;br /&gt;
3. If a review is submitted, and then the author(s) update the submission ​before the end of the current round​, it will reopen the review, and then the reviewer can go in and update the submitted review. ​'''However, the previous review score that was given in the current round will count until the reviewer updates the submitted review'''. '''This functionality doesn’t work for multipart rubrics.''' &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The system also doesn’t generate emails to reviewers in case a new submission is made by the author without changing submission parameters like submission URL. Eg: A new submission is made by making a new deployment. But the deployment URL doesn’t change. In this case as well, without changing the current flow the author should have an option to notify all the reviewers to revisit the submission.&lt;br /&gt;
&lt;br /&gt;
=====Interface Requirement=====&lt;br /&gt;
The overall user interface would remain the same as of the current system. Only the checkboxes for email preferences on user profile will be moved closer to the text to be more visible.&lt;br /&gt;
&lt;br /&gt;
=====Quality Requirements=====&lt;br /&gt;
The interface changes are minimal and very easy to adapt. The checkboxes will be clearly visible for email preferences. The proposed changes should not affect the current system functionality in any way. The solution is only intended to enrich the project features.&lt;br /&gt;
&lt;br /&gt;
=====Performance Requirements=====&lt;br /&gt;
There are no performance overhead created while meeting the requirements.&lt;br /&gt;
&lt;br /&gt;
=====Assumptions=====&lt;br /&gt;
The functionality should work in case the assignment rubric is not multi-part.&lt;br /&gt;
&lt;br /&gt;
== '''Proposed Solution''' ==&lt;br /&gt;
=====Zero-weighted scores=====&lt;br /&gt;
If only zero-weighted scores are available in the first round of review, the average of all zero-weighted scores will be shown in a gray font. An information button will explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade”)&lt;br /&gt;
&lt;br /&gt;
Inside compute_total_score method in on_the_fly_calc.rb module, we'll check if the weighted score exists, if not then we'll calculate the non weighted score. We'll also have to add a method in the questionnaire model to get non weighted score. Few changes are required for the UI to display non-weighted score in grey with information icon and non weighted score in black. &lt;br /&gt;
&lt;br /&gt;
Possible files to be modified. &lt;br /&gt;
&lt;br /&gt;
models/on_the_fly_calc.rb&amp;lt;br&amp;gt;&lt;br /&gt;
models/questionnaire.rb&amp;lt;br&amp;gt;&lt;br /&gt;
models/view_team.html.erb&lt;br /&gt;
&lt;br /&gt;
===== Non-zero weighted scores=====&lt;br /&gt;
If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is 80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
&lt;br /&gt;
The score calculation was initially tested manually to verify the scenarios with multiple weights and rounds. &lt;br /&gt;
Also, existing spec for score calculation with a weighted average for both varying and non-varying rubric is covering this scenario.&lt;br /&gt;
The spec is written under /spec/models/assignment_spec.rb. The &amp;quot;score&amp;quot; function of the &amp;quot;model/assignment.rb&amp;quot; is called for online (on the fly score calculation).&lt;br /&gt;
&lt;br /&gt;
Following file has the spec for weighted average score calculation.&lt;br /&gt;
spec/models/assignment_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Review Update=====&lt;br /&gt;
If a review is submitted, and then the author(s) update the submission ​before the end of the current round​, it will reopen the review, and then the reviewer can go in and update the submitted review. However, the previous review score that was given in the current round will count until the reviewer updates the submitted review. However, the previous review score that was given in the current round will count until the reviewer updates the submitted review.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;This feature can be broken down into the following sub-cases:&lt;br /&gt;
&amp;lt;br&amp;gt;1. A submission can only be made before deadline&lt;br /&gt;
&amp;lt;br&amp;gt;2. A review can edit a review only before deadline&lt;br /&gt;
&amp;lt;br&amp;gt;3. Score calculation should be only done for latest submitted review&lt;br /&gt;
&lt;br /&gt;
Subcases:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
1. Spec to ensure submission can't be made after the deadline&lt;br /&gt;
&lt;br /&gt;
File : spec/features/post_deadline_review_submission_spec.rb&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
  it 'should not allow submission after deadline' do&lt;br /&gt;
    user = User.find_by(name: &amp;quot;student2065&amp;quot;)&lt;br /&gt;
    stub_current_user(user, user.role.name, user.role)&lt;br /&gt;
    # goto student_task page, which has link to &amp;quot;Your work&amp;quot;&lt;br /&gt;
    visit 'student_task/view?id=1'&lt;br /&gt;
    # the page will have text &amp;quot;Your work&amp;quot; but will be grayed&lt;br /&gt;
    expect(page).to have_content &amp;quot;Your work&amp;quot;&lt;br /&gt;
    # the page will not have link to content &amp;quot;Your work&amp;quot;&lt;br /&gt;
    expect{click_link &amp;quot;Your work&amp;quot;}.to raise_error(Capybara::ElementNotFound)&lt;br /&gt;
  end&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
2. Spec to ensure review can be done only before deadline&lt;br /&gt;
&lt;br /&gt;
File : spec/features/post_deadline_review_submission_spec.rb&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
  it &amp;quot;should not be able to review work after deadline&amp;quot; do&lt;br /&gt;
    # The spec is written to reproduce following bug. &amp;quot;Others' work&amp;quot; link open after deadline passed&lt;br /&gt;
    user = User.find_by(name: &amp;quot;student2065&amp;quot;)&lt;br /&gt;
    stub_current_user(user, user.role.name, user.role)&lt;br /&gt;
    visit 'student_task/view?id=1'&lt;br /&gt;
    # the page should have content, but after deadline passes it is displayed as gray&lt;br /&gt;
    # but there should not be any link attached to it&lt;br /&gt;
    expect(page).to have_content &amp;quot;Others' work&amp;quot;&lt;br /&gt;
    # this is the bug, even after deadline has passed, the link is still present&lt;br /&gt;
    # the ui comment in file views/student_task/view.html.erb says&lt;br /&gt;
    # &amp;lt;!--Akshay: Fix Issue 1218 - this link is disabled if assignment does not require any peer reviews--&amp;gt;&lt;br /&gt;
    # But the link seems to be open even after deadline passed.&lt;br /&gt;
    # Screenshot attached as part of wiki for E1975, Fall 2019&lt;br /&gt;
    expect(page).to have_link(&amp;quot;Others' work&amp;quot;, href: &amp;quot;/student_review/list?id=1&amp;quot;)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
The test scenario reproduces the following bug: at times the review link is open even after the deadline has passed. The bug is also shown in the following screenshot for Program 1.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenShotReview.jpg]]&lt;br /&gt;
&lt;br /&gt;
3. Score calculation&lt;br /&gt;
&lt;br /&gt;
Score calculation for assignments under review is part of models/assignment.rb. It is handled by the &amp;quot;scores&amp;quot; function in this file, and testing is done for both multipart varying and non-varying rubrics.&lt;br /&gt;
The respective spec is written in file spec/models/assignment_spec.rb. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
describe '#scores' do&lt;br /&gt;
    context 'when assignment is varying rubric by round assignment' do&lt;br /&gt;
      it 'calculates scores in each round of each team in current assignment' do&lt;br /&gt;
        allow(participant).to receive(:scores).with(review1: [question]).and_return(98)&lt;br /&gt;
        allow(assignment).to receive(:varying_rubrics_by_round?).and_return(true)&lt;br /&gt;
        allow(assignment).to receive(:num_review_rounds).and_return(1)&lt;br /&gt;
        allow(ReviewResponseMap).to receive(:get_responses_for_team_round).with(team, 1).and_return([response])&lt;br /&gt;
        allow(Answer).to receive(:compute_scores).with([response], [question]).and_return(max: 95, min: 88, avg: 90)&lt;br /&gt;
        expect(assignment.scores(review1: [question]).inspect).to eq(&amp;quot;{:participants=&amp;gt;{:\&amp;quot;1\&amp;quot;=&amp;gt;98}, :teams=&amp;gt;{:\&amp;quot;0\&amp;quot;=&amp;gt;{:team=&amp;gt;#&amp;lt;AssignmentTeam id: 1, &amp;quot;\&lt;br /&gt;
          &amp;quot;name: \&amp;quot;no team\&amp;quot;, parent_id: 1, type: \&amp;quot;AssignmentTeam\&amp;quot;, comments_for_advertisement: nil, advertise_for_partner: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;submitted_hyperlinks: \&amp;quot;---\\n- https://www.expertiza.ncsu.edu\&amp;quot;, directory_num: 0, grade_for_submission: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;comment_for_submission: nil&amp;gt;, :scores=&amp;gt;{:max=&amp;gt;95, :min=&amp;gt;88, :avg=&amp;gt;90.0}}}}&amp;quot;)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    context 'when assignment is not varying rubric by round assignment' do&lt;br /&gt;
      it 'calculates scores of each team in current assignment' do&lt;br /&gt;
        allow(participant).to receive(:scores).with(review: [question]).and_return(98)&lt;br /&gt;
        allow(assignment).to receive(:varying_rubrics_by_round?).and_return(false)&lt;br /&gt;
        allow(ReviewResponseMap).to receive(:get_assessments_for).with(team).and_return([response])&lt;br /&gt;
        allow(Answer).to receive(:compute_scores).with([response], [question]).and_return(max: 95, min: 88, avg: 90)&lt;br /&gt;
        expect(assignment.scores(review: [question]).inspect).to eq(&amp;quot;{:participants=&amp;gt;{:\&amp;quot;1\&amp;quot;=&amp;gt;98}, :teams=&amp;gt;{:\&amp;quot;0\&amp;quot;=&amp;gt;{:team=&amp;gt;#&amp;lt;AssignmentTeam id: 1, &amp;quot;\&lt;br /&gt;
          &amp;quot;name: \&amp;quot;no team\&amp;quot;, parent_id: 1, type: \&amp;quot;AssignmentTeam\&amp;quot;, comments_for_advertisement: nil, advertise_for_partner: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;submitted_hyperlinks: \&amp;quot;---\\n- https://www.expertiza.ncsu.edu\&amp;quot;, directory_num: 0, grade_for_submission: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;comment_for_submission: nil&amp;gt;, :scores=&amp;gt;{:max=&amp;gt;95, :min=&amp;gt;88, :avg=&amp;gt;90}}}}&amp;quot;)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Usecasenew.png]]&lt;br /&gt;
&lt;br /&gt;
=='''Test Plan'''==&lt;br /&gt;
&lt;br /&gt;
=====Rspec=====&lt;br /&gt;
&lt;br /&gt;
New test cases will be added to test and validate the functionalities that are not working as expected, as well as for features newly added to the system.&lt;br /&gt;
Test cases will be added to these files:&lt;br /&gt;
&lt;br /&gt;
spec/features/post_deadline_review_submission_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
spec/models/assignment_participant_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The scenarios covered by the newly introduced test cases are as follows: &amp;lt;br&amp;gt;&lt;br /&gt;
1. Zero-weighted scores with gray mark: rspec spec/models/assignment_participant_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
2. Non-zero-weighted scores&amp;lt;br&amp;gt;&lt;br /&gt;
3. Review update: rspec spec/features/post_deadline_review_submission_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Manual Testing=====&lt;br /&gt;
&lt;br /&gt;
'''Step 1'''. The instructor will create an assignment and assign the zero and non-zero weights to the rubrics.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 2'''. The author submits the work done in expertiza.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 3'''. Peers check the submitted work and give reviews for it.&amp;lt;br&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
'''Test 1: Zero weighted scores shown in gray.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Login as an author in expertiza and go to scores page to view the feedback.&lt;br /&gt;
Zero weighted scores should be displayed in gray to the logged in user.&lt;br /&gt;
 &lt;br /&gt;
'''Test 2: Information button.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Login as an author in expertiza and go to scores page to view the feedback.&lt;br /&gt;
The information button should explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”)&lt;br /&gt;
 &lt;br /&gt;
'''Step 4'''. The mentor evaluates the final submission and gives grades.&lt;br /&gt;
 &lt;br /&gt;
'''Test 3: Display non-zero weighted scores'''&amp;lt;br&amp;gt;&lt;br /&gt;
Login as an author in expertiza and go to scores page to view the feedback.&lt;br /&gt;
The scores shown for the assignment should be the weighted average of the scores.&lt;br /&gt;
The zero weighted scores have to be replaced with non-zero weighted scores.&lt;br /&gt;
 &lt;br /&gt;
'''Step 5''': The author updates the work and resubmits it.&lt;br /&gt;
 &lt;br /&gt;
'''Test 4: Review Update'''&amp;lt;br&amp;gt;&lt;br /&gt;
The review should be reopened, and the reviewer should be able to go in and update the submitted review.&lt;br /&gt;
If the review is not updated, the previous review score has to be retained.&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131340</id>
		<title>CSC/ECE 517 Fall 2019 - E1975. Generalize Review Versioning</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131340"/>
		<updated>2019-12-17T11:09:53Z</updated>

		<summary type="html">&lt;p&gt;Atrived: /* Performance Requirements */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== '''Introduction''' ==&lt;br /&gt;
Expertiza is a peer-review-based learning platform in which students' work products are evaluated and scored by their peers. The review process occurs in stages. After the peer group submits initial feedback on a work product, students are given time to incorporate the comments and improve their work. This second version of the work product is reviewed again and scored. The process may be repeated, and the average score from the last review stage is taken as the final score for that work product. This incremental development of work products and progressive learning is the fundamental concept underlying the Expertiza system.&lt;br /&gt;
&lt;br /&gt;
=====Purpose=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove scores of reviews that are not redone. The system also triggers an email to the reviewer when there is a change in the submission. The scores are calculated on the fly based on the rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Scope=====&lt;br /&gt;
This project will enhance the existing scoring and peer-review functionality and its underlying implementation, along with testing the existing implementation. Testing the score calculation and the display of weighted and non-weighted scores based on combinations of rubrics is part of this project. The project also adds a feature for the author to notify reviewers of a change in the submission.&lt;br /&gt;
&lt;br /&gt;
=====Background=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove scores of reviews that are not redone. The scores are calculated based on the rubrics defined in the system. There is already an existing implementation for computing the score, but it doesn’t work for multipart rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Standards=====&lt;br /&gt;
The standards adhered to in this system are those of the Rails framework, and the code should follow Rails and object-oriented design principles. Any newly added code will adhere to the guidelines below:&lt;br /&gt;
&lt;br /&gt;
● https://github.com/rubocop-hq/rails-style-guide &amp;lt;br&amp;gt;&lt;br /&gt;
● http://wiki.expertiza.ncsu.edu/index.php/Object-Oriented_Design_and_Programming&lt;br /&gt;
&lt;br /&gt;
== '''Problem Statement''' ==&lt;br /&gt;
=====Problem=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove scores of reviews that are not redone. This functionality needs to be tested; it works in some cases, but we suspect it is not entirely correct.&lt;br /&gt;
&lt;br /&gt;
'''Profile Email Preference''':&lt;br /&gt;
Users in Expertiza have the liberty to control what kinds of emails they receive. Reviewers will receive an email when there is a change in a submission; this can be controlled by selecting an option (When someone else submits work I am assigned to review) on the user's profile page. Because the checkboxes sit far away in the right corner of the screen, their position needs to be changed to make the page more user-friendly.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Generalize_review.jpg]]&lt;br /&gt;
&lt;br /&gt;
Scores for students in Expertiza can be based on a combination of rubrics: review rubrics for each round of review, author feedback, and teammate review. In CSC/ECE 517, we don’t really use the student-assigned scores to grade projects, but the peer-review score shown to the student is based entirely on the second round of review.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is how scores are calculated:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1. If only zero-weighted scores are available (here, in the first round of review), the average of all zero-weighted scores should be shown in grey font. An information button should explain why the scores are shown in grey (“The reviews submitted so far do not count toward the author’s final grade.”):&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is:&amp;lt;br&amp;gt;&lt;br /&gt;
    80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
&lt;br /&gt;
3. If a review is submitted, and then the author(s) update the submission before the end of the current round, it will reopen the review, and then the reviewer can go in and update the submitted review. '''However, the previous review score that was given in the current round will count until the reviewer updates the submitted review.''' '''This functionality doesn’t work for multipart rubrics.''' &amp;lt;br&amp;gt;&lt;br /&gt;
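The weighted-average arithmetic in point 2 above can be sanity-checked with a short script. This is only a minimal sketch; weighted_score is a hypothetical helper written for illustration, not an Expertiza method:

```ruby
# Weighted average of rubric scores, as in the example in point 2.
# `weighted_score` is a hypothetical helper, not part of Expertiza.
# Each pair holds [average score for that rubric, its weight].
def weighted_score(scores_with_weights)
  scores_with_weights.sum { |score, weight| score * weight }
end

# Two Round 2 reviews averaging 80%, weighted 90%;
# one author-feedback score of 100%, weighted 10%.
weighted_score([[80, 0.9], [100, 0.1]]) # => 82.0
```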
&lt;br /&gt;
The system also doesn’t generate emails to reviewers when the author makes a new submission without changing submission parameters such as the submission URL (e.g., a new submission is made by making a new deployment, but the deployment URL doesn’t change). In this case as well, without changing the current flow, the author should have an option to notify all the reviewers to revisit the submission.&lt;br /&gt;
&lt;br /&gt;
=====Interface Requirement=====&lt;br /&gt;
The overall user interface will remain the same as in the current system. Only the checkboxes for email preferences on the user profile will be moved closer to the text to make them more visible.&lt;br /&gt;
&lt;br /&gt;
=====Quality Requirements=====&lt;br /&gt;
The interface changes are minimal and easy to adapt to. The checkboxes for email preferences will be clearly visible. The proposed changes should not affect current system functionality in any way; the solution is only intended to enrich the project's features.&lt;br /&gt;
&lt;br /&gt;
=====Performance Requirements=====&lt;br /&gt;
No performance overhead is created while meeting these requirements.&lt;br /&gt;
&lt;br /&gt;
=====Assumptions=====&lt;br /&gt;
The existing functionality is assumed to work when the assignment rubric is not multipart.&lt;br /&gt;
&lt;br /&gt;
== '''Proposed Solution''' ==&lt;br /&gt;
=====Zero-weighted scores=====&lt;br /&gt;
If only zero-weighted scores are available in the first round of review, the average of all zero-weighted scores will be shown in a gray font. An information button will explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade”)&lt;br /&gt;
&lt;br /&gt;
Inside the compute_total_score method in the on_the_fly_calc.rb module, we'll check whether a weighted score exists; if not, we'll calculate the non-weighted score. We'll also have to add a method to the questionnaire model to get the non-weighted score. A few UI changes are required to display the non-weighted score in gray with an information icon, and the weighted score in black.&lt;br /&gt;
&lt;br /&gt;
Possible files to be modified:&lt;br /&gt;
&lt;br /&gt;
models/on_the_fly_calc.rb&amp;lt;br&amp;gt;&lt;br /&gt;
models/questionnaire.rb&amp;lt;br&amp;gt;&lt;br /&gt;
models/view_team.html.erb&lt;br /&gt;
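The branching described above could be sketched roughly as follows. This is only an illustrative outline of the proposed fallback; total_score, its input shape, and the provisional flag are hypothetical stand-ins, not the actual signatures in on_the_fly_calc.rb:

```ruby
# Illustrative sketch of the proposed fallback: if no non-zero-weighted
# scores exist yet, fall back to the plain average of the zero-weighted
# ones. All names here are hypothetical stand-ins for Expertiza methods.
def total_score(scores_with_weights)
  weighted = scores_with_weights.select { |_score, weight| weight > 0 }
  if weighted.any?
    # Weighted average over the non-zero-weighted rubrics.
    { score: weighted.sum { |s, w| s * w } / weighted.sum { |_s, w| w },
      provisional: false }
  else
    # Only zero-weighted reviews so far: show their plain average in gray.
    avg = scores_with_weights.sum { |s, _w| s } / scores_with_weights.size.to_f
    { score: avg, provisional: true }
  end
end
```

The provisional flag is what the view layer would use to decide whether to render the score in gray with the information icon.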
&lt;br /&gt;
===== Non-zero weighted scores=====&lt;br /&gt;
If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is 80%⨉90% + 100%⨉10% = 82%.&lt;br /&gt;
&lt;br /&gt;
The score calculation was initially tested manually to verify the scenarios with multiple weights and rounds. &lt;br /&gt;
Also, existing spec for score calculation with a weighted average for both varying and non-varying rubric is covering this scenario.&lt;br /&gt;
The spec is written under spec/models/assignment_spec.rb. The &amp;quot;scores&amp;quot; function of models/assignment.rb is called for on-the-fly score calculation.&lt;br /&gt;
&lt;br /&gt;
The following file has the spec for weighted-average score calculation.&lt;br /&gt;
spec/models/assignment_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Review Update=====&lt;br /&gt;
If a review is submitted, and then the author(s) update the submission before the end of the current round, the review is reopened and the reviewer can go in and update the submitted review. However, the previous review score that was given in the current round will count until the reviewer updates the submitted review.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;This feature can be broken down into the following sub-cases:&lt;br /&gt;
&amp;lt;br&amp;gt;1. A submission can only be made before the deadline&lt;br /&gt;
&amp;lt;br&amp;gt;2. A reviewer can edit a review only before the deadline&lt;br /&gt;
&amp;lt;br&amp;gt;3. Score calculation should be done only for the latest submitted review&lt;br /&gt;
&lt;br /&gt;
Specs for these subcases:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
1. Spec to ensure submission can't be made after the deadline&lt;br /&gt;
&lt;br /&gt;
File : spec/features/post_deadline_review_submission_spec.rb&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
  it 'should not allow submission after deadline' do&lt;br /&gt;
    user = User.find_by(name: &amp;quot;student2065&amp;quot;)&lt;br /&gt;
    stub_current_user(user, user.role.name, user.role)&lt;br /&gt;
    # goto student_task page, which has link to &amp;quot;Your work&amp;quot;&lt;br /&gt;
    visit 'student_task/view?id=1'&lt;br /&gt;
    # the page will have text &amp;quot;Your work&amp;quot; but will be grayed&lt;br /&gt;
    expect(page).to have_content &amp;quot;Your work&amp;quot;&lt;br /&gt;
    # the page will not have link to content &amp;quot;Your work&amp;quot;&lt;br /&gt;
    expect{click_link &amp;quot;Your work&amp;quot;}.to raise_error(Capybara::ElementNotFound)&lt;br /&gt;
  end&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
2. Spec to ensure review can be done only before deadline&lt;br /&gt;
&lt;br /&gt;
File : spec/features/post_deadline_review_submission_spec.rb&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
  it &amp;quot;should not be able to review work after deadline&amp;quot; do&lt;br /&gt;
    # The spec is written to reproduce following bug. &amp;quot;Others' work&amp;quot; link open after deadline passed&lt;br /&gt;
    user = User.find_by(name: &amp;quot;student2065&amp;quot;)&lt;br /&gt;
    stub_current_user(user, user.role.name, user.role)&lt;br /&gt;
    visit 'student_task/view?id=1'&lt;br /&gt;
    # the page should have content, but after deadline passes it is displayed as gray&lt;br /&gt;
    # but there should not be any link attached to it&lt;br /&gt;
    expect(page).to have_content &amp;quot;Others' work&amp;quot;&lt;br /&gt;
    # this is the bug, even after deadline has passed, the link is still present&lt;br /&gt;
    # the ui comment in file views/student_task/view.html.erb says&lt;br /&gt;
    # &amp;lt;!--Akshay: Fix Issue 1218 - this link is disabled if assignment does not require any peer reviews--&amp;gt;&lt;br /&gt;
    # But the link seems to be open even after deadline passed.&lt;br /&gt;
    # Screenshot attached as part of wiki for E1975, Fall 2019&lt;br /&gt;
    expect(page).to have_link(&amp;quot;Others' work&amp;quot;, href: &amp;quot;/student_review/list?id=1&amp;quot;)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
The test scenario reproduces the following bug: at times the review link is open even after the deadline has passed. The bug is also shown in the following screenshot for Program 1.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenShotReview.jpg]]&lt;br /&gt;
&lt;br /&gt;
3. Score calculation&lt;br /&gt;
&lt;br /&gt;
Score calculation for assignments under review is part of models/assignment.rb. It is handled by the &amp;quot;scores&amp;quot; function in this file, and testing is done for both multipart varying and non-varying rubrics.&lt;br /&gt;
The respective spec is written in file spec/models/assignment_spec.rb. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
describe '#scores' do&lt;br /&gt;
    context 'when assignment is varying rubric by round assignment' do&lt;br /&gt;
      it 'calculates scores in each round of each team in current assignment' do&lt;br /&gt;
        allow(participant).to receive(:scores).with(review1: [question]).and_return(98)&lt;br /&gt;
        allow(assignment).to receive(:varying_rubrics_by_round?).and_return(true)&lt;br /&gt;
        allow(assignment).to receive(:num_review_rounds).and_return(1)&lt;br /&gt;
        allow(ReviewResponseMap).to receive(:get_responses_for_team_round).with(team, 1).and_return([response])&lt;br /&gt;
        allow(Answer).to receive(:compute_scores).with([response], [question]).and_return(max: 95, min: 88, avg: 90)&lt;br /&gt;
        expect(assignment.scores(review1: [question]).inspect).to eq(&amp;quot;{:participants=&amp;gt;{:\&amp;quot;1\&amp;quot;=&amp;gt;98}, :teams=&amp;gt;{:\&amp;quot;0\&amp;quot;=&amp;gt;{:team=&amp;gt;#&amp;lt;AssignmentTeam id: 1, &amp;quot;\&lt;br /&gt;
          &amp;quot;name: \&amp;quot;no team\&amp;quot;, parent_id: 1, type: \&amp;quot;AssignmentTeam\&amp;quot;, comments_for_advertisement: nil, advertise_for_partner: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;submitted_hyperlinks: \&amp;quot;---\\n- https://www.expertiza.ncsu.edu\&amp;quot;, directory_num: 0, grade_for_submission: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;comment_for_submission: nil&amp;gt;, :scores=&amp;gt;{:max=&amp;gt;95, :min=&amp;gt;88, :avg=&amp;gt;90.0}}}}&amp;quot;)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    context 'when assignment is not varying rubric by round assignment' do&lt;br /&gt;
      it 'calculates scores of each team in current assignment' do&lt;br /&gt;
        allow(participant).to receive(:scores).with(review: [question]).and_return(98)&lt;br /&gt;
        allow(assignment).to receive(:varying_rubrics_by_round?).and_return(false)&lt;br /&gt;
        allow(ReviewResponseMap).to receive(:get_assessments_for).with(team).and_return([response])&lt;br /&gt;
        allow(Answer).to receive(:compute_scores).with([response], [question]).and_return(max: 95, min: 88, avg: 90)&lt;br /&gt;
        expect(assignment.scores(review: [question]).inspect).to eq(&amp;quot;{:participants=&amp;gt;{:\&amp;quot;1\&amp;quot;=&amp;gt;98}, :teams=&amp;gt;{:\&amp;quot;0\&amp;quot;=&amp;gt;{:team=&amp;gt;#&amp;lt;AssignmentTeam id: 1, &amp;quot;\&lt;br /&gt;
          &amp;quot;name: \&amp;quot;no team\&amp;quot;, parent_id: 1, type: \&amp;quot;AssignmentTeam\&amp;quot;, comments_for_advertisement: nil, advertise_for_partner: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;submitted_hyperlinks: \&amp;quot;---\\n- https://www.expertiza.ncsu.edu\&amp;quot;, directory_num: 0, grade_for_submission: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;comment_for_submission: nil&amp;gt;, :scores=&amp;gt;{:max=&amp;gt;95, :min=&amp;gt;88, :avg=&amp;gt;90}}}}&amp;quot;)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Usecasenew.png]]&lt;br /&gt;
&lt;br /&gt;
=='''Test Plan'''==&lt;br /&gt;
&lt;br /&gt;
=====Rspec=====&lt;br /&gt;
&lt;br /&gt;
New test cases will be added to test and validate the functionalities that are not working as expected, as well as for features newly added to the system.&lt;br /&gt;
Test cases will be added to these files:&lt;br /&gt;
&lt;br /&gt;
spec/features/post_deadline_review_submission_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
spec/models/assignment_participant_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The scenarios covered by the newly introduced test cases are as follows: &amp;lt;br&amp;gt;&lt;br /&gt;
1. Zero-weighted scores with gray mark: rspec spec/models/assignment_participant_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
2. Non-zero-weighted scores&amp;lt;br&amp;gt;&lt;br /&gt;
3. Review update: rspec spec/features/post_deadline_review_submission_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Manual Testing=====&lt;br /&gt;
&lt;br /&gt;
'''Step 1'''. The instructor will create an assignment and assign the zero and non-zero weights to the rubrics.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 2'''. The author submits the work done in expertiza.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 3'''. Peers check the submitted work and give reviews for it.&amp;lt;br&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
'''Test 1: Zero weighted scores shown in gray.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Login as an author in expertiza and go to scores page to view the feedback.&lt;br /&gt;
Zero weighted scores should be displayed in gray to the logged in user.&lt;br /&gt;
 &lt;br /&gt;
'''Test 2: Information button.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Login as an author in expertiza and go to scores page to view the feedback.&lt;br /&gt;
The information button should explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”)&lt;br /&gt;
 &lt;br /&gt;
'''Step 4'''. The mentor evaluates the final submission and gives grades.&lt;br /&gt;
 &lt;br /&gt;
'''Test 3: Display non-zero weighted scores'''&amp;lt;br&amp;gt;&lt;br /&gt;
Login as an author in expertiza and go to scores page to view the feedback.&lt;br /&gt;
The scores shown for the assignment should be the weighted average of the scores.&lt;br /&gt;
The zero weighted scores have to be replaced with non-zero weighted scores.&lt;br /&gt;
 &lt;br /&gt;
'''Step 5''': The author updates the work and resubmits it.&lt;br /&gt;
 &lt;br /&gt;
'''Test 4: Review Update'''&amp;lt;br&amp;gt;&lt;br /&gt;
The review should be reopened, and the reviewer should be able to go in and update the submitted review.&lt;br /&gt;
If the review is not updated, the previous review score has to be retained.&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131339</id>
		<title>CSC/ECE 517 Fall 2019 - E1975. Generalize Review Versioning</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131339"/>
		<updated>2019-12-17T11:08:56Z</updated>

		<summary type="html">&lt;p&gt;Atrived: /* Quality Requirements */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== '''Introduction''' ==&lt;br /&gt;
Expertiza is a peer-review-based learning platform in which students' work products are evaluated and scored by their peers. The review process occurs in stages. After the peer group submits initial feedback on a work product, students are given time to incorporate the comments and improve their work. This second version of the work product is reviewed again and scored. The process may be repeated, and the average score from the last review stage is taken as the final score for that work product. This incremental development of work products and progressive learning is the fundamental concept underlying the Expertiza system.&lt;br /&gt;
&lt;br /&gt;
=====Purpose=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove scores of reviews that are not redone. The system also triggers an email to the reviewer when there is a change in the submission. The scores are calculated on the fly based on the rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Scope=====&lt;br /&gt;
This project will enhance the existing scoring and peer-review functionality and its underlying implementation, along with testing the existing implementation. Testing the score calculation and the display of weighted and non-weighted scores based on combinations of rubrics is part of this project. The project also adds a feature for the author to notify reviewers of a change in the submission.&lt;br /&gt;
&lt;br /&gt;
=====Background=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove scores of reviews that are not redone. The scores are calculated based on the rubrics defined in the system. There is already an existing implementation for computing the score, but it doesn’t work for multipart rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Standards=====&lt;br /&gt;
The standards adhered to in this system are those of the Rails framework, and the code should follow Rails and object-oriented design principles. Any newly added code will adhere to the guidelines below:&lt;br /&gt;
&lt;br /&gt;
● https://github.com/rubocop-hq/rails-style-guide &amp;lt;br&amp;gt;&lt;br /&gt;
● http://wiki.expertiza.ncsu.edu/index.php/Object-Oriented_Design_and_Programming&lt;br /&gt;
&lt;br /&gt;
== '''Problem Statement''' ==&lt;br /&gt;
=====Problem=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove scores of reviews that are not redone. This functionality needs to be tested; it works in some cases, but we suspect it is not entirely correct.&lt;br /&gt;
&lt;br /&gt;
'''Profile Email Preference''':&lt;br /&gt;
Users in Expertiza have the liberty to control what kinds of emails they receive. Reviewers will receive an email when there is a change in a submission; this can be controlled by selecting an option (When someone else submits work I am assigned to review) on the user's profile page. Because the checkboxes sit far away in the right corner of the screen, their position needs to be changed to make the page more user-friendly.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Generalize_review.jpg]]&lt;br /&gt;
&lt;br /&gt;
Scores for students in Expertiza can be based on a combination of rubrics: review rubrics for each round of review, author feedback, and teammate review. In CSC/ECE 517, we don’t really use the student-assigned scores to grade projects, but the peer-review score shown to the student is based entirely on the second round of review.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is how scores are calculated:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1. If only zero-weighted scores are available (here, in the first round of review), the average of all zero-weighted scores should be shown in grey font. An information button should explain why the scores are shown in grey (“The reviews submitted so far do not count toward the author’s final grade.”):&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is:&amp;lt;br&amp;gt;&lt;br /&gt;
    80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
&lt;br /&gt;
3. If a review is submitted, and then the author(s) update the submission before the end of the current round, it will reopen the review, and then the reviewer can go in and update the submitted review. '''However, the previous review score that was given in the current round will count until the reviewer updates the submitted review.''' '''This functionality doesn’t work for multipart rubrics.''' &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The system also doesn’t generate emails to reviewers when the author makes a new submission without changing submission parameters such as the submission URL. For example, a new submission is made by creating a new deployment, but the deployment URL doesn’t change. In this case too, without changing the current flow, the author should have an option to notify all the reviewers to revisit the submission.&lt;br /&gt;
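&lt;br /&gt;
A minimal plain-Ruby sketch of such a notification action (the method, field names, and mailer interface here are our own illustration, not the existing Expertiza API; a real implementation would use ActionMailer and the review-mapping models):&lt;br /&gt;

```ruby
# Sketch: collect the distinct reviewer emails for a submission and send each
# one a "submission updated" notice. `review_maps` stands in for the team's
# ReviewResponseMap records; `deliver` is any callable mailer.
def notify_all_reviewers(review_maps, deliver)
  review_maps.map { |m| m[:reviewer_email] }.uniq.each do |email|
    deliver.call(to: email,
                 subject: "A submission you are assigned to review was updated")
  end
end
```

The only data needed is the list of reviewer emails, which matches the single database query mentioned under Performance Requirements.&lt;br /&gt;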
&lt;br /&gt;
=====Interface Requirement=====&lt;br /&gt;
The overall user interface will remain the same as in the current system. Only the checkboxes for email preferences on the user profile will be moved closer to the text so that they are more visible.&lt;br /&gt;
&lt;br /&gt;
=====Quality Requirements=====&lt;br /&gt;
The interface changes are minimal and easy to adapt to. The checkboxes for email preferences will be clearly visible. The proposed changes should not affect current system functionality in any way; the solution is only intended to enrich the project’s features.&lt;br /&gt;
&lt;br /&gt;
=====Performance Requirements=====&lt;br /&gt;
Adding tests for the multi-part rubric functionality should not add any performance overhead to the system. Sending emails through the “Notify All Reviewers” button should affect performance only minimally: the only overhead is a database query to fetch all the reviewers, which a well-designed database handles smoothly.&lt;br /&gt;
&lt;br /&gt;
=====Assumptions=====&lt;br /&gt;
The functionality should also work when the assignment rubric is not multi-part.&lt;br /&gt;
&lt;br /&gt;
== '''Proposed Solution''' ==&lt;br /&gt;
=====Zero-weighted scores=====&lt;br /&gt;
If only zero-weighted scores are available in the first round of review, the average of all zero-weighted scores will be shown in a gray font. An information button will explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”)&lt;br /&gt;
&lt;br /&gt;
Inside the compute_total_score method of the on_the_fly_calc.rb module, we’ll check whether a weighted score exists; if not, we’ll calculate the non-weighted score. We’ll also have to add a method to the questionnaire model to fetch the non-weighted score. A few UI changes are required to display the non-weighted score in gray with an information icon and the weighted score in black. &lt;br /&gt;
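&lt;br /&gt;
A plain-Ruby sketch of the intended fallback (illustrative only: the real logic will live in compute_total_score, and the method and key names here are our own, not the Expertiza API):&lt;br /&gt;

```ruby
# If every contributing rubric score has zero weight (e.g. only round 1
# reviews exist), fall back to the plain average and flag the result so the
# UI can render it in gray with the information icon. Otherwise return the
# weighted sum, to be shown in black.
def display_score(scores_with_weights)
  weighted = scores_with_weights.reject { |_score, weight| weight.zero? }
  if weighted.empty?
    avg = scores_with_weights.sum { |score, _w| score } /
          scores_with_weights.size.to_f
    { score: avg, counts_toward_grade: false }   # gray, with info icon
  else
    { score: weighted.sum { |score, weight| score * weight },
      counts_toward_grade: true }                # black
  end
end
```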
&lt;br /&gt;
Possible files to be modified:&lt;br /&gt;
&lt;br /&gt;
models/on_the_fly_calc.rb&amp;lt;br&amp;gt;&lt;br /&gt;
models/questionnaire.rb&amp;lt;br&amp;gt;&lt;br /&gt;
views/grades/view_team.html.erb&lt;br /&gt;
&lt;br /&gt;
===== Non-zero weighted scores=====&lt;br /&gt;
If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is 80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
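&lt;br /&gt;
The worked example above can be checked in a few lines of plain Ruby (a toy illustration of the rule, not Expertiza code; in the real system the per-category averages come from Answer.compute_scores and the weights from the rubrics):&lt;br /&gt;

```ruby
# Average the reviews within each category first, then take the weighted sum.
# Each entry pairs a category weight with the list of scores received in
# that category (scores and weights on a 0.0..1.0 scale).
def overall_score(categories)
  categories.sum do |weight, scores|
    weight * (scores.sum / scores.size.to_f)
  end
end

# Round 2 reviews (weight 90%): two scores of 80%;
# author feedback (weight 10%): one score of 100%.
overall_score([[0.90, [0.80, 0.80]], [0.10, [1.00]]])
```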
&lt;br /&gt;
The score calculation was initially tested manually to verify scenarios with multiple weights and rounds. &lt;br /&gt;
Also, the existing spec for score calculation with a weighted average, for both varying and non-varying rubrics, covers this scenario.&lt;br /&gt;
The spec is written under spec/models/assignment_spec.rb. The &amp;quot;scores&amp;quot; method of models/assignment.rb is called for on-the-fly score calculation.&lt;br /&gt;
&lt;br /&gt;
The following file contains the spec for weighted-average score calculation.&lt;br /&gt;
spec/models/assignment_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Review Update=====&lt;br /&gt;
If a review is submitted, and then the author(s) update the submission before the end of the current round, the review is reopened and the reviewer can go in and update the submitted review. However, the previous review score that was given in the current round will count until the reviewer updates the submitted review.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;This feature can be broken down into the following sub-cases:&lt;br /&gt;
&amp;lt;br&amp;gt;1. A submission can only be made before the deadline&lt;br /&gt;
&amp;lt;br&amp;gt;2. A reviewer can edit a review only before the deadline&lt;br /&gt;
&amp;lt;br&amp;gt;3. Score calculation should only use the latest submitted review&lt;br /&gt;
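&lt;br /&gt;
The third sub-case can be sketched in plain Ruby (hashes stand in for Response records; the field names is_submitted and version_num follow the ones used on Expertiza responses, but the helper itself is our illustration, not existing code):&lt;br /&gt;

```ruby
# Keep, per reviewer, only the latest submitted version of their review so
# that score calculation never counts a stale response from earlier in the
# round.
def latest_submitted_responses(responses)
  responses.select { |r| r[:is_submitted] }
           .group_by { |r| r[:reviewer_id] }
           .map { |_reviewer, revs| revs.max_by { |r| r[:version_num] } }
end
```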
&lt;br /&gt;
Specs for each sub-case:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
1. Spec to ensure a submission can’t be made after the deadline&lt;br /&gt;
&lt;br /&gt;
File : spec/features/post_deadline_review_submission_spec.rb&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
  it 'should not allow submission after deadline' do&lt;br /&gt;
    user = User.find_by(name: &amp;quot;student2065&amp;quot;)&lt;br /&gt;
    stub_current_user(user, user.role.name, user.role)&lt;br /&gt;
    # goto student_task page, which has link to &amp;quot;Your work&amp;quot;&lt;br /&gt;
    visit 'student_task/view?id=1'&lt;br /&gt;
    # the page will have text &amp;quot;Your work&amp;quot; but will be grayed&lt;br /&gt;
    expect(page).to have_content &amp;quot;Your work&amp;quot;&lt;br /&gt;
    # the page will not have link to content &amp;quot;Your work&amp;quot;&lt;br /&gt;
    expect{click_link &amp;quot;Your work&amp;quot;}.to raise_error(Capybara::ElementNotFound)&lt;br /&gt;
  end&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
2. Spec to ensure a review can be submitted only before the deadline&lt;br /&gt;
&lt;br /&gt;
File : spec/features/post_deadline_review_submission_spec.rb&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
  it &amp;quot;should not be able to review work after deadline&amp;quot; do&lt;br /&gt;
    # The spec is written to reproduce following bug. &amp;quot;Others' work&amp;quot; link open after deadline passed&lt;br /&gt;
    user = User.find_by(name: &amp;quot;student2065&amp;quot;)&lt;br /&gt;
    stub_current_user(user, user.role.name, user.role)&lt;br /&gt;
    visit 'student_task/view?id=1'&lt;br /&gt;
    # the page should have content, but after deadline passes it is displayed as gray&lt;br /&gt;
    # but there should not be any link attached to it&lt;br /&gt;
    expect(page).to have_content &amp;quot;Others' work&amp;quot;&lt;br /&gt;
    # this is the bug, even after deadline has passed, the link is still present&lt;br /&gt;
    # the ui comment in file views/student_task/view.html.erb says&lt;br /&gt;
    # &amp;lt;!--Akshay: Fix Issue 1218 - this link is disabled if assignment does not require any peer reviews--&amp;gt;&lt;br /&gt;
    # But the link seems to be open even after deadline passed.&lt;br /&gt;
    # Screenshot attached as part of wiki for E1975, Fall 2019&lt;br /&gt;
    expect(page).to have_link(&amp;quot;Others' work&amp;quot;, href: &amp;quot;/student_review/list?id=1&amp;quot;)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
The test scenario is able to reproduce the following bug: at times the review link is open even after the deadline has passed. The bug is also shown in the following screenshot for Program 1.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenShotReview.jpg]]&lt;br /&gt;
&lt;br /&gt;
3. Score calculation&lt;br /&gt;
&lt;br /&gt;
Score calculation for assignments under review is part of models/assignment.rb. It is handled by the &amp;quot;scores&amp;quot; method in this file, and it is tested for both multipart varying and non-varying rubrics.&lt;br /&gt;
The respective spec is written in file spec/models/assignment_spec.rb. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
describe '#scores' do&lt;br /&gt;
    context 'when assignment is varying rubric by round assignment' do&lt;br /&gt;
      it 'calculates scores in each round of each team in current assignment' do&lt;br /&gt;
        allow(participant).to receive(:scores).with(review1: [question]).and_return(98)&lt;br /&gt;
        allow(assignment).to receive(:varying_rubrics_by_round?).and_return(true)&lt;br /&gt;
        allow(assignment).to receive(:num_review_rounds).and_return(1)&lt;br /&gt;
        allow(ReviewResponseMap).to receive(:get_responses_for_team_round).with(team, 1).and_return([response])&lt;br /&gt;
        allow(Answer).to receive(:compute_scores).with([response], [question]).and_return(max: 95, min: 88, avg: 90)&lt;br /&gt;
        expect(assignment.scores(review1: [question]).inspect).to eq(&amp;quot;{:participants=&amp;gt;{:\&amp;quot;1\&amp;quot;=&amp;gt;98}, :teams=&amp;gt;{:\&amp;quot;0\&amp;quot;=&amp;gt;{:team=&amp;gt;#&amp;lt;AssignmentTeam id: 1, &amp;quot;\&lt;br /&gt;
          &amp;quot;name: \&amp;quot;no team\&amp;quot;, parent_id: 1, type: \&amp;quot;AssignmentTeam\&amp;quot;, comments_for_advertisement: nil, advertise_for_partner: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;submitted_hyperlinks: \&amp;quot;---\\n- https://www.expertiza.ncsu.edu\&amp;quot;, directory_num: 0, grade_for_submission: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;comment_for_submission: nil&amp;gt;, :scores=&amp;gt;{:max=&amp;gt;95, :min=&amp;gt;88, :avg=&amp;gt;90.0}}}}&amp;quot;)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    context 'when assignment is not varying rubric by round assignment' do&lt;br /&gt;
      it 'calculates scores of each team in current assignment' do&lt;br /&gt;
        allow(participant).to receive(:scores).with(review: [question]).and_return(98)&lt;br /&gt;
        allow(assignment).to receive(:varying_rubrics_by_round?).and_return(false)&lt;br /&gt;
        allow(ReviewResponseMap).to receive(:get_assessments_for).with(team).and_return([response])&lt;br /&gt;
        allow(Answer).to receive(:compute_scores).with([response], [question]).and_return(max: 95, min: 88, avg: 90)&lt;br /&gt;
        expect(assignment.scores(review: [question]).inspect).to eq(&amp;quot;{:participants=&amp;gt;{:\&amp;quot;1\&amp;quot;=&amp;gt;98}, :teams=&amp;gt;{:\&amp;quot;0\&amp;quot;=&amp;gt;{:team=&amp;gt;#&amp;lt;AssignmentTeam id: 1, &amp;quot;\&lt;br /&gt;
          &amp;quot;name: \&amp;quot;no team\&amp;quot;, parent_id: 1, type: \&amp;quot;AssignmentTeam\&amp;quot;, comments_for_advertisement: nil, advertise_for_partner: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;submitted_hyperlinks: \&amp;quot;---\\n- https://www.expertiza.ncsu.edu\&amp;quot;, directory_num: 0, grade_for_submission: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;comment_for_submission: nil&amp;gt;, :scores=&amp;gt;{:max=&amp;gt;95, :min=&amp;gt;88, :avg=&amp;gt;90}}}}&amp;quot;)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Usecasenew.png]]&lt;br /&gt;
&lt;br /&gt;
=='''Test Plan'''==&lt;br /&gt;
&lt;br /&gt;
=====Rspec=====&lt;br /&gt;
&lt;br /&gt;
New test cases will be added to test and validate the functionalities that are not working as expected. Test cases will also be written for features newly added to the system. &lt;br /&gt;
Test cases will be added to these files:&lt;br /&gt;
&lt;br /&gt;
spec/features/post_deadline_review_submission_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
spec/models/assignment_participant_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The scenarios covered by the newly introduced test cases are as follows: &amp;lt;br&amp;gt;&lt;br /&gt;
1. Zero-weighted scores with gray mark: rspec spec/models/assignment_participant_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
2. Non-zero weighted scores&amp;lt;br&amp;gt;&lt;br /&gt;
3. Review update: rspec spec/features/post_deadline_review_submission_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Manual Testing=====&lt;br /&gt;
&lt;br /&gt;
'''Step 1'''. The instructor will create an assignment and assign the zero and non-zero weights to the rubrics.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 2'''. The author submits the work done in expertiza.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 3'''. Peers check the submitted work and give reviews for it.&amp;lt;br&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
'''Test 1: Zero weighted scores shown in gray.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
Zero-weighted scores should be displayed in gray to the logged-in user.&lt;br /&gt;
 &lt;br /&gt;
'''Test 2: Information button.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
The information button should explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”)&lt;br /&gt;
 &lt;br /&gt;
'''Step 4'''. The mentor evaluates the final submission and gives grades.&lt;br /&gt;
 &lt;br /&gt;
'''Test 3: Display non-zero weighted scores'''&amp;lt;br&amp;gt;&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
The score shown for the assignment should be the weighted average of the scores.&lt;br /&gt;
The zero-weighted scores have to be replaced with the non-zero weighted scores.&lt;br /&gt;
 &lt;br /&gt;
'''Step 5''': The author updates the work and resubmits it.&lt;br /&gt;
 &lt;br /&gt;
'''Test 4: Review Update'''&amp;lt;br&amp;gt;&lt;br /&gt;
The review should be reopened, and the reviewer should be able to go in and update the submitted review.&lt;br /&gt;
If the review is not updated, the previous review score has to be retained.&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131338</id>
		<title>CSC/ECE 517 Fall 2019 - E1975. Generalize Review Versioning</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131338"/>
		<updated>2019-12-17T11:08:06Z</updated>

		<summary type="html">&lt;p&gt;Atrived: /* Interface Requirement */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== '''Introduction''' ==&lt;br /&gt;
Expertiza is a peer-review based learning platform in which work-products by students are evaluated and scored by peer-review of other students. Review process occurs in stages. After submission of initial feedback for a work-product by the peer-group, students are allowed time to incorporate the comments and improve their work. This second version of the work product is reviewed again and scores are given. This process might be repeated again and the average score from the last review stage is considered as the final score for that work-product. This incremental development of work-products and progressive learning is the fundamental concept underlying the Expertiza system.&lt;br /&gt;
&lt;br /&gt;
=====Purpose=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove scores of reviews that are not redone. The system also triggers an email to the reviewer when there is a change in the submission. The scores are calculated on the fly based on the rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Scope=====&lt;br /&gt;
This project will enhance the existing scoring and peer-review functionality and its underlying implementation, along with testing of the existing implementation. Testing of the score calculation and of the display of weighted and non-weighted scores based on combinations of rubrics is part of this project. The project also adds a feature for the author to notify the reviewers of a change in the submission.&lt;br /&gt;
&lt;br /&gt;
=====Background=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove scores of reviews that are not redone. The scores are calculated based on the rubrics defined in the system. There is already an existing implementation for the computation of scores, but it doesn’t work for multipart rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Standards=====&lt;br /&gt;
The standards adhered to in this system are those of the Rails framework, and the code should follow Rails and object-oriented design principles. Any newly added code will adhere to the guidelines below:&lt;br /&gt;
&lt;br /&gt;
● https://github.com/rubocop-hq/rails-style-guide &amp;lt;br&amp;gt;&lt;br /&gt;
● http://wiki.expertiza.ncsu.edu/index.php/Object-Oriented_Design_and_Programming&lt;br /&gt;
&lt;br /&gt;
== '''Problem Statement''' ==&lt;br /&gt;
=====Problem=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove scores of reviews that are not redone. This functionality needs to be tested: it works in some cases, but we suspect it is not entirely correct.&lt;br /&gt;
&lt;br /&gt;
'''Profile Email Preference''':&lt;br /&gt;
Users in Expertiza are free to control what kinds of emails they receive. Reviewers receive an email when a submission they are assigned to review changes; this can be controlled by selecting an option (When someone else submits work I am assigned to review) on the user’s profile page. Because the checkboxes sit far away in the right corner of the screen, their position needs to be changed to make the page more user friendly. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Generalize_review.jpg]]&lt;br /&gt;
&lt;br /&gt;
Scores for students in Expertiza can be based on a combination of rubrics: review rubrics for each round of review, author feedback, and teammate review. In CSC/ECE 517, we don’t really use the student-assigned scores to grade projects, but the peer-review score shown to the student is based entirely on the second round of review.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is how scores are calculated:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1. If only zero-weighted scores are available (here, in the first round of review), the average of all zero-weighted scores should be shown in grey font. An information button should explain why the scores are shown in grey (“The reviews submitted so far do not count toward the author’s final grade.”):&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is:&amp;lt;br&amp;gt;&lt;br /&gt;
    80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
&lt;br /&gt;
3. If a review is submitted, and then the author(s) update the submission before the end of the current round, it will reopen the review, and then the reviewer can go in and update the submitted review. '''However, the previous review score that was given in the current round will count until the reviewer updates the submitted review.''' '''This functionality doesn’t work for multipart rubrics.''' &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The system also doesn’t generate emails to reviewers when the author makes a new submission without changing submission parameters such as the submission URL. For example, a new submission is made by creating a new deployment, but the deployment URL doesn’t change. In this case too, without changing the current flow, the author should have an option to notify all the reviewers to revisit the submission.&lt;br /&gt;
&lt;br /&gt;
=====Interface Requirement=====&lt;br /&gt;
The overall user interface will remain the same as in the current system. Only the checkboxes for email preferences on the user profile will be moved closer to the text so that they are more visible.&lt;br /&gt;
&lt;br /&gt;
=====Quality Requirements=====&lt;br /&gt;
The interface changes are minimal and easy to adapt to. The new button will be clearly visible and will carry clear text describing its function. The proposed changes should not affect current system functionality in any way; the solution is only intended to enrich the project’s features.&lt;br /&gt;
&lt;br /&gt;
=====Performance Requirements=====&lt;br /&gt;
Adding tests for the multi-part rubric functionality should not add any performance overhead to the system. Sending emails through the “Notify All Reviewers” button should affect performance only minimally: the only overhead is a database query to fetch all the reviewers, which a well-designed database handles smoothly.&lt;br /&gt;
&lt;br /&gt;
=====Assumptions=====&lt;br /&gt;
The functionality should also work when the assignment rubric is not multi-part.&lt;br /&gt;
&lt;br /&gt;
== '''Proposed Solution''' ==&lt;br /&gt;
=====Zero-weighted scores=====&lt;br /&gt;
If only zero-weighted scores are available in the first round of review, the average of all zero-weighted scores will be shown in a gray font. An information button will explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”)&lt;br /&gt;
&lt;br /&gt;
Inside the compute_total_score method of the on_the_fly_calc.rb module, we’ll check whether a weighted score exists; if not, we’ll calculate the non-weighted score. We’ll also have to add a method to the questionnaire model to fetch the non-weighted score. A few UI changes are required to display the non-weighted score in gray with an information icon and the weighted score in black. &lt;br /&gt;
&lt;br /&gt;
Possible files to be modified:&lt;br /&gt;
&lt;br /&gt;
models/on_the_fly_calc.rb&amp;lt;br&amp;gt;&lt;br /&gt;
models/questionnaire.rb&amp;lt;br&amp;gt;&lt;br /&gt;
views/grades/view_team.html.erb&lt;br /&gt;
&lt;br /&gt;
===== Non-zero weighted scores=====&lt;br /&gt;
If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is 80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
&lt;br /&gt;
The score calculation was initially tested manually to verify scenarios with multiple weights and rounds. &lt;br /&gt;
Also, the existing spec for score calculation with a weighted average, for both varying and non-varying rubrics, covers this scenario.&lt;br /&gt;
The spec is written under spec/models/assignment_spec.rb. The &amp;quot;scores&amp;quot; method of models/assignment.rb is called for on-the-fly score calculation.&lt;br /&gt;
&lt;br /&gt;
The following file contains the spec for weighted-average score calculation.&lt;br /&gt;
spec/models/assignment_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Review Update=====&lt;br /&gt;
If a review is submitted, and then the author(s) update the submission before the end of the current round, the review is reopened and the reviewer can go in and update the submitted review. However, the previous review score that was given in the current round will count until the reviewer updates the submitted review.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;This feature can be broken down into the following sub-cases:&lt;br /&gt;
&amp;lt;br&amp;gt;1. A submission can only be made before the deadline&lt;br /&gt;
&amp;lt;br&amp;gt;2. A reviewer can edit a review only before the deadline&lt;br /&gt;
&amp;lt;br&amp;gt;3. Score calculation should only use the latest submitted review&lt;br /&gt;
&lt;br /&gt;
Specs for each sub-case:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
1. Spec to ensure a submission can’t be made after the deadline&lt;br /&gt;
&lt;br /&gt;
File : spec/features/post_deadline_review_submission_spec.rb&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
  it 'should not allow submission after deadline' do&lt;br /&gt;
    user = User.find_by(name: &amp;quot;student2065&amp;quot;)&lt;br /&gt;
    stub_current_user(user, user.role.name, user.role)&lt;br /&gt;
    # goto student_task page, which has link to &amp;quot;Your work&amp;quot;&lt;br /&gt;
    visit 'student_task/view?id=1'&lt;br /&gt;
    # the page will have text &amp;quot;Your work&amp;quot; but will be grayed&lt;br /&gt;
    expect(page).to have_content &amp;quot;Your work&amp;quot;&lt;br /&gt;
    # the page will not have link to content &amp;quot;Your work&amp;quot;&lt;br /&gt;
    expect{click_link &amp;quot;Your work&amp;quot;}.to raise_error(Capybara::ElementNotFound)&lt;br /&gt;
  end&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
2. Spec to ensure a review can be submitted only before the deadline&lt;br /&gt;
&lt;br /&gt;
File : spec/features/post_deadline_review_submission_spec.rb&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
  it &amp;quot;should not be able to review work after deadline&amp;quot; do&lt;br /&gt;
    # The spec is written to reproduce following bug. &amp;quot;Others' work&amp;quot; link open after deadline passed&lt;br /&gt;
    user = User.find_by(name: &amp;quot;student2065&amp;quot;)&lt;br /&gt;
    stub_current_user(user, user.role.name, user.role)&lt;br /&gt;
    visit 'student_task/view?id=1'&lt;br /&gt;
    # the page should have content, but after deadline passes it is displayed as gray&lt;br /&gt;
    # but there should not be any link attached to it&lt;br /&gt;
    expect(page).to have_content &amp;quot;Others' work&amp;quot;&lt;br /&gt;
    # this is the bug, even after deadline has passed, the link is still present&lt;br /&gt;
    # the ui comment in file views/student_task/view.html.erb says&lt;br /&gt;
    # &amp;lt;!--Akshay: Fix Issue 1218 - this link is disabled if assignment does not require any peer reviews--&amp;gt;&lt;br /&gt;
    # But the link seems to be open even after deadline passed.&lt;br /&gt;
    # Screenshot attached as part of wiki for E1975, Fall 2019&lt;br /&gt;
    expect(page).to have_link(&amp;quot;Others' work&amp;quot;, href: &amp;quot;/student_review/list?id=1&amp;quot;)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
The test scenario is able to reproduce the following bug: at times the review link is open even after the deadline has passed. The bug is also shown in the following screenshot for Program 1.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenShotReview.jpg]]&lt;br /&gt;
&lt;br /&gt;
3. Score calculation&lt;br /&gt;
&lt;br /&gt;
Score calculation for assignments under review is part of models/assignment.rb. It is handled by the &amp;quot;scores&amp;quot; method in this file, and it is tested for both multipart varying and non-varying rubrics.&lt;br /&gt;
The respective spec is written in file spec/models/assignment_spec.rb. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
describe '#scores' do&lt;br /&gt;
    context 'when assignment is varying rubric by round assignment' do&lt;br /&gt;
      it 'calculates scores in each round of each team in current assignment' do&lt;br /&gt;
        allow(participant).to receive(:scores).with(review1: [question]).and_return(98)&lt;br /&gt;
        allow(assignment).to receive(:varying_rubrics_by_round?).and_return(true)&lt;br /&gt;
        allow(assignment).to receive(:num_review_rounds).and_return(1)&lt;br /&gt;
        allow(ReviewResponseMap).to receive(:get_responses_for_team_round).with(team, 1).and_return([response])&lt;br /&gt;
        allow(Answer).to receive(:compute_scores).with([response], [question]).and_return(max: 95, min: 88, avg: 90)&lt;br /&gt;
        expect(assignment.scores(review1: [question]).inspect).to eq(&amp;quot;{:participants=&amp;gt;{:\&amp;quot;1\&amp;quot;=&amp;gt;98}, :teams=&amp;gt;{:\&amp;quot;0\&amp;quot;=&amp;gt;{:team=&amp;gt;#&amp;lt;AssignmentTeam id: 1, &amp;quot;\&lt;br /&gt;
          &amp;quot;name: \&amp;quot;no team\&amp;quot;, parent_id: 1, type: \&amp;quot;AssignmentTeam\&amp;quot;, comments_for_advertisement: nil, advertise_for_partner: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;submitted_hyperlinks: \&amp;quot;---\\n- https://www.expertiza.ncsu.edu\&amp;quot;, directory_num: 0, grade_for_submission: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;comment_for_submission: nil&amp;gt;, :scores=&amp;gt;{:max=&amp;gt;95, :min=&amp;gt;88, :avg=&amp;gt;90.0}}}}&amp;quot;)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    context 'when assignment is not varying rubric by round assignment' do&lt;br /&gt;
      it 'calculates scores of each team in current assignment' do&lt;br /&gt;
        allow(participant).to receive(:scores).with(review: [question]).and_return(98)&lt;br /&gt;
        allow(assignment).to receive(:varying_rubrics_by_round?).and_return(false)&lt;br /&gt;
        allow(ReviewResponseMap).to receive(:get_assessments_for).with(team).and_return([response])&lt;br /&gt;
        allow(Answer).to receive(:compute_scores).with([response], [question]).and_return(max: 95, min: 88, avg: 90)&lt;br /&gt;
        expect(assignment.scores(review: [question]).inspect).to eq(&amp;quot;{:participants=&amp;gt;{:\&amp;quot;1\&amp;quot;=&amp;gt;98}, :teams=&amp;gt;{:\&amp;quot;0\&amp;quot;=&amp;gt;{:team=&amp;gt;#&amp;lt;AssignmentTeam id: 1, &amp;quot;\&lt;br /&gt;
          &amp;quot;name: \&amp;quot;no team\&amp;quot;, parent_id: 1, type: \&amp;quot;AssignmentTeam\&amp;quot;, comments_for_advertisement: nil, advertise_for_partner: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;submitted_hyperlinks: \&amp;quot;---\\n- https://www.expertiza.ncsu.edu\&amp;quot;, directory_num: 0, grade_for_submission: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;comment_for_submission: nil&amp;gt;, :scores=&amp;gt;{:max=&amp;gt;95, :min=&amp;gt;88, :avg=&amp;gt;90}}}}&amp;quot;)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Usecasenew.png]]&lt;br /&gt;
&lt;br /&gt;
=='''Test Plan'''==&lt;br /&gt;
&lt;br /&gt;
=====Rspec=====&lt;br /&gt;
&lt;br /&gt;
New test cases will be added to test and validate the functionalities that are not working as expected. Test cases will also be written for features newly added to the system. &lt;br /&gt;
Test cases will be added to these files:&lt;br /&gt;
&lt;br /&gt;
spec/features/post_deadline_review_submission_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
spec/models/assignment_participant_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The scenarios covered by the newly introduced test cases are as follows: &amp;lt;br&amp;gt;&lt;br /&gt;
1. Zero-weighted scores with gray mark: rspec spec/models/assignment_participant_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
2. Non-zero weighted scores&amp;lt;br&amp;gt;&lt;br /&gt;
3. Review update: rspec spec/features/post_deadline_review_submission_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Manual Testing=====&lt;br /&gt;
&lt;br /&gt;
'''Step 1'''. The instructor creates an assignment and assigns zero and non-zero weights to the rubrics.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 2'''. The author submits their work in Expertiza.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 3'''. Peers review the submitted work and provide feedback.&amp;lt;br&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
'''Test 1: Zero weighted scores shown in gray.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
Zero-weighted scores should be displayed in gray to the logged-in user.&lt;br /&gt;
 &lt;br /&gt;
'''Test 2: Information button.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
The information button should explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”).&lt;br /&gt;
 &lt;br /&gt;
'''Step 4'''. The mentor evaluates the final submission and gives grades.&lt;br /&gt;
 &lt;br /&gt;
'''Test 3: Display non-zero weighted scores'''&amp;lt;br&amp;gt;&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
The scores shown for the assignment should be the weighted average of the scores.&lt;br /&gt;
The zero-weighted scores should be replaced by the non-zero-weighted scores.&lt;br /&gt;
 &lt;br /&gt;
'''Step 5'''. The author updates the work and resubmits it.&lt;br /&gt;
 &lt;br /&gt;
'''Test 4: Review Update'''&amp;lt;br&amp;gt;&lt;br /&gt;
The review should be reopened, and the reviewer should be able to update the submitted review.&lt;br /&gt;
If the review is not updated, the previous review score has to be retained.&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131337</id>
		<title>CSC/ECE 517 Fall 2019 - E1975. Generalize Review Versioning</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131337"/>
		<updated>2019-12-17T11:05:37Z</updated>

		<summary type="html">&lt;p&gt;Atrived: /* Interface Requirement */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== '''Introduction''' ==&lt;br /&gt;
Expertiza is a peer-review-based learning platform in which students' work products are evaluated and scored by their peers. The review process occurs in stages. After the peer group submits initial feedback on a work product, students are given time to incorporate the comments and improve their work. This second version of the work product is then reviewed and scored again. The process may be repeated, and the average score from the last review stage is taken as the final score for that work product. This incremental development of work products and progressive learning is the fundamental concept underlying the Expertiza system.&lt;br /&gt;
&lt;br /&gt;
=====Purpose=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove the scores of reviews that are not redone. The system also emails a reviewer when there is a change in the submission. Scores are calculated on the fly based on the rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Scope=====&lt;br /&gt;
This project will enhance the existing scoring and peer-review functionality and its underlying implementation, along with testing the existing implementation. Testing the score calculation and the display of weighted and non-weighted scores, based on combinations of rubrics, is part of this project. The project also adds a feature that lets the author notify reviewers of a change in the submission.&lt;br /&gt;
&lt;br /&gt;
=====Background=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove the scores of reviews that are not redone. Scores are calculated based on the rubrics defined in the system. There is already an existing implementation of the score computation, but it does not work for multipart rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Standards=====&lt;br /&gt;
The code will adhere to the standards of the Rails framework and follow Rails and object-oriented design principles. Any newly added code will adhere to the guidelines below:&lt;br /&gt;
&lt;br /&gt;
● https://github.com/rubocop-hq/rails-style-guide &amp;lt;br&amp;gt;&lt;br /&gt;
● http://wiki.expertiza.ncsu.edu/index.php/Object-Oriented_Design_and_Programming&lt;br /&gt;
&lt;br /&gt;
== '''Problem Statement''' ==&lt;br /&gt;
=====Problem=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove the scores of reviews that are not redone. This functionality needs to be tested: it works in some cases, but we suspect it is not entirely correct.&lt;br /&gt;
&lt;br /&gt;
'''Profile Email Preference''':&lt;br /&gt;
Users in Expertiza can control what kinds of email they receive. Since reviewers will receive an email when a submission they are assigned to review changes, this can be controlled by selecting an option (When someone else submits work I am assigned to review) on the user's profile page. Because the checkboxes sit far off in the right corner of the screen, their position needs to be changed to make the page more user-friendly.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Generalize_review.jpg]]&lt;br /&gt;
&lt;br /&gt;
Scores for students in Expertiza can be based on a combination of rubrics: review rubrics for each round of review, author feedback, and teammate review. In CSC/ECE 517, we don’t really use the student-assigned scores to grade projects, but the peer-review score shown to the student is based entirely on the second round of review.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is how scores are calculated:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1. If only zero-weighted scores are available (here, in the first round of review), the average of all zero-weighted scores should be shown in gray font. An information button should explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”).&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is:&amp;lt;br&amp;gt;&lt;br /&gt;
    80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
&lt;br /&gt;
3. If a review is submitted, and the author(s) then update the submission before the end of the current round, the review is reopened and the reviewer can go in and update the submitted review. '''However, the previous review score given in the current round will count until the reviewer updates the submitted review. This functionality doesn’t work for multipart rubrics.''' &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The system also doesn’t email reviewers when the author makes a new submission without changing submission parameters such as the submission URL, e.g. a new deployment to the same URL. In this case as well, without changing the current flow, the author should have an option to notify all the reviewers to revisit the submission.&lt;br /&gt;
&lt;br /&gt;
=====Interface Requirement=====&lt;br /&gt;
The overall user interface will remain the same as in the current system.&lt;br /&gt;
&lt;br /&gt;
=====Quality Requirements=====&lt;br /&gt;
The interface changes are minimal and easy to adopt. The new button will be clearly visible and will carry clear text explaining its function. The proposed changes should not affect current system functionality in any way; the solution is intended only to enrich the project's features.&lt;br /&gt;
&lt;br /&gt;
=====Performance Requirements=====&lt;br /&gt;
Adding tests for the multi-part rubric functionality should not add any performance overhead. Sending emails through the “Notify All Reviewers” button should affect performance minimally: the only overhead is a database query to fetch all the reviewers, which a good database design should handle smoothly.&lt;br /&gt;
&lt;br /&gt;
=====Assumptions=====&lt;br /&gt;
The functionality should also work when the assignment rubric is not multi-part.&lt;br /&gt;
&lt;br /&gt;
== '''Proposed Solution''' ==&lt;br /&gt;
=====Zero-weighted scores=====&lt;br /&gt;
If only zero-weighted scores are available in the first round of review, the average of all zero-weighted scores will be shown in a gray font. An information button will explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”).&lt;br /&gt;
&lt;br /&gt;
Inside the compute_total_score method of the on_the_fly_calc.rb module, we'll check whether a weighted score exists; if not, we'll calculate the non-weighted score. We'll also have to add a method to the questionnaire model to fetch the non-weighted score. A few UI changes are required to display the non-weighted score in gray with an information icon and the weighted score in black.&lt;br /&gt;
&lt;br /&gt;
Possible files to be modified:&lt;br /&gt;
&lt;br /&gt;
models/on_the_fly_calc.rb&amp;lt;br&amp;gt;&lt;br /&gt;
models/questionnaire.rb&amp;lt;br&amp;gt;&lt;br /&gt;
views/grades/view_team.html.erb&lt;br /&gt;
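The display rule described above can be made concrete with a minimal, hedged sketch in plain Ruby (this is illustrative only, not the actual on_the_fly_calc.rb code; the hash shapes are hypothetical): if every available score carries zero weight, show their plain average in gray; otherwise show the weighted average in black.&lt;br /&gt;

```ruby
# Hedged sketch, not Expertiza's real code: the score-display rule above.
# Each score is a hash with :score (a percentage) and :weight; these
# shapes are hypothetical, chosen only to illustrate the rule.
def display_score(scores)
  weighted = scores.reject { |s| s[:weight].zero? }
  if weighted.empty?
    # Only zero-weighted scores exist: plain average, shown in gray.
    avg = scores.sum { |s| s[:score] }.to_f / scores.size
    { value: avg, gray: true }
  else
    # Non-zero weights exist: weighted average, shown in black.
    total_weight = weighted.sum { |s| s[:weight] }
    avg = weighted.sum { |s| s[:score] * s[:weight] }.to_f / total_weight
    { value: avg, gray: false }
  end
end

# First round, only zero-weighted reviews: average shown in gray.
display_score([{ score: 80, weight: 0 }, { score: 90, weight: 0 }])
# Later round, weighted scores take over (the 82% example from the text).
display_score([{ score: 80, weight: 90 }, { score: 100, weight: 10 }])
```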
&lt;br /&gt;
===== Non-zero weighted scores=====&lt;br /&gt;
If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is 80%⨉90% + 100%⨉10% = 82%.&lt;br /&gt;
&lt;br /&gt;
The score calculation was initially tested manually to verify scenarios with multiple weights and rounds.&lt;br /&gt;
An existing spec for score calculation with a weighted average, for both varying and non-varying rubrics, also covers this scenario.&lt;br /&gt;
The spec is written under spec/models/assignment_spec.rb. The &amp;quot;scores&amp;quot; method of models/assignment.rb is called for online (on-the-fly) score calculation.&lt;br /&gt;
&lt;br /&gt;
The following file has the spec for weighted-average score calculation:&lt;br /&gt;
spec/models/assignment_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Review Update=====&lt;br /&gt;
If a review is submitted, and the author(s) then update the submission before the end of the current round, the review is reopened and the reviewer can go in and update the submitted review. However, the previous review score given in the current round will count until the reviewer updates the submitted review.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;This feature can be broken down into the following sub-cases:&lt;br /&gt;
&amp;lt;br&amp;gt;1. A submission can only be made before the deadline&lt;br /&gt;
&amp;lt;br&amp;gt;2. A reviewer can edit a review only before the deadline&lt;br /&gt;
&amp;lt;br&amp;gt;3. Score calculation should only be done for the latest submitted review&lt;br /&gt;
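Sub-case 3 can be illustrated with a small, hedged Ruby sketch (hypothetical hash shapes; in Expertiza the records would be Response objects carrying a version number): only the latest review from each reviewer should feed into score calculation.&lt;br /&gt;

```ruby
# Hedged sketch, not Expertiza's real code: keep only the latest submitted
# review per reviewer. :reviewer_id and :version are hypothetical fields.
def latest_reviews(reviews)
  reviews.group_by { |r| r[:reviewer_id] }
         .map { |_, revs| revs.max_by { |r| r[:version] } }
end

reviews = [
  { reviewer_id: 1, version: 1, score: 70 },
  { reviewer_id: 1, version: 2, score: 85 },  # resubmitted review replaces v1
  { reviewer_id: 2, version: 1, score: 90 }
]
latest_reviews(reviews).map { |r| r[:score] }
# only the latest scores (85 and 90) would count toward the average
```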
&lt;br /&gt;
Specs for these sub-cases:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
1. Spec to ensure submission can't be made after the deadline&lt;br /&gt;
&lt;br /&gt;
File : spec/features/post_deadline_review_submission_spec.rb&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
  it 'should not allow submission after deadline' do&lt;br /&gt;
    user = User.find_by(name: &amp;quot;student2065&amp;quot;)&lt;br /&gt;
    stub_current_user(user, user.role.name, user.role)&lt;br /&gt;
    # goto student_task page, which has link to &amp;quot;Your work&amp;quot;&lt;br /&gt;
    visit 'student_task/view?id=1'&lt;br /&gt;
    # the page will have text &amp;quot;Your work&amp;quot; but will be grayed&lt;br /&gt;
    expect(page).to have_content &amp;quot;Your work&amp;quot;&lt;br /&gt;
    # the page will not have link to content &amp;quot;Your work&amp;quot;&lt;br /&gt;
    expect{click_link &amp;quot;Your work&amp;quot;}.to raise_error(Capybara::ElementNotFound)&lt;br /&gt;
  end&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
2. Spec to ensure review can be done only before deadline&lt;br /&gt;
&lt;br /&gt;
File : spec/features/post_deadline_review_submission_spec.rb&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
  it &amp;quot;should not be able to review work after deadline&amp;quot; do&lt;br /&gt;
    # The spec is written to reproduce following bug. &amp;quot;Others' work&amp;quot; link open after deadline passed&lt;br /&gt;
    user = User.find_by(name: &amp;quot;student2065&amp;quot;)&lt;br /&gt;
    stub_current_user(user, user.role.name, user.role)&lt;br /&gt;
    visit 'student_task/view?id=1'&lt;br /&gt;
    # the page should have content, but after deadline passes it is displayed as gray&lt;br /&gt;
    # but there should not be any link attached to it&lt;br /&gt;
    expect(page).to have_content &amp;quot;Others' work&amp;quot;&lt;br /&gt;
    # this is the bug, even after deadline has passed, the link is still present&lt;br /&gt;
    # the ui comment in file views/student_task/view.html.erb says&lt;br /&gt;
    # &amp;lt;!--Akshay: Fix Issue 1218 - this link is disabled if assignment does not require any peer reviews--&amp;gt;&lt;br /&gt;
    # But the link seems to be open even after deadline passed.&lt;br /&gt;
    # Screenshot attached as part of wiki for E1975, Fall 2019&lt;br /&gt;
    expect(page).to have_link(&amp;quot;Others' work&amp;quot;, href: &amp;quot;/student_review/list?id=1&amp;quot;)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
The test scenario is able to reproduce the following bug: at times the review link remains open even after the deadline has passed. The bug is also shown in the following screenshot for Program 1.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenShotReview.jpg]]&lt;br /&gt;
&lt;br /&gt;
3. Score calculation&lt;br /&gt;
&lt;br /&gt;
Score calculation for review assignments is part of models/assignment.rb. It is handled by the &amp;quot;scores&amp;quot; method in this file, and testing is done for both multipart varying and non-varying rubrics.&lt;br /&gt;
The respective spec is written in file spec/models/assignment_spec.rb. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
describe '#scores' do&lt;br /&gt;
    context 'when assignment is varying rubric by round assignment' do&lt;br /&gt;
      it 'calculates scores in each round of each team in current assignment' do&lt;br /&gt;
        allow(participant).to receive(:scores).with(review1: [question]).and_return(98)&lt;br /&gt;
        allow(assignment).to receive(:varying_rubrics_by_round?).and_return(true)&lt;br /&gt;
        allow(assignment).to receive(:num_review_rounds).and_return(1)&lt;br /&gt;
        allow(ReviewResponseMap).to receive(:get_responses_for_team_round).with(team, 1).and_return([response])&lt;br /&gt;
        allow(Answer).to receive(:compute_scores).with([response], [question]).and_return(max: 95, min: 88, avg: 90)&lt;br /&gt;
        expect(assignment.scores(review1: [question]).inspect).to eq(&amp;quot;{:participants=&amp;gt;{:\&amp;quot;1\&amp;quot;=&amp;gt;98}, :teams=&amp;gt;{:\&amp;quot;0\&amp;quot;=&amp;gt;{:team=&amp;gt;#&amp;lt;AssignmentTeam id: 1, &amp;quot;\&lt;br /&gt;
          &amp;quot;name: \&amp;quot;no team\&amp;quot;, parent_id: 1, type: \&amp;quot;AssignmentTeam\&amp;quot;, comments_for_advertisement: nil, advertise_for_partner: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;submitted_hyperlinks: \&amp;quot;---\\n- https://www.expertiza.ncsu.edu\&amp;quot;, directory_num: 0, grade_for_submission: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;comment_for_submission: nil&amp;gt;, :scores=&amp;gt;{:max=&amp;gt;95, :min=&amp;gt;88, :avg=&amp;gt;90.0}}}}&amp;quot;)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    context 'when assignment is not varying rubric by round assignment' do&lt;br /&gt;
      it 'calculates scores of each team in current assignment' do&lt;br /&gt;
        allow(participant).to receive(:scores).with(review: [question]).and_return(98)&lt;br /&gt;
        allow(assignment).to receive(:varying_rubrics_by_round?).and_return(false)&lt;br /&gt;
        allow(ReviewResponseMap).to receive(:get_assessments_for).with(team).and_return([response])&lt;br /&gt;
        allow(Answer).to receive(:compute_scores).with([response], [question]).and_return(max: 95, min: 88, avg: 90)&lt;br /&gt;
        expect(assignment.scores(review: [question]).inspect).to eq(&amp;quot;{:participants=&amp;gt;{:\&amp;quot;1\&amp;quot;=&amp;gt;98}, :teams=&amp;gt;{:\&amp;quot;0\&amp;quot;=&amp;gt;{:team=&amp;gt;#&amp;lt;AssignmentTeam id: 1, &amp;quot;\&lt;br /&gt;
          &amp;quot;name: \&amp;quot;no team\&amp;quot;, parent_id: 1, type: \&amp;quot;AssignmentTeam\&amp;quot;, comments_for_advertisement: nil, advertise_for_partner: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;submitted_hyperlinks: \&amp;quot;---\\n- https://www.expertiza.ncsu.edu\&amp;quot;, directory_num: 0, grade_for_submission: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;comment_for_submission: nil&amp;gt;, :scores=&amp;gt;{:max=&amp;gt;95, :min=&amp;gt;88, :avg=&amp;gt;90}}}}&amp;quot;)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Usecasenew.png]]&lt;br /&gt;
&lt;br /&gt;
=='''Test Plan'''==&lt;br /&gt;
&lt;br /&gt;
=====Rspec=====&lt;br /&gt;
&lt;br /&gt;
New test cases will be added to validate the functionality that is not working as expected. Test cases for features newly added to the system will also be written.&lt;br /&gt;
Test cases will be added to these files:&lt;br /&gt;
&lt;br /&gt;
spec/features/post_deadline_review_submission_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
spec/models/assignment_participant_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The scenarios covered by the newly introduced test cases are as follows: &amp;lt;br&amp;gt;&lt;br /&gt;
1. Zero weighted scores with gray mark: rspec spec/models/assignment_participant_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
2. Non-zero weighted scores &amp;lt;br&amp;gt;&lt;br /&gt;
3. Review update: rspec spec/features/post_deadline_review_submission_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Manual Testing=====&lt;br /&gt;
&lt;br /&gt;
'''Step 1'''. The instructor creates an assignment and assigns zero and non-zero weights to the rubrics.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 2'''. The author submits their work in Expertiza.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 3'''. Peers review the submitted work and provide feedback.&amp;lt;br&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
'''Test 1: Zero weighted scores shown in gray.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
Zero-weighted scores should be displayed in gray to the logged-in user.&lt;br /&gt;
 &lt;br /&gt;
'''Test 2: Information button.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
The information button should explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”).&lt;br /&gt;
 &lt;br /&gt;
'''Step 4'''. The mentor evaluates the final submission and gives grades.&lt;br /&gt;
 &lt;br /&gt;
'''Test 3: Display non-zero weighted scores'''&amp;lt;br&amp;gt;&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
The scores shown for the assignment should be the weighted average of the scores.&lt;br /&gt;
The zero-weighted scores should be replaced by the non-zero-weighted scores.&lt;br /&gt;
 &lt;br /&gt;
'''Step 5'''. The author updates the work and resubmits it.&lt;br /&gt;
 &lt;br /&gt;
'''Test 4: Review Update'''&amp;lt;br&amp;gt;&lt;br /&gt;
The review should be reopened, and the reviewer should be able to update the submitted review.&lt;br /&gt;
If the review is not updated, the previous review score has to be retained.&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131245</id>
		<title>CSC/ECE 517 Fall 2019 - E1975. Generalize Review Versioning</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131245"/>
		<updated>2019-12-15T22:41:07Z</updated>

		<summary type="html">&lt;p&gt;Atrived: /* Rspec */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== '''Introduction''' ==&lt;br /&gt;
Expertiza is a peer-review-based learning platform in which students' work products are evaluated and scored by their peers. The review process occurs in stages. After the peer group submits initial feedback on a work product, students are given time to incorporate the comments and improve their work. This second version of the work product is then reviewed and scored again. The process may be repeated, and the average score from the last review stage is taken as the final score for that work product. This incremental development of work products and progressive learning is the fundamental concept underlying the Expertiza system.&lt;br /&gt;
&lt;br /&gt;
=====Purpose=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove the scores of reviews that are not redone. The system also emails a reviewer when there is a change in the submission. Scores are calculated on the fly based on the rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Scope=====&lt;br /&gt;
This project will enhance the existing scoring and peer-review functionality and its underlying implementation, along with testing the existing implementation. Testing the score calculation and the display of weighted and non-weighted scores, based on combinations of rubrics, is part of this project. The project also adds a feature that lets the author notify reviewers of a change in the submission.&lt;br /&gt;
&lt;br /&gt;
=====Background=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove the scores of reviews that are not redone. Scores are calculated based on the rubrics defined in the system. There is already an existing implementation of the score computation, but it does not work for multipart rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Standards=====&lt;br /&gt;
The code will adhere to the standards of the Rails framework and follow Rails and object-oriented design principles. Any newly added code will adhere to the guidelines below:&lt;br /&gt;
&lt;br /&gt;
● https://github.com/rubocop-hq/rails-style-guide &amp;lt;br&amp;gt;&lt;br /&gt;
● http://wiki.expertiza.ncsu.edu/index.php/Object-Oriented_Design_and_Programming&lt;br /&gt;
&lt;br /&gt;
== '''Problem Statement''' ==&lt;br /&gt;
=====Problem=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove the scores of reviews that are not redone. This functionality needs to be tested: it works in some cases, but we suspect it is not entirely correct.&lt;br /&gt;
&lt;br /&gt;
'''Profile Email Preference''':&lt;br /&gt;
Users in Expertiza can control what kinds of email they receive. Since reviewers will receive an email when a submission they are assigned to review changes, this can be controlled by selecting an option (When someone else submits work I am assigned to review) on the user's profile page. Because the checkboxes sit far off in the right corner of the screen, their position needs to be changed to make the page more user-friendly.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Scores for students in Expertiza can be based on a combination of rubrics: review rubrics for each round of review, author feedback, and teammate review. In CSC/ECE 517, we don’t really use the student-assigned scores to grade projects, but the peer-review score shown to the student is based entirely on the second round of review.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is how scores are calculated:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1. If only zero-weighted scores are available (here, in the first round of review), the average of all zero-weighted scores should be shown in gray font. An information button should explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”).&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is:&amp;lt;br&amp;gt;&lt;br /&gt;
    80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
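The arithmetic above can be checked with a short, hedged Ruby sketch (illustrative only; the hash shape is hypothetical, not an Expertiza data structure):&lt;br /&gt;

```ruby
# Hedged sketch: weighted average over rubric scores, assuming the weights
# sum to 100. The hash shape is hypothetical, for illustration only.
def weighted_average(scores)
  scores.sum { |s| s[:score] * s[:weight] } / 100.0
end

# The example from the text: Round 2 reviews (80%) weighted 90 percent,
# author feedback (100%) weighted 10 percent.
weighted_average([{ score: 80, weight: 90 }, { score: 100, weight: 10 }])
# evaluates to 82.0
```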
&lt;br /&gt;
3. If a review is submitted, and the author(s) then update the submission before the end of the current round, the review is reopened and the reviewer can go in and update the submitted review. '''However, the previous review score given in the current round will count until the reviewer updates the submitted review. This functionality doesn’t work for multipart rubrics.''' &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The system also doesn’t email reviewers when the author makes a new submission without changing submission parameters such as the submission URL, e.g. a new deployment to the same URL. In this case as well, without changing the current flow, the author should have an option to notify all the reviewers to revisit the submission.&lt;br /&gt;
&lt;br /&gt;
=====Interface Requirement=====&lt;br /&gt;
The overall user interface will remain the same as in the current system, with one addition to the submission page: a new button to notify reviewers when a submission does not change submission parameters such as the submission URL, but the deployment at that URL has been updated. Other than that, the interface remains the same; for the score page, only the functionality will be tested for multi-part rubrics.&lt;br /&gt;
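The proposed button only needs the list of reviewers mapped to the submitting team, which is the single database query mentioned later. A minimal, hedged sketch in plain Ruby (the data shapes are hypothetical; in Expertiza this would be an ActiveRecord query over review mappings, followed by mailer calls):&lt;br /&gt;

```ruby
# Hedged sketch, not the real implementation: collect the email addresses
# of everyone assigned to review a given team, so each can be notified of
# an updated deployment. The mapping hashes are hypothetical stand-ins
# for review-mapping records.
def reviewer_emails(review_mappings, team_id)
  review_mappings.select { |m| m[:reviewee_id] == team_id }
                 .map { |m| m[:reviewer_email] }
                 .uniq
end

mappings = [
  { reviewee_id: 7, reviewer_email: "reviewer_a@example.edu" },
  { reviewee_id: 7, reviewer_email: "reviewer_b@example.edu" },
  { reviewee_id: 9, reviewer_email: "reviewer_c@example.edu" }
]
reviewer_emails(mappings, 7)
# the two reviewers of team 7 would each receive a notification email
```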
&lt;br /&gt;
=====Quality Requirements=====&lt;br /&gt;
The interface changes are minimal and easy to adopt. The new button will be clearly visible and will carry clear text explaining its function. The proposed changes should not affect current system functionality in any way; the solution is intended only to enrich the project's features.&lt;br /&gt;
&lt;br /&gt;
=====Performance Requirements=====&lt;br /&gt;
Adding tests for the multi-part rubric functionality should not add any performance overhead. Sending emails through the “Notify All Reviewers” button should affect performance minimally: the only overhead is a database query to fetch all the reviewers, which a good database design should handle smoothly.&lt;br /&gt;
&lt;br /&gt;
=====Assumptions=====&lt;br /&gt;
The functionality should also work when the assignment rubric is not multi-part.&lt;br /&gt;
&lt;br /&gt;
== '''Proposed Solution''' ==&lt;br /&gt;
=====Zero-weighted scores=====&lt;br /&gt;
If only zero-weighted scores are available in the first round of review, the average of all zero-weighted scores will be shown in a gray font. An information button will explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”).&lt;br /&gt;
&lt;br /&gt;
Inside the compute_total_score method of the on_the_fly_calc.rb module, we'll check whether a weighted score exists; if not, we'll calculate the non-weighted score. We'll also have to add a method to the questionnaire model to fetch the non-weighted score. A few UI changes are required to display the non-weighted score in gray with an information icon and the weighted score in black.&lt;br /&gt;
&lt;br /&gt;
Possible files to be modified:&lt;br /&gt;
&lt;br /&gt;
models/on_the_fly_calc.rb&amp;lt;br&amp;gt;&lt;br /&gt;
models/questionnaire.rb&amp;lt;br&amp;gt;&lt;br /&gt;
views/grades/view_team.html.erb&lt;br /&gt;
&lt;br /&gt;
===== Non-zero weighted scores=====&lt;br /&gt;
If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is 80%⨉90% + 100%⨉10% = 82%.&lt;br /&gt;
&lt;br /&gt;
The score calculation was initially tested manually to verify scenarios with multiple weights and rounds.&lt;br /&gt;
An existing spec for score calculation with a weighted average, for both varying and non-varying rubrics, also covers this scenario.&lt;br /&gt;
The spec is written under spec/models/assignment_spec.rb. The &amp;quot;scores&amp;quot; method of models/assignment.rb is called for online (on-the-fly) score calculation.&lt;br /&gt;
&lt;br /&gt;
The following file has the spec for weighted-average score calculation:&lt;br /&gt;
spec/models/assignment_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Review Update=====&lt;br /&gt;
If a review is submitted, and the author(s) then update the submission before the end of the current round, the review is reopened and the reviewer can go in and update the submitted review. However, the previous review score given in the current round will count until the reviewer updates the submitted review.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;This feature can be broken down into the following sub-cases:&lt;br /&gt;
&amp;lt;br&amp;gt;1. A submission can only be made before the deadline&lt;br /&gt;
&amp;lt;br&amp;gt;2. A reviewer can edit a review only before the deadline&lt;br /&gt;
&amp;lt;br&amp;gt;3. Score calculation should only be done for the latest submitted review&lt;br /&gt;
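Sub-case 3 can be illustrated with a short Ruby sketch. The Review struct and latest_reviews method are made-up stand-ins for Expertiza's Response records, not the actual query; the point is that only each reviewer's most recently submitted review counts toward the score:

```ruby
# Illustrative only: Review stands in for Expertiza's Response records.
Review = Struct.new(:reviewer, :submitted_at, :score)

# Keep just the most recently submitted review per reviewer.
def latest_reviews(reviews)
  reviews.group_by(&:reviewer)
         .map { |_reviewer, revs| revs.max_by(&:submitted_at) }
end

reviews = [
  Review.new(:alice, 1, 70),   # superseded by alice's later review
  Review.new(:alice, 2, 85),
  Review.new(:bob,   1, 90)
]
latest_reviews(reviews).map(&:score).sort  # => [85, 90]
```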
&lt;br /&gt;
Specs for these sub-cases:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
1. Spec to ensure submission can't be made after the deadline&lt;br /&gt;
&lt;br /&gt;
File : spec/features/post_deadline_review_submission_spec.rb&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
it &amp;quot;should not allow submission after deadline&amp;quot; do&lt;br /&gt;
&lt;br /&gt;
    user = User.find_by(name: &amp;quot;student2065&amp;quot;)&lt;br /&gt;
    stub_current_user(user, user.role.name, user.role)&lt;br /&gt;
&lt;br /&gt;
    # goto student_task page, which has link to &amp;quot;Your work&amp;quot;&lt;br /&gt;
    visit 'student_task/view?id=1'&lt;br /&gt;
&lt;br /&gt;
    # the page will have text &amp;quot;Your work&amp;quot; but will be grayed&lt;br /&gt;
    expect(page).to have_content &amp;quot;Your work&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    # the page will not have link to content &amp;quot;Your work&amp;quot;&lt;br /&gt;
    expect{click_link &amp;quot;Your work&amp;quot;}.to raise_error(Capybara::ElementNotFound)&lt;br /&gt;
&lt;br /&gt;
  end&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
2. Spec to ensure review can be done only before deadline&lt;br /&gt;
&lt;br /&gt;
File : spec/features/post_deadline_review_submission_spec.rb&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
it &amp;quot;should not be able to review work after deadline&amp;quot; do&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    # The spec is written to reproduce the following bug: the &amp;quot;Others' work&amp;quot; link stays open after the deadline has passed&lt;br /&gt;
&lt;br /&gt;
    user = User.find_by(name: &amp;quot;student2065&amp;quot;)&lt;br /&gt;
    stub_current_user(user, user.role.name, user.role)&lt;br /&gt;
    visit 'student_task/view?id=1'&lt;br /&gt;
&lt;br /&gt;
    # the page should have content, but after deadline passes it is displayed as gray&lt;br /&gt;
    # but there should not be any link attached to it&lt;br /&gt;
    expect(page).to have_content &amp;quot;Others' work&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    # this is the bug, even after deadline has passed, the link is still present&lt;br /&gt;
    # the ui comment in file views/student_task/view.html.erb says&lt;br /&gt;
    # &amp;lt;!--Akshay: Fix Issue 1218 - this link is disabled if assignment does not require any peer reviews--&amp;gt;&lt;br /&gt;
    # But the link seems to be open even after deadline passed.&lt;br /&gt;
    # Screenshot attached as part of wiki for E1975, Fall 2019&lt;br /&gt;
    expect(page).to have_link(&amp;quot;Others' work&amp;quot;, href: &amp;quot;/student_review/list?id=1&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
  end&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
The test scenario reproduces the following bug: at times the review link remains open even after the deadline has passed. The bug is also shown in the following screenshot for Program 1.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:ReviewBug.png]]&lt;br /&gt;
&lt;br /&gt;
3. Score calculation&lt;br /&gt;
&lt;br /&gt;
Score calculation for assignments under review is part of models/assignment.rb. It is handled by the &amp;quot;scores&amp;quot; method in this file, and testing is done for both varying (multipart) and non-varying rubrics.&lt;br /&gt;
The respective spec is written in file spec/models/assignment_spec.rb. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
describe '#scores' do&lt;br /&gt;
    context 'when assignment is varying rubric by round assignment' do&lt;br /&gt;
      it 'calculates scores in each round of each team in current assignment' do&lt;br /&gt;
        allow(participant).to receive(:scores).with(review1: [question]).and_return(98)&lt;br /&gt;
        allow(assignment).to receive(:varying_rubrics_by_round?).and_return(true)&lt;br /&gt;
        allow(assignment).to receive(:num_review_rounds).and_return(1)&lt;br /&gt;
        allow(ReviewResponseMap).to receive(:get_responses_for_team_round).with(team, 1).and_return([response])&lt;br /&gt;
        allow(Answer).to receive(:compute_scores).with([response], [question]).and_return(max: 95, min: 88, avg: 90)&lt;br /&gt;
        expect(assignment.scores(review1: [question]).inspect).to eq(&amp;quot;{:participants=&amp;gt;{:\&amp;quot;1\&amp;quot;=&amp;gt;98}, :teams=&amp;gt;{:\&amp;quot;0\&amp;quot;=&amp;gt;{:team=&amp;gt;#&amp;lt;AssignmentTeam id: 1, &amp;quot;\&lt;br /&gt;
          &amp;quot;name: \&amp;quot;no team\&amp;quot;, parent_id: 1, type: \&amp;quot;AssignmentTeam\&amp;quot;, comments_for_advertisement: nil, advertise_for_partner: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;submitted_hyperlinks: \&amp;quot;---\\n- https://www.expertiza.ncsu.edu\&amp;quot;, directory_num: 0, grade_for_submission: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;comment_for_submission: nil&amp;gt;, :scores=&amp;gt;{:max=&amp;gt;95, :min=&amp;gt;88, :avg=&amp;gt;90.0}}}}&amp;quot;)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    context 'when assignment is not varying rubric by round assignment' do&lt;br /&gt;
      it 'calculates scores of each team in current assignment' do&lt;br /&gt;
        allow(participant).to receive(:scores).with(review: [question]).and_return(98)&lt;br /&gt;
        allow(assignment).to receive(:varying_rubrics_by_round?).and_return(false)&lt;br /&gt;
        allow(ReviewResponseMap).to receive(:get_assessments_for).with(team).and_return([response])&lt;br /&gt;
        allow(Answer).to receive(:compute_scores).with([response], [question]).and_return(max: 95, min: 88, avg: 90)&lt;br /&gt;
        expect(assignment.scores(review: [question]).inspect).to eq(&amp;quot;{:participants=&amp;gt;{:\&amp;quot;1\&amp;quot;=&amp;gt;98}, :teams=&amp;gt;{:\&amp;quot;0\&amp;quot;=&amp;gt;{:team=&amp;gt;#&amp;lt;AssignmentTeam id: 1, &amp;quot;\&lt;br /&gt;
          &amp;quot;name: \&amp;quot;no team\&amp;quot;, parent_id: 1, type: \&amp;quot;AssignmentTeam\&amp;quot;, comments_for_advertisement: nil, advertise_for_partner: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;submitted_hyperlinks: \&amp;quot;---\\n- https://www.expertiza.ncsu.edu\&amp;quot;, directory_num: 0, grade_for_submission: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;comment_for_submission: nil&amp;gt;, :scores=&amp;gt;{:max=&amp;gt;95, :min=&amp;gt;88, :avg=&amp;gt;90}}}}&amp;quot;)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:E1975_CHECKBOX_1.jpg]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Usecasenew.png]]&lt;br /&gt;
&lt;br /&gt;
=='''Test Plan'''==&lt;br /&gt;
&lt;br /&gt;
=====Rspec=====&lt;br /&gt;
&lt;br /&gt;
New test cases will be added to validate the functionality that is not working as expected, along with test cases for features newly added to the system. &lt;br /&gt;
Test cases will be added to these files:&lt;br /&gt;
&lt;br /&gt;
spec/features/post_deadline_review_submission_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
spec/models/assignment_participant_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The scenarios covered by the newly introduced test cases are as follows: &amp;lt;br&amp;gt;&lt;br /&gt;
1. Zero weighted scores with gray mark: rspec spec/models/assignment_participant_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
2. Non-zero weighted scores &amp;lt;br&amp;gt;&lt;br /&gt;
3. Review update: rspec spec/features/post_deadline_review_submission_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Manual Testing=====&lt;br /&gt;
&lt;br /&gt;
'''Step 1'''. The instructor will create an assignment and assign the zero and non-zero weights to the rubrics.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 2'''. The author submits the work done in expertiza.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 3'''. Peers check the submitted work and give reviews for it.&amp;lt;br&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
'''Test 1: Zero weighted scores shown in gray.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Login as an author in expertiza and go to scores page to view the feedback.&lt;br /&gt;
Zero weighted scores should be displayed in gray to the logged in user.&lt;br /&gt;
 &lt;br /&gt;
'''Test 2: Information button.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Login as an author in expertiza and go to scores page to view the feedback.&lt;br /&gt;
The information button should explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”)&lt;br /&gt;
 &lt;br /&gt;
'''Step 4'''. The mentor evaluates the final submission and gives grades.&lt;br /&gt;
 &lt;br /&gt;
'''Test 3: Display non-zero weighted scores'''&amp;lt;br&amp;gt;&lt;br /&gt;
Login as an author in expertiza and go to scores page to view the feedback.&lt;br /&gt;
The scores shown for the assignment should be the weighted average of the scores.&lt;br /&gt;
The zero weighted scores have to be replaced with non-zero weighted scores.&lt;br /&gt;
 &lt;br /&gt;
'''Step 5''': The author updates the work and resubmits it.&lt;br /&gt;
 &lt;br /&gt;
'''Test 4: Review Update'''&amp;lt;br&amp;gt;&lt;br /&gt;
The review should be reopened, and the reviewer should be able to update the submitted review.&lt;br /&gt;
If the review is not updated, the previous review score has to be retained.&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131244</id>
		<title>CSC/ECE 517 Fall 2019 - E1975. Generalize Review Versioning</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131244"/>
		<updated>2019-12-15T22:35:02Z</updated>

		<summary type="html">&lt;p&gt;Atrived: /* Non-zero weighted scores */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== '''Introduction''' ==&lt;br /&gt;
Expertiza is a peer-review based learning platform in which work-products by students are evaluated and scored by peer-review of other students. Review process occurs in stages. After submission of initial feedback for a work-product by the peer-group, students are allowed time to incorporate the comments and improve their work. This second version of the work product is reviewed again and scores are given. This process might be repeated again and the average score from the last review stage is considered as the final score for that work-product. This incremental development of work-products and progressive learning is the fundamental concept underlying the Expertiza system.&lt;br /&gt;
&lt;br /&gt;
=====Purpose=====&lt;br /&gt;
The system currently is designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and automatically remove scores of reviews that are not redone. The system also triggers an email to reviewer when there is a change in the submission. The scores are calculated on the fly based on the rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Scope=====&lt;br /&gt;
This project will enhance the existing scoring and peer review functionality and the underlying implementation along with the testing of the existing implementation. Testing of the score calculation, display for the weighted and non-weighted scores based on the combination of the rubrics are part of this project. This project also includes an additional feature for the author to notify the reviewers for the change in the submission.&lt;br /&gt;
&lt;br /&gt;
=====Background=====&lt;br /&gt;
The system currently is designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and automatically remove scores of reviews that are not redone. The scores are calculated based on the rubrics defined in the system. There’s already an existing implementation for the computation of the score, but it doesn’t work for multipart rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Standards=====&lt;br /&gt;
The new standards that will be adhered to in this system are those given for the Rails framework, and the code should follow Rails and object-oriented design principles. Any newly added code will adhere to the guidelines mentioned below:&lt;br /&gt;
&lt;br /&gt;
● https://github.com/rubocop-hq/rails-style-guide &amp;lt;br&amp;gt;&lt;br /&gt;
● http://wiki.expertiza.ncsu.edu/index.php/Object-Oriented_Design_and_Programming.&lt;br /&gt;
&lt;br /&gt;
== '''Problem Statement''' ==&lt;br /&gt;
=====Problem=====&lt;br /&gt;
The system currently is designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and automatically remove scores of reviews that are not redone. The functionality needs to be tested: it works in some cases, but we suspect it is not entirely correct.&lt;br /&gt;
&lt;br /&gt;
'''Profile Email Preference''':&lt;br /&gt;
Users in Expertiza have the liberty to control what kinds of emails they receive. Reviewers will receive an email when there is a change in a submission; this can be controlled by selecting an option (When someone else submits work I am assigned to review) on the user's profile page. As the checkboxes sit far off in the right corner of the screen, their position needs to be changed to make the page more user-friendly. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Scores for students in Expertiza can be based on a combination of rubrics: review rubrics for each round of review, author feedback, and teammate review. In CSC/ECE 517, we don’t really use the student-assigned scores to grade projects, but the peer-review score shown to the student is based entirely on the second round of review.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is how scores are calculated:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1. If only zero-weighted scores are available (here, in the first round of review), the average of all zero-weighted scores should be shown in grey font. An information button should explain why the scores are shown in grey (“The reviews submitted so far do not count toward the author’s final grade.”).&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is:&amp;lt;br&amp;gt;&lt;br /&gt;
    80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
&lt;br /&gt;
3. If a review is submitted, and then the author(s) update the submission before the end of the current round, it will reopen the review, and then the reviewer can go in and update the submitted review. '''However, the previous review score that was given in the current round will count until the reviewer updates the submitted review'''. '''This functionality doesn’t work for multipart rubrics.''' &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The system also doesn’t generate emails to reviewers when a new submission is made by the author without changing submission parameters like the submission URL, e.g., a new submission made via a new deployment where the deployment URL doesn’t change. In this case as well, without changing the current flow, the author should have an option to notify all the reviewers to revisit the submission.&lt;br /&gt;
&lt;br /&gt;
=====Interface Requirement=====&lt;br /&gt;
The overall user interface will remain the same as in the current system. Since this is a modification to the current submission page, there will be one new addition to it: a button to notify reviewers when a submission doesn’t change submission parameters like the submission URL, but the deployment has been updated at the same URL. Other than that, the interface remains the same for the score page; only the functionality will be tested for multi-part rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Quality Requirements=====&lt;br /&gt;
The interface changes are minimal and very easy to adapt. The new button will be clearly visible and is supposed to have a clear text to interpret its functionality. The proposed changes should not affect the current system functionality in any way. The solution is only intended to enrich the project features.&lt;br /&gt;
&lt;br /&gt;
=====Performance Requirements=====&lt;br /&gt;
Adding tests for multi-part rubric functionality should not cause any overhead in the performance of the system. Sending emails through the “Notify All Reviewers” button should affect the system minimally in terms of performance. The only overhead is a database query to get all the reviewers. A good database design should incorporate such a query smoothly.&lt;br /&gt;
&lt;br /&gt;
=====Assumptions=====&lt;br /&gt;
The functionality should work in case the assignment rubric is not multi-part.&lt;br /&gt;
&lt;br /&gt;
== '''Proposed Solution''' ==&lt;br /&gt;
=====Zero-weighted scores=====&lt;br /&gt;
If only zero-weighted scores are available in the first round of review, the average of all zero-weighted scores will be shown in a gray font. An information button will explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade”)&lt;br /&gt;
&lt;br /&gt;
Inside the compute_total_score method in the on_the_fly_calc.rb module, we'll check whether a weighted score exists; if not, we'll calculate the non-weighted score. We'll also have to add a method to the questionnaire model to get the non-weighted score. A few UI changes are required to display the non-weighted score in gray with an information icon and the weighted score in black. &lt;br /&gt;
&lt;br /&gt;
Possible files to be modified:&lt;br /&gt;
&lt;br /&gt;
models/on_the_fly_calc.rb&amp;lt;br&amp;gt;&lt;br /&gt;
models/questionnaire.rb&amp;lt;br&amp;gt;&lt;br /&gt;
models/view_team.html.erb&lt;br /&gt;
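The fallback described above can be sketched roughly as follows. This is a hedged illustration with invented names (total_score, the hash keys), not the actual on_the_fly_calc.rb code:

```ruby
# Rough sketch (invented names, not Expertiza's on_the_fly_calc.rb):
# if every rubric weight is zero, fall back to a plain average and flag
# the result so the UI can render it in gray with the information icon.
def total_score(rubric_scores)
  weighted = rubric_scores.reject { |entry| entry[:weight].zero? }
  if weighted.any?
    # weights are percentages here, e.g. 90 and 10
    { score: weighted.sum { |e| e[:score] * e[:weight] / 100.0 }, gray: false }
  else
    avg = rubric_scores.sum { |e| e[:score] }.to_f / rubric_scores.size
    { score: avg, gray: true }  # gray: only zero-weighted reviews so far
  end
end

total_score([{ score: 90, weight: 0 }, { score: 80, weight: 0 }])
# => { score: 85.0, gray: true }
```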
&lt;br /&gt;
===== Non-zero weighted scores=====&lt;br /&gt;
If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is 80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
&lt;br /&gt;
The score calculation was initially tested manually to verify scenarios with multiple weights and rounds.&lt;br /&gt;
The existing spec for score calculation with a weighted average, for both varying and non-varying rubrics, also covers this scenario.&lt;br /&gt;
The spec is written under /spec/models/assignment_spec.rb. The &amp;quot;scores&amp;quot; method of models/assignment.rb is called for on-the-fly score calculation.&lt;br /&gt;
&lt;br /&gt;
The following file contains the spec for weighted-average score calculation.&lt;br /&gt;
spec/models/assignment_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Review Update=====&lt;br /&gt;
If a review is submitted, and then the author(s) update the submission before the end of the current round, the review is reopened and the reviewer can go in and update the submitted review. However, the previous review score that was given in the current round will count until the reviewer updates the submitted review.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;This feature can be broken down into the following sub-cases:&lt;br /&gt;
&amp;lt;br&amp;gt;1. A submission can only be made before the deadline&lt;br /&gt;
&amp;lt;br&amp;gt;2. A reviewer can edit a review only before the deadline&lt;br /&gt;
&amp;lt;br&amp;gt;3. Score calculation should only be done for the latest submitted review&lt;br /&gt;
&lt;br /&gt;
Specs for these sub-cases:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
1. Spec to ensure submission can't be made after the deadline&lt;br /&gt;
&lt;br /&gt;
File : spec/features/post_deadline_review_submission_spec.rb&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
it &amp;quot;should not allow submission after deadline&amp;quot; do&lt;br /&gt;
&lt;br /&gt;
    user = User.find_by(name: &amp;quot;student2065&amp;quot;)&lt;br /&gt;
    stub_current_user(user, user.role.name, user.role)&lt;br /&gt;
&lt;br /&gt;
    # goto student_task page, which has link to &amp;quot;Your work&amp;quot;&lt;br /&gt;
    visit 'student_task/view?id=1'&lt;br /&gt;
&lt;br /&gt;
    # the page will have text &amp;quot;Your work&amp;quot; but will be grayed&lt;br /&gt;
    expect(page).to have_content &amp;quot;Your work&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    # the page will not have link to content &amp;quot;Your work&amp;quot;&lt;br /&gt;
    expect{click_link &amp;quot;Your work&amp;quot;}.to raise_error(Capybara::ElementNotFound)&lt;br /&gt;
&lt;br /&gt;
  end&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
2. Spec to ensure review can be done only before deadline&lt;br /&gt;
&lt;br /&gt;
File : spec/features/post_deadline_review_submission_spec.rb&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
it &amp;quot;should not be able to review work after deadline&amp;quot; do&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    # The spec is written to reproduce the following bug: the &amp;quot;Others' work&amp;quot; link stays open after the deadline has passed&lt;br /&gt;
&lt;br /&gt;
    user = User.find_by(name: &amp;quot;student2065&amp;quot;)&lt;br /&gt;
    stub_current_user(user, user.role.name, user.role)&lt;br /&gt;
    visit 'student_task/view?id=1'&lt;br /&gt;
&lt;br /&gt;
    # the page should have content, but after deadline passes it is displayed as gray&lt;br /&gt;
    # but there should not be any link attached to it&lt;br /&gt;
    expect(page).to have_content &amp;quot;Others' work&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    # this is the bug, even after deadline has passed, the link is still present&lt;br /&gt;
    # the ui comment in file views/student_task/view.html.erb says&lt;br /&gt;
    # &amp;lt;!--Akshay: Fix Issue 1218 - this link is disabled if assignment does not require any peer reviews--&amp;gt;&lt;br /&gt;
    # But the link seems to be open even after deadline passed.&lt;br /&gt;
    # Screenshot attached as part of wiki for E1975, Fall 2019&lt;br /&gt;
    expect(page).to have_link(&amp;quot;Others' work&amp;quot;, href: &amp;quot;/student_review/list?id=1&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
  end&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
The test scenario reproduces the following bug: at times the review link remains open even after the deadline has passed. The bug is also shown in the following screenshot for Program 1.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:ReviewBug.png]]&lt;br /&gt;
&lt;br /&gt;
3. Score calculation&lt;br /&gt;
&lt;br /&gt;
Score calculation for assignments under review is part of models/assignment.rb. It is handled by the &amp;quot;scores&amp;quot; method in this file, and testing is done for both varying (multipart) and non-varying rubrics.&lt;br /&gt;
The respective spec is written in file spec/models/assignment_spec.rb. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
describe '#scores' do&lt;br /&gt;
    context 'when assignment is varying rubric by round assignment' do&lt;br /&gt;
      it 'calculates scores in each round of each team in current assignment' do&lt;br /&gt;
        allow(participant).to receive(:scores).with(review1: [question]).and_return(98)&lt;br /&gt;
        allow(assignment).to receive(:varying_rubrics_by_round?).and_return(true)&lt;br /&gt;
        allow(assignment).to receive(:num_review_rounds).and_return(1)&lt;br /&gt;
        allow(ReviewResponseMap).to receive(:get_responses_for_team_round).with(team, 1).and_return([response])&lt;br /&gt;
        allow(Answer).to receive(:compute_scores).with([response], [question]).and_return(max: 95, min: 88, avg: 90)&lt;br /&gt;
        expect(assignment.scores(review1: [question]).inspect).to eq(&amp;quot;{:participants=&amp;gt;{:\&amp;quot;1\&amp;quot;=&amp;gt;98}, :teams=&amp;gt;{:\&amp;quot;0\&amp;quot;=&amp;gt;{:team=&amp;gt;#&amp;lt;AssignmentTeam id: 1, &amp;quot;\&lt;br /&gt;
          &amp;quot;name: \&amp;quot;no team\&amp;quot;, parent_id: 1, type: \&amp;quot;AssignmentTeam\&amp;quot;, comments_for_advertisement: nil, advertise_for_partner: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;submitted_hyperlinks: \&amp;quot;---\\n- https://www.expertiza.ncsu.edu\&amp;quot;, directory_num: 0, grade_for_submission: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;comment_for_submission: nil&amp;gt;, :scores=&amp;gt;{:max=&amp;gt;95, :min=&amp;gt;88, :avg=&amp;gt;90.0}}}}&amp;quot;)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    context 'when assignment is not varying rubric by round assignment' do&lt;br /&gt;
      it 'calculates scores of each team in current assignment' do&lt;br /&gt;
        allow(participant).to receive(:scores).with(review: [question]).and_return(98)&lt;br /&gt;
        allow(assignment).to receive(:varying_rubrics_by_round?).and_return(false)&lt;br /&gt;
        allow(ReviewResponseMap).to receive(:get_assessments_for).with(team).and_return([response])&lt;br /&gt;
        allow(Answer).to receive(:compute_scores).with([response], [question]).and_return(max: 95, min: 88, avg: 90)&lt;br /&gt;
        expect(assignment.scores(review: [question]).inspect).to eq(&amp;quot;{:participants=&amp;gt;{:\&amp;quot;1\&amp;quot;=&amp;gt;98}, :teams=&amp;gt;{:\&amp;quot;0\&amp;quot;=&amp;gt;{:team=&amp;gt;#&amp;lt;AssignmentTeam id: 1, &amp;quot;\&lt;br /&gt;
          &amp;quot;name: \&amp;quot;no team\&amp;quot;, parent_id: 1, type: \&amp;quot;AssignmentTeam\&amp;quot;, comments_for_advertisement: nil, advertise_for_partner: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;submitted_hyperlinks: \&amp;quot;---\\n- https://www.expertiza.ncsu.edu\&amp;quot;, directory_num: 0, grade_for_submission: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;comment_for_submission: nil&amp;gt;, :scores=&amp;gt;{:max=&amp;gt;95, :min=&amp;gt;88, :avg=&amp;gt;90}}}}&amp;quot;)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:E1975_CHECKBOX_1.jpg]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Usecasenew.png]]&lt;br /&gt;
&lt;br /&gt;
=='''Test Plan'''==&lt;br /&gt;
&lt;br /&gt;
=====Rspec=====&lt;br /&gt;
&lt;br /&gt;
New test cases will be added to validate the functionality that is not working as expected, along with test cases for features newly added to the system. Test cases will be added to these files:&lt;br /&gt;
spec/models/on_the_fly_calc_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
spec/models/questionnaire_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The scenarios covered by the newly introduced test cases are as follows: &amp;lt;br&amp;gt;&lt;br /&gt;
1. Zero weighted scores &amp;lt;br&amp;gt;&lt;br /&gt;
2. Non-zero weighted scores &amp;lt;br&amp;gt;&lt;br /&gt;
3. Review update &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Manual Testing=====&lt;br /&gt;
&lt;br /&gt;
'''Step 1'''. The instructor will create an assignment and assign the zero and non-zero weights to the rubrics.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 2'''. The author submits the work done in expertiza.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 3'''. Peers check the submitted work and give reviews for it.&amp;lt;br&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
'''Test 1: Zero weighted scores shown in gray.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Login as an author in expertiza and go to scores page to view the feedback.&lt;br /&gt;
Zero weighted scores should be displayed in gray to the logged in user.&lt;br /&gt;
 &lt;br /&gt;
'''Test 2: Information button.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Login as an author in expertiza and go to scores page to view the feedback.&lt;br /&gt;
The information button should explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”)&lt;br /&gt;
 &lt;br /&gt;
'''Step 4'''. The mentor evaluates the final submission and gives grades.&lt;br /&gt;
 &lt;br /&gt;
'''Test 3: Display non-zero weighted scores'''&amp;lt;br&amp;gt;&lt;br /&gt;
Login as an author in expertiza and go to scores page to view the feedback.&lt;br /&gt;
The scores shown for the assignment should be the weighted average of the scores.&lt;br /&gt;
The zero weighted scores have to be replaced with non-zero weighted scores.&lt;br /&gt;
 &lt;br /&gt;
'''Step 5''': The author updates the work and resubmits it.&lt;br /&gt;
 &lt;br /&gt;
'''Test 4: Review Update'''&amp;lt;br&amp;gt;&lt;br /&gt;
The review should be reopened, and the reviewer should be able to update the submitted review.&lt;br /&gt;
If the review is not updated, the previous review score has to be retained.&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131242</id>
		<title>CSC/ECE 517 Fall 2019 - E1975. Generalize Review Versioning</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131242"/>
		<updated>2019-12-15T21:45:30Z</updated>

		<summary type="html">&lt;p&gt;Atrived: /* Review Update */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== '''Introduction''' ==&lt;br /&gt;
Expertiza is a peer-review based learning platform in which work-products by students are evaluated and scored by peer-review of other students. Review process occurs in stages. After submission of initial feedback for a work-product by the peer-group, students are allowed time to incorporate the comments and improve their work. This second version of the work product is reviewed again and scores are given. This process might be repeated again and the average score from the last review stage is considered as the final score for that work-product. This incremental development of work-products and progressive learning is the fundamental concept underlying the Expertiza system.&lt;br /&gt;
&lt;br /&gt;
=====Purpose=====&lt;br /&gt;
The system currently is designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and automatically remove scores of reviews that are not redone. The system also triggers an email to reviewer when there is a change in the submission. The scores are calculated on the fly based on the rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Scope=====&lt;br /&gt;
This project will enhance the existing scoring and peer review functionality and the underlying implementation along with the testing of the existing implementation. Testing of the score calculation, display for the weighted and non-weighted scores based on the combination of the rubrics are part of this project. This project also includes an additional feature for the author to notify the reviewers for the change in the submission.&lt;br /&gt;
&lt;br /&gt;
=====Background=====&lt;br /&gt;
The system currently is designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and automatically remove scores of reviews that are not redone. The scores are calculated based on the rubrics defined in the system. There’s already an existing implementation for the computation of the score, but it doesn’t work for multipart rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Standards=====&lt;br /&gt;
This project follows the standards of the Rails framework, and the code should follow Rails and object-oriented design principles. Any newly added code will adhere to the guidelines below:&lt;br /&gt;
&lt;br /&gt;
● https://github.com/rubocop-hq/rails-style-guide&lt;br /&gt;
● http://wiki.expertiza.ncsu.edu/index.php/Object-Oriented_Design_and_Programming&lt;br /&gt;
&lt;br /&gt;
== '''Problem Statement''' ==&lt;br /&gt;
=====Problem=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing form) and to automatically remove the scores of reviews that are not redone. This functionality needs to be tested: it works in some cases, but we suspect it is not entirely correct.&lt;br /&gt;
&lt;br /&gt;
'''Profile Email Preference''':&lt;br /&gt;
Users in Expertiza can control what kinds of email they receive. Because reviewers will receive an email when a submission they are assigned to review changes, this behavior can be controlled by selecting an option (“When someone else submits work I am assigned to review”) on the user's profile page. Since the checkboxes sit in the far right corner of the screen, their position needs to be changed to make the page more user-friendly.&lt;br /&gt;
&lt;br /&gt;
Scores for students in Expertiza can be based on a combination of rubrics: review rubrics for each round of review, author feedback, and teammate review. In CSC/ECE 517, we don’t really use the student-assigned scores to grade projects, but the peer-review score shown to the student is based entirely on the second round of review.&lt;br /&gt;
&lt;br /&gt;
Here is how scores are calculated:&lt;br /&gt;
&lt;br /&gt;
1. If only zero-weighted scores are available (for example, in the first round of review), the average of all zero-weighted scores should be shown in a gray font. An information button should explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”)&lt;br /&gt;
&lt;br /&gt;
2. If any non-zero-weighted scores are available, the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80% while the only author-feedback score was 100%, then the overall score is:&lt;br /&gt;
    80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
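The arithmetic above can be sketched in plain Ruby (the method name weighted_score is illustrative only and is not part of Expertiza; scores and weights are expressed as fractions):&lt;br /&gt;

```ruby
# Weighted average of rubric scores: each [score, weight] pair uses
# fractions in 0.0..1.0, and the weights are assumed to sum to 1.
def weighted_score(entries)
  entries.sum { |score, weight| score * weight }
end

round2_avg = (0.80 + 0.80) / 2.0              # two Round 2 reviews at 80%
overall = weighted_score([[round2_avg, 0.90], [1.00, 0.10]])
puts format('%.0f%%', overall * 100)          # prints 82%
```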
&lt;br /&gt;
3. If a review is submitted and the author(s) then update the submission before the end of the current round, the review is reopened, and the reviewer can go in and update the submitted review. '''However, the previous review score given in the current round counts until the reviewer updates the submitted review.''' '''This functionality doesn’t work for multipart rubrics.'''&lt;br /&gt;
&lt;br /&gt;
The system also doesn’t generate emails to reviewers when the author makes a new submission without changing submission parameters such as the submission URL. For example, a new submission is made via a new deployment, but the deployment URL doesn’t change. In this case as well, without changing the current flow, the author should have an option to notify all the reviewers to revisit the submission.&lt;br /&gt;
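As a rough sketch of what the proposed notification could do (all names here are hypothetical, and printing stands in for the actual mailer call; Expertiza's real models would supply the reviewer list):&lt;br /&gt;

```ruby
# Hypothetical helper: given the review mappings for a submission,
# collect the distinct reviewer addresses so each reviewer is
# notified exactly once that the deployment changed.
def reviewer_emails(review_maps)
  review_maps.map { |m| m[:reviewer_email] }.compact.uniq
end

maps = [
  { reviewer_email: 'alice@example.edu' },
  { reviewer_email: 'bob@example.edu' },
  { reviewer_email: 'alice@example.edu' },  # same reviewer, two mappings
  { reviewer_email: nil }                   # unfilled review slot is skipped
]

reviewer_emails(maps).each do |address|
  # In the real feature this would enqueue a mailer job.
  puts "Notify #{address}: the submission was redeployed at the same URL"
end
```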
&lt;br /&gt;
=====Interface Requirement=====&lt;br /&gt;
The overall user interface remains the same as in the current system. Since this is a modification to the current submission page, there will be one new addition to it: a button to notify reviewers when a submission doesn’t change submission parameters such as the submission URL, but the deployment has been updated at the same URL. Otherwise the interface remains the same for the score page; only its functionality will be tested for multipart rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Quality Requirements=====&lt;br /&gt;
The interface changes are minimal and easy to adopt. The new button will be clearly visible and will carry clear text describing its function. The proposed changes should not affect current system functionality in any way; the solution is only intended to enrich the project's features.&lt;br /&gt;
&lt;br /&gt;
=====Performance Requirements=====&lt;br /&gt;
Adding tests for multipart-rubric functionality should not add any overhead to system performance. Sending emails through the “Notify All Reviewers” button should affect performance minimally: the only overhead is a database query to fetch all the reviewers, which a well-designed database schema should handle smoothly.&lt;br /&gt;
&lt;br /&gt;
=====Assumptions=====&lt;br /&gt;
The existing functionality is assumed to work when the assignment rubric is not multipart.&lt;br /&gt;
&lt;br /&gt;
== '''Proposed Solution''' ==&lt;br /&gt;
=====Zero-weighted scores=====&lt;br /&gt;
If only zero-weighted scores are available in the first round of review, the average of all zero-weighted scores will be shown in a gray font. An information button will explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”).&lt;br /&gt;
&lt;br /&gt;
Inside the compute_total_score method in the on_the_fly_calc.rb module, we will check whether a weighted score exists; if not, we will calculate the non-weighted score. We will also have to add a method to the questionnaire model that returns the non-weighted score. A few UI changes are required to display the non-weighted score in gray with an information icon and the weighted score in black.&lt;br /&gt;
&lt;br /&gt;
Possible files to be modified:&lt;br /&gt;
&lt;br /&gt;
models/on_the_fly_calc.rb&lt;br /&gt;
models/questionnaire.rb&lt;br /&gt;
models/view_team.html.erb&lt;br /&gt;
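A minimal sketch of the fallback described above, assuming plain hashes in place of Expertiza's real response objects and weights expressed as fractions summing to 1 (the helper shape is an assumption, not the actual on_the_fly_calc.rb code):&lt;br /&gt;

```ruby
# Sketch of the proposed fallback: use the weighted total when any
# rubric weight is non-zero; otherwise fall back to a plain average,
# flagged so the UI can render it in gray with the information icon.
def total_score(scores)
  weighted = scores.reject { |s| s[:weight].zero? }
  if weighted.empty?
    avg = scores.sum { |s| s[:value] } / scores.size.to_f
    { value: avg, counts_toward_grade: false }   # shown in gray
  else
    # Weights are assumed to be fractions summing to 1.
    total = weighted.sum { |s| s[:value] * s[:weight] }
    { value: total, counts_toward_grade: true }  # shown in black
  end
end
```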
&lt;br /&gt;
=====Non-zero weighted scores=====&lt;br /&gt;
If any non-zero-weighted scores are available, the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80% while the only author-feedback score was 100%, then the overall score is 80%⨉90% + 100%⨉10% = 82%.&lt;br /&gt;
&lt;br /&gt;
We will first write the test cases by adding them to '''spec/models/assignment_spec.rb'''.&lt;br /&gt;
&lt;br /&gt;
We will also test this manually by computing the score; if a bug is found, the following file may need to be modified:&lt;br /&gt;
&lt;br /&gt;
spec/models/assignment_spec.rb&lt;br /&gt;
&lt;br /&gt;
=====Review Update=====&lt;br /&gt;
If a review is submitted and the author(s) then update the submission before the end of the current round, the review is reopened, and the reviewer can go in and update the submitted review. However, the previous review score given in the current round counts until the reviewer updates the submitted review.&lt;br /&gt;
&lt;br /&gt;
This feature can be broken down into the following sub-cases:&lt;br /&gt;
1. A submission can be made only before the deadline&lt;br /&gt;
2. A reviewer can edit a review only before the deadline&lt;br /&gt;
3. Score calculation should be done only for the latest submitted review&lt;br /&gt;
&lt;br /&gt;
Each sub-case is covered by a spec:&lt;br /&gt;
&lt;br /&gt;
1. Spec to ensure submission can't be made after the deadline&lt;br /&gt;
&lt;br /&gt;
File: spec/features/post_deadline_review_submission_spec.rb&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
it &amp;quot;should not allow submission after deadline&amp;quot; do&lt;br /&gt;
&lt;br /&gt;
    user = User.find_by(name: &amp;quot;student2065&amp;quot;)&lt;br /&gt;
    stub_current_user(user, user.role.name, user.role)&lt;br /&gt;
&lt;br /&gt;
    # goto student_task page, which has link to &amp;quot;Your work&amp;quot;&lt;br /&gt;
    visit 'student_task/view?id=1'&lt;br /&gt;
&lt;br /&gt;
    # the page will have text &amp;quot;Your work&amp;quot; but will be grayed&lt;br /&gt;
    expect(page).to have_content &amp;quot;Your work&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    # the page will not have link to content &amp;quot;Your work&amp;quot;&lt;br /&gt;
    expect{click_link &amp;quot;Your work&amp;quot;}.to raise_error(Capybara::ElementNotFound)&lt;br /&gt;
&lt;br /&gt;
  end&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
2. Spec to ensure a review can be done only before the deadline&lt;br /&gt;
&lt;br /&gt;
File: spec/features/post_deadline_review_submission_spec.rb&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
it &amp;quot;should not be able to review work after deadline&amp;quot; do&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    # The spec is written to reproduce the following bug: the &amp;quot;Others' work&amp;quot; link stays open after the deadline has passed&lt;br /&gt;
&lt;br /&gt;
    user = User.find_by(name: &amp;quot;student2065&amp;quot;)&lt;br /&gt;
    stub_current_user(user, user.role.name, user.role)&lt;br /&gt;
    visit 'student_task/view?id=1'&lt;br /&gt;
&lt;br /&gt;
    # the page should have content, but after deadline passes it is displayed as gray&lt;br /&gt;
    # but there should not be any link attached to it&lt;br /&gt;
    expect(page).to have_content &amp;quot;Others' work&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    # this is the bug, even after deadline has passed, the link is still present&lt;br /&gt;
    # the ui comment in file views/student_task/view.html.erb says&lt;br /&gt;
    # &amp;lt;!--Akshay: Fix Issue 1218 - this link is disabled if assignment does not require any peer reviews--&amp;gt;&lt;br /&gt;
    # But the link seems to be open even after deadline passed.&lt;br /&gt;
    # Screenshot attached as part of wiki for E1975, Fall 2019&lt;br /&gt;
    expect(page).to have_link(&amp;quot;Others' work&amp;quot;, href: &amp;quot;/student_review/list?id=1&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
  end&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
The test scenario reproduces the following bug: at times the review link remains open even after the deadline has passed. The bug is shown in the following screenshot for Program 1.&lt;br /&gt;
&lt;br /&gt;
[[File:ReviewBug.png]]&lt;br /&gt;
&lt;br /&gt;
3. Score calculation&lt;br /&gt;
&lt;br /&gt;
Score calculation for assignments under review is part of models/assignment.rb. It is handled by the &amp;quot;scores&amp;quot; method in this file, and testing is done for both varying (multipart) and non-varying rubrics.&lt;br /&gt;
The corresponding spec is written in spec/models/assignment_spec.rb.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
describe '#scores' do&lt;br /&gt;
    context 'when assignment is varying rubric by round assignment' do&lt;br /&gt;
      it 'calculates scores in each round of each team in current assignment' do&lt;br /&gt;
        allow(participant).to receive(:scores).with(review1: [question]).and_return(98)&lt;br /&gt;
        allow(assignment).to receive(:varying_rubrics_by_round?).and_return(true)&lt;br /&gt;
        allow(assignment).to receive(:num_review_rounds).and_return(1)&lt;br /&gt;
        allow(ReviewResponseMap).to receive(:get_responses_for_team_round).with(team, 1).and_return([response])&lt;br /&gt;
        allow(Answer).to receive(:compute_scores).with([response], [question]).and_return(max: 95, min: 88, avg: 90)&lt;br /&gt;
        expect(assignment.scores(review1: [question]).inspect).to eq(&amp;quot;{:participants=&amp;gt;{:\&amp;quot;1\&amp;quot;=&amp;gt;98}, :teams=&amp;gt;{:\&amp;quot;0\&amp;quot;=&amp;gt;{:team=&amp;gt;#&amp;lt;AssignmentTeam id: 1, &amp;quot;\&lt;br /&gt;
          &amp;quot;name: \&amp;quot;no team\&amp;quot;, parent_id: 1, type: \&amp;quot;AssignmentTeam\&amp;quot;, comments_for_advertisement: nil, advertise_for_partner: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;submitted_hyperlinks: \&amp;quot;---\\n- https://www.expertiza.ncsu.edu\&amp;quot;, directory_num: 0, grade_for_submission: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;comment_for_submission: nil&amp;gt;, :scores=&amp;gt;{:max=&amp;gt;95, :min=&amp;gt;88, :avg=&amp;gt;90.0}}}}&amp;quot;)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    context 'when assignment is not varying rubric by round assignment' do&lt;br /&gt;
      it 'calculates scores of each team in current assignment' do&lt;br /&gt;
        allow(participant).to receive(:scores).with(review: [question]).and_return(98)&lt;br /&gt;
        allow(assignment).to receive(:varying_rubrics_by_round?).and_return(false)&lt;br /&gt;
        allow(ReviewResponseMap).to receive(:get_assessments_for).with(team).and_return([response])&lt;br /&gt;
        allow(Answer).to receive(:compute_scores).with([response], [question]).and_return(max: 95, min: 88, avg: 90)&lt;br /&gt;
        expect(assignment.scores(review: [question]).inspect).to eq(&amp;quot;{:participants=&amp;gt;{:\&amp;quot;1\&amp;quot;=&amp;gt;98}, :teams=&amp;gt;{:\&amp;quot;0\&amp;quot;=&amp;gt;{:team=&amp;gt;#&amp;lt;AssignmentTeam id: 1, &amp;quot;\&lt;br /&gt;
          &amp;quot;name: \&amp;quot;no team\&amp;quot;, parent_id: 1, type: \&amp;quot;AssignmentTeam\&amp;quot;, comments_for_advertisement: nil, advertise_for_partner: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;submitted_hyperlinks: \&amp;quot;---\\n- https://www.expertiza.ncsu.edu\&amp;quot;, directory_num: 0, grade_for_submission: nil, &amp;quot;\&lt;br /&gt;
          &amp;quot;comment_for_submission: nil&amp;gt;, :scores=&amp;gt;{:max=&amp;gt;95, :min=&amp;gt;88, :avg=&amp;gt;90}}}}&amp;quot;)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:E1975_CHECKBOX_1.jpg]]&lt;br /&gt;
&lt;br /&gt;
[[File:Usecasenew.png]]&lt;br /&gt;
&lt;br /&gt;
=='''Test Plan'''==&lt;br /&gt;
&lt;br /&gt;
=====Rspec=====&lt;br /&gt;
&lt;br /&gt;
New test cases will be added to validate the functionalities that are not working as expected. Test cases will also be written for features newly added to the system. Test cases will be added to these files:&lt;br /&gt;
spec/models/on_the_fly_calc_spec.rb&lt;br /&gt;
spec/models/questionnaire_spec.rb&lt;br /&gt;
&lt;br /&gt;
The scenarios covered by the newly introduced test cases are as follows:&lt;br /&gt;
1. Zero-weighted scores&lt;br /&gt;
2. Non-zero-weighted scores&lt;br /&gt;
3. Review update&lt;br /&gt;
&lt;br /&gt;
=====Manual Testing=====&lt;br /&gt;
&lt;br /&gt;
'''Step 1'''. The instructor creates an assignment and assigns zero and non-zero weights to the rubrics.&lt;br /&gt;
'''Step 2'''. The author submits their work in Expertiza.&lt;br /&gt;
'''Step 3'''. Peers examine the submitted work and review it.&lt;br /&gt;
 &lt;br /&gt;
'''Test 1: Zero-weighted scores shown in gray.'''&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
Zero-weighted scores should be displayed in gray to the logged-in user.&lt;br /&gt;
 &lt;br /&gt;
'''Test 2: Information button.'''&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
The information button should explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”).&lt;br /&gt;
 &lt;br /&gt;
'''Step 4'''. The mentor evaluates the final submission and gives grades.&lt;br /&gt;
 &lt;br /&gt;
'''Test 3: Display non-zero-weighted scores'''&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
The score shown for the assignment should be the weighted average of the scores.&lt;br /&gt;
The zero-weighted scores should be replaced by the non-zero-weighted scores.&lt;br /&gt;
 &lt;br /&gt;
'''Step 5''': The author updates the work and re-submits it.&lt;br /&gt;
 &lt;br /&gt;
'''Test 4: Review Update'''&lt;br /&gt;
The review should be reopened, and the reviewer should be able to update the submitted review.&lt;br /&gt;
If the review is not updated, the previous review score is retained.&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131241</id>
		<title>CSC/ECE 517 Fall 2019 - E1975. Generalize Review Versioning</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131241"/>
		<updated>2019-12-15T21:40:35Z</updated>

		<summary type="html">&lt;p&gt;Atrived: /* Review Update */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== '''Introduction''' ==&lt;br /&gt;
Expertiza is a peer-review based learning platform in which work-products by students are evaluated and scored by peer-review of other students. Review process occurs in stages. After submission of initial feedback for a work-product by the peer-group, students are allowed time to incorporate the comments and improve their work. This second version of the work product is reviewed again and scores are given. This process might be repeated again and the average score from the last review stage is considered as the final score for that work-product. This incremental development of work-products and progressive learning is the fundamental concept underlying the Expertiza system.&lt;br /&gt;
&lt;br /&gt;
=====Purpose=====&lt;br /&gt;
The system currently is designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and automatically remove scores of reviews that are not redone. The system also triggers an email to reviewer when there is a change in the submission. The scores are calculated on the fly based on the rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Scope=====&lt;br /&gt;
This project will enhance the existing scoring and peer review functionality and the underlying implementation along with the testing of the existing implementation. Testing of the score calculation, display for the weighted and non-weighted scores based on the combination of the rubrics are part of this project. This project also includes an additional feature for the author to notify the reviewers for the change in the submission.&lt;br /&gt;
&lt;br /&gt;
=====Background=====&lt;br /&gt;
The system currently is designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and automatically remove scores of reviews that are not redone. The scores are calculated based on the rubrics defined in the system. There’s already an existing implementation for the computation of score but it doesn’t work for multipart rubric&lt;br /&gt;
&lt;br /&gt;
=====Standards=====&lt;br /&gt;
The new standards that will be adhered to in this system are those given for the rails framework, and the code should follow rails and object oriented design principles. Any newly added code will adhere to the guidelines mentioned below:&lt;br /&gt;
&lt;br /&gt;
● https://github.com/rubocop-hq/rails-style-guide &amp;lt;br&amp;gt;&lt;br /&gt;
● http://wiki.expertiza.ncsu.edu/index.php/Object-Oriented_Design_and_Programming​.&lt;br /&gt;
&lt;br /&gt;
== '''Problem Statement''' ==&lt;br /&gt;
=====Problem=====&lt;br /&gt;
The system currently is designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and automatically remove scores of reviews that are not redone. The functionality needs to be tested. It works in some cases, but we are suspicious that it is not entirely correct.&lt;br /&gt;
&lt;br /&gt;
'''Profile Email Preference''':&lt;br /&gt;
User in expertiza has liberty to control what kind of emails they should receive. As the reviewers will receive an email for the change in submission. This can be controlled by selecting an option (When someone else submits work I am assigned to review) on the profile page of the user. As the checkboxes are far away on the right corner of the screen, the position of checkboxes need to be changed to make it more user friendly. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Scores for students in Expertiza can be based on a combination of rubrics: review rubrics for each round of review, author feedback, and teammate review. In CSC/ECE 517, we don’t really use the student-assigned scores to grade projects, but the peer-review score shown to the student is based entirely on the second round of review&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is how scores are calculated:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1. If only zero-weighted scores are available (here, in the first round of review), the average of all zero-weighted scores should be shown in grey font. An information button should explain why the scores are shown in grey (“The reviews submitted so far do not count toward the author’s final grade.”):&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is::&amp;lt;br&amp;gt;&lt;br /&gt;
    80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
&lt;br /&gt;
3. If a review is submitted, and then the author(s) update the submission ​before the end of the current round​, it will reopen the review, and then the reviewer can go in and update the submitted review. ​'''However, the previous review score that was given in the current round will count until the reviewer updates the submitted review'''. '''This functionality doesn’t work for multipart rubrics.''' &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The system also doesn’t generate emails to reviewers in case a new submission is made by the author without changing submission parameters like submission URL. Eg: A new submission is made by making a new deployment. But the deployment URL doesn’t change. In this case as well, without changing the current flow the author should have an option to notify all the reviewers to revisit the submission.&lt;br /&gt;
&lt;br /&gt;
=====Interface Requirement=====&lt;br /&gt;
The overall user interface would remain the same as of the current system. Since this is a modification to the current submission page, there will be a new addition to the submission page. On the submission page, there will be a new button to notify reviewers when there is a submission which doesn’t contain a change to submission parameters like submission URL, but the deployment has been updated on the same URL. Other than that the interface remains the same for the score page, only functionalities will be tested for multi-part rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Quality Requirements=====&lt;br /&gt;
The interface changes are minimal and very easy to adapt. The new button will be clearly visible and is supposed to have a clear text to interpret its functionality. The proposed changes should not affect the current system functionality in any way. The solution is only intended to enrich the project features.&lt;br /&gt;
&lt;br /&gt;
=====Performance Requirements=====&lt;br /&gt;
Adding test for multi-part rubric functionality should not cause any overhead in the performance of the system. Sending emails through “Notify All Reviewers” button should affect the system minimally in terms of performance. The only overhead is a database query to get all the reviewers. A good database design should incorporate such a query smoothly.&lt;br /&gt;
&lt;br /&gt;
=====Assumptions=====&lt;br /&gt;
The functionality should work in case the assignment rubric is not multi-part.&lt;br /&gt;
&lt;br /&gt;
== '''Proposed Solution''' ==&lt;br /&gt;
=====Zero-weighted scores=====&lt;br /&gt;
If only zero-weighted scores are available in the first round of review, the average of all zero-weighted scores will be shown in a gray font. An information button will explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade”)&lt;br /&gt;
&lt;br /&gt;
Inside compute_total_score method in on_the_fly_calc.rb module, we'll check if the weighted score exists, if not then we'll calculate the non weighted score. We'll also have to add a method in the questionnaire model to get non weighted score. Few changes are required for the UI to display non-weighted score in grey with information icon and non weighted score in black. &lt;br /&gt;
&lt;br /&gt;
Possible files to be modified. &lt;br /&gt;
&lt;br /&gt;
models/on_the_fly_calc.rb&amp;lt;br&amp;gt;&lt;br /&gt;
models/questionnaire.rb&amp;lt;br&amp;gt;&lt;br /&gt;
models/view_team.html.erb&lt;br /&gt;
&lt;br /&gt;
===== Non-zero weighted scores=====&lt;br /&gt;
If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is 80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
&lt;br /&gt;
We'll first write the test cases by adding test cases to the '''spec/models/assignment_spec.rb''' &lt;br /&gt;
&lt;br /&gt;
Also, we'll test this manually by computing the score and if there's a bug then below are the possible files to be modified. &lt;br /&gt;
&lt;br /&gt;
spec/models/assignment_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Review Update=====&lt;br /&gt;
If a review is submitted, and then the author(s) update the submission ​before the end of the current round​, it will reopen the review, and then the reviewer can go in and update the submitted review. However, the previous review score that was given in the current round will count until the reviewer updates the submitted review. However, the previous review score that was given in the current round will count until the reviewer updates the submitted review.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;This feature can be broken down into the following sub-cases:&lt;br /&gt;
&amp;lt;br&amp;gt;1. A submission can only be made before deadline&lt;br /&gt;
&amp;lt;br&amp;gt;2. A review can edit a review only before deadline&lt;br /&gt;
&amp;lt;br&amp;gt;3. Score calculation should be only done for latest submitted review&lt;br /&gt;
&lt;br /&gt;
Subcases:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
1. Spec to ensure submission can't be made after the deadline&lt;br /&gt;
&lt;br /&gt;
File : spec/features/post_deadline_review_submission_spec.rb&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
it &amp;quot;should not allow submission after deadline&amp;quot; do&lt;br /&gt;
&lt;br /&gt;
    user = User.find_by(name: &amp;quot;student2065&amp;quot;)&lt;br /&gt;
    stub_current_user(user, user.role.name, user.role)&lt;br /&gt;
&lt;br /&gt;
    # goto student_task page, which has link to &amp;quot;Your work&amp;quot;&lt;br /&gt;
    visit 'student_task/view?id=1'&lt;br /&gt;
&lt;br /&gt;
    # the page will have text &amp;quot;Your work&amp;quot; but will be grayed&lt;br /&gt;
    expect(page).to have_content &amp;quot;Your work&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    # the page will not have link to content &amp;quot;Your work&amp;quot;&lt;br /&gt;
    expect{click_link &amp;quot;Your work&amp;quot;}.to raise_error(Capybara::ElementNotFound)&lt;br /&gt;
&lt;br /&gt;
  end&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
2. Spec to ensure review can be done only before deadline&lt;br /&gt;
&lt;br /&gt;
File : spec/features/post_deadline_review_submission_spec.rb&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
it &amp;quot;should not be able to review work after deadline&amp;quot; do&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    # The spec is written to reproduce following bug. &amp;quot;Others' work&amp;quot; link open after deadline passed&lt;br /&gt;
&lt;br /&gt;
    user = User.find_by(name: &amp;quot;student2065&amp;quot;)&lt;br /&gt;
    stub_current_user(user, user.role.name, user.role)&lt;br /&gt;
    visit 'student_task/view?id=1'&lt;br /&gt;
&lt;br /&gt;
    # the page should have content, but after deadline passes it is displayed as gray&lt;br /&gt;
    # but there should not be any link attached to it&lt;br /&gt;
    expect(page).to have_content &amp;quot;Others' work&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    # this is the bug, even after deadline has passed, the link is still present&lt;br /&gt;
    # the ui comment in file views/student_task/view.html.erb says&lt;br /&gt;
    # &amp;lt;!--Akshay: Fix Issue 1218 - this link is disabled if assignment does not require any peer reviews--&amp;gt;&lt;br /&gt;
    # But the link seems to be open even after deadline passed.&lt;br /&gt;
    # Screenshot attached as part of wiki for E1975, Fall 2019&lt;br /&gt;
    expect(page).to have_link(&amp;quot;Others' work&amp;quot;, &amp;quot;/student_review/list?id=1&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
  end&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
The test scenario is able to produce the following bug. At times the review link is open even after the deadline has passed. The bug is also shown/reproduced in the following screenshot for Program 1.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:ReviewBug.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:E1975_CHECKBOX_1.jpg]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Usecasenew.png]]&lt;br /&gt;
&lt;br /&gt;
=='''Test Plan'''==&lt;br /&gt;
&lt;br /&gt;
=====Rspec=====&lt;br /&gt;
&lt;br /&gt;
New test cases will be added to test and validate the functionalities which are not working as expected. Also the test cases for features that will be newly added to the system, will also be written. Test cases will be added to these files:&lt;br /&gt;
spec/models/on_the_fly_calc_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
spec/models/questionnaire_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The scenarios covered by the newly introduced test cases are as follows: &amp;lt;br&amp;gt;&lt;br /&gt;
1. Zero weighted scores &amp;lt;br&amp;gt;&lt;br /&gt;
2. Non-zero weighted scores &amp;lt;br&amp;gt;&lt;br /&gt;
3. Review update &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Manual Testing=====&lt;br /&gt;
&lt;br /&gt;
'''Step 1'''. The instructor will create an assignment and assign the zero and non-zero weights to the rubrics.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 2'''. The author submits the work done in expertiza.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 3'''. Peers check the submitted work and give reviews for it.&amp;lt;br&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
'''Test 1: Zero-weighted scores shown in gray.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
Zero-weighted scores should be displayed in gray to the logged-in user.&lt;br /&gt;
 &lt;br /&gt;
'''Test 2: Information button.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
The information button should explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”)&lt;br /&gt;
 &lt;br /&gt;
'''Step 4'''. The mentor evaluates the final submission and gives grades.&lt;br /&gt;
 &lt;br /&gt;
'''Test 3: Display non-zero weighted scores.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
The score shown for the assignment should be the weighted average of the scores.&lt;br /&gt;
The zero-weighted scores have to be replaced with non-zero weighted scores.&lt;br /&gt;
 &lt;br /&gt;
'''Step 5'''. The author updates the work and resubmits it.&lt;br /&gt;
 &lt;br /&gt;
'''Test 4: Review update.'''&amp;lt;br&amp;gt;&lt;br /&gt;
The review should be reopened, and the reviewer should be able to update the submitted review.&lt;br /&gt;
If the review is not updated, the previous review score has to be retained.&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131240</id>
		<title>CSC/ECE 517 Fall 2019 - E1975. Generalize Review Versioning</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131240"/>
		<updated>2019-12-15T21:40:00Z</updated>

		<summary type="html">&lt;p&gt;Atrived: /* Review Update */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== '''Introduction''' ==&lt;br /&gt;
Expertiza is a peer-review-based learning platform in which students' work products are evaluated and scored through peer review by other students. The review process occurs in stages. After the peer group submits initial feedback on a work product, students are given time to incorporate the comments and improve their work. The second version of the work product is then reviewed and scored again. This process may be repeated, and the average score from the last review stage is taken as the final score for that work product. This incremental development of work products and progressive learning is the fundamental concept underlying the Expertiza system.&lt;br /&gt;
&lt;br /&gt;
=====Purpose=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove the scores of reviews that are not redone. The system also triggers an email to the reviewer when there is a change in the submission. Scores are calculated on the fly based on the rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Scope=====&lt;br /&gt;
This project will enhance the existing scoring and peer-review functionality and its underlying implementation, along with testing of the existing implementation. Testing of score calculation and of the display of weighted and non-weighted scores, based on the combination of rubrics, is part of this project. The project also includes an additional feature that lets the author notify reviewers of a change in the submission.&lt;br /&gt;
&lt;br /&gt;
=====Background=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove the scores of reviews that are not redone. Scores are calculated based on the rubrics defined in the system. There is already an existing implementation for score computation, but it doesn’t work for multipart rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Standards=====&lt;br /&gt;
The standards adhered to in this system are those given for the Rails framework, and the code should follow Rails and object-oriented design principles. Any newly added code will adhere to the guidelines below:&lt;br /&gt;
&lt;br /&gt;
● https://github.com/rubocop-hq/rails-style-guide &amp;lt;br&amp;gt;&lt;br /&gt;
● http://wiki.expertiza.ncsu.edu/index.php/Object-Oriented_Design_and_Programming&lt;br /&gt;
&lt;br /&gt;
== '''Problem Statement''' ==&lt;br /&gt;
=====Problem=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove the scores of reviews that are not redone. This functionality needs to be tested: it works in some cases, but we suspect that it is not entirely correct.&lt;br /&gt;
&lt;br /&gt;
'''Profile Email Preference''':&lt;br /&gt;
Users in Expertiza are free to control the kinds of emails they receive. Since reviewers will receive an email when a submission changes, this can be controlled by selecting an option (“When someone else submits work I am assigned to review”) on the user’s profile page. Because the checkboxes sit far away in the right corner of the screen, their position needs to be changed to make the page more user-friendly.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Scores for students in Expertiza can be based on a combination of rubrics: review rubrics for each round of review, author feedback, and teammate review. In CSC/ECE 517, we don’t really use the student-assigned scores to grade projects, but the peer-review score shown to the student is based entirely on the second round of review.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is how scores are calculated:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1. If only zero-weighted scores are available (here, in the first round of review), the average of all zero-weighted scores should be shown in grey font. An information button should explain why the scores are shown in grey (“The reviews submitted so far do not count toward the author’s final grade.”):&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is:&amp;lt;br&amp;gt;&lt;br /&gt;
    80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
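The weighted-average rule above can be sketched in plain Ruby (the hash of rubric averages and weights below is illustrative, not Expertiza's actual data structure):

```ruby
# Each rubric contributes its average score (as a fraction) times its weight.
# Values mirror the example: Round 2 reviews average 80% at weight 90%,
# author feedback is 100% at weight 10%.
scores = {
  round2_review:   { average: 0.80, weight: 0.90 },
  author_feedback: { average: 1.00, weight: 0.10 }
}

# Weighted average over all contributing rubrics.
overall = scores.values.sum { |s| s[:average] * s[:weight] }

puts format('Overall score: %.0f%%', overall * 100)  # Overall score: 82%
```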
&lt;br /&gt;
3. If a review is submitted, and then the author(s) update the submission before the end of the current round, it will reopen the review, and then the reviewer can go in and update the submitted review. '''However, the previous review score that was given in the current round will count until the reviewer updates the submitted review'''. '''This functionality doesn’t work for multipart rubrics.''' &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The system also doesn’t generate emails to reviewers when the author makes a new submission without changing submission parameters such as the submission URL; for example, a new deployment is made but the deployment URL doesn’t change. In this case as well, without changing the current flow, the author should have an option to notify all the reviewers to revisit the submission.&lt;br /&gt;
&lt;br /&gt;
=====Interface Requirement=====&lt;br /&gt;
The overall user interface remains the same as in the current system, with one addition to the submission page: a new button to notify reviewers when a submission doesn’t change submission parameters such as the submission URL, but the deployment has been updated at the same URL. Other than that, the interface for the score page remains the same; only its functionality will be tested for multi-part rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Quality Requirements=====&lt;br /&gt;
The interface changes are minimal and easy to adopt. The new button will be clearly visible and will carry clear text explaining its function. The proposed changes should not affect current system functionality in any way; the solution is only intended to enrich the project’s features.&lt;br /&gt;
&lt;br /&gt;
=====Performance Requirements=====&lt;br /&gt;
Adding tests for multi-part rubric functionality should not cause any performance overhead in the system. Sending emails through the “Notify All Reviewers” button should affect performance minimally; the only overhead is a database query to fetch all the reviewers, which a good database design should handle smoothly.&lt;br /&gt;
&lt;br /&gt;
=====Assumptions=====&lt;br /&gt;
The functionality is assumed to work when the assignment rubric is not multi-part.&lt;br /&gt;
&lt;br /&gt;
== '''Proposed Solution''' ==&lt;br /&gt;
=====Zero-weighted scores=====&lt;br /&gt;
If only zero-weighted scores are available in the first round of review, the average of all zero-weighted scores will be shown in a gray font. An information button will explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade”)&lt;br /&gt;
&lt;br /&gt;
Inside the compute_total_score method in the on_the_fly_calc.rb module, we'll check whether a weighted score exists; if not, we'll calculate the non-weighted score. We'll also have to add a method to the questionnaire model to get the non-weighted score. A few UI changes are required to display the non-weighted score in gray with an information icon and the weighted score in black.&lt;br /&gt;
&lt;br /&gt;
Possible files to be modified. &lt;br /&gt;
&lt;br /&gt;
models/on_the_fly_calc.rb&amp;lt;br&amp;gt;&lt;br /&gt;
models/questionnaire.rb&amp;lt;br&amp;gt;&lt;br /&gt;
models/view_team.html.erb&lt;br /&gt;
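As a rough sketch of the fallback described above (the data shapes and the provisional flag are hypothetical, not Expertiza's actual implementation), the calculation would prefer weighted scores and fall back to a plain average of zero-weighted ones:

```ruby
# Hypothetical sketch: use the weighted total when any rubric carries a
# non-zero weight; otherwise average the zero-weighted scores and flag the
# result so the UI can render it in gray with an information icon.
def compute_total_score(rubric_scores)
  weighted = rubric_scores.select { |s| s[:weight] > 0 }
  if weighted.any?
    total = weighted.sum { |s| s[:score] * s[:weight] }
    { score: total, provisional: false }
  else
    avg = rubric_scores.sum { |s| s[:score] } / rubric_scores.size.to_f
    { score: avg, provisional: true }
  end
end

# Only zero-weighted round 1 reviews exist: plain average, shown in gray.
compute_total_score([{ score: 75, weight: 0 }, { score: 85, weight: 0 }])
```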
&lt;br /&gt;
===== Non-zero weighted scores=====&lt;br /&gt;
If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is 80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
&lt;br /&gt;
We'll first write the test cases, adding them to '''spec/models/assignment_spec.rb'''.&lt;br /&gt;
&lt;br /&gt;
We'll also test this manually by computing the score; if there's a bug, the files below are the ones likely to be modified.&lt;br /&gt;
&lt;br /&gt;
spec/models/assignment_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Review Update=====&lt;br /&gt;
If a review is submitted, and then the author(s) update the submission before the end of the current round, it will reopen the review, and then the reviewer can go in and update the submitted review. However, the previous review score that was given in the current round will count until the reviewer updates the submitted review.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;This feature can be broken down into the following sub-cases:&lt;br /&gt;
&amp;lt;br&amp;gt;1. A submission can only be made before the deadline&lt;br /&gt;
&amp;lt;br&amp;gt;2. A reviewer can edit a review only before the deadline&lt;br /&gt;
&amp;lt;br&amp;gt;3. Score calculation should only be done for the latest submitted review&lt;br /&gt;
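Sub-case 3 can be illustrated with a small plain-Ruby sketch (the review records here are hypothetical): for each reviewer, only the most recently submitted review version counts toward the score.

```ruby
# Hypothetical review versions: a reviewer may have several versions in a
# round, but only the latest *submitted* version should count.
reviews = [
  { reviewer: 'student1', version: 1, score: 70, submitted: true  },
  { reviewer: 'student1', version: 2, score: 90, submitted: false }, # reopened, never resubmitted
  { reviewer: 'student2', version: 1, score: 80, submitted: true  }
]

# Keep submitted versions, then take the highest version per reviewer.
counted = reviews.select { |r| r[:submitted] }
                 .group_by { |r| r[:reviewer] }
                 .map { |_, vs| vs.max_by { |r| r[:version] } }

average = counted.sum { |r| r[:score] } / counted.size.to_f
puts average  # 75.0 -- student1's unsubmitted v2 is ignored
```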
&lt;br /&gt;
Specs for each sub-case:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
1. Spec to ensure submission can't be made after the deadline&lt;br /&gt;
&lt;br /&gt;
File : spec/features/post_deadline_review_submission_spec.rb&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
it &amp;quot;should not allow submission after deadline&amp;quot; do&lt;br /&gt;
&lt;br /&gt;
    user = User.find_by(name: &amp;quot;student2065&amp;quot;)&lt;br /&gt;
    stub_current_user(user, user.role.name, user.role)&lt;br /&gt;
&lt;br /&gt;
    # goto student_task page, which has link to &amp;quot;Your work&amp;quot;&lt;br /&gt;
    visit '/student_task/view?id=1'&lt;br /&gt;
&lt;br /&gt;
    # the page will have text &amp;quot;Your work&amp;quot; but will be grayed&lt;br /&gt;
    expect(page).to have_content &amp;quot;Your work&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    # the page will not have link to content &amp;quot;Your work&amp;quot;&lt;br /&gt;
    expect{click_link &amp;quot;Your work&amp;quot;}.to raise_error(Capybara::ElementNotFound)&lt;br /&gt;
&lt;br /&gt;
  end&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
2. Spec to ensure review can be done only before deadline&lt;br /&gt;
&lt;br /&gt;
File : spec/features/post_deadline_review_submission_spec.rb&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
it &amp;quot;should not be able to review work after deadline&amp;quot; do&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    # The spec is written to reproduce following bug. &amp;quot;Others' work&amp;quot; link open after deadline passed&lt;br /&gt;
&lt;br /&gt;
    user = User.find_by(name: &amp;quot;student2065&amp;quot;)&lt;br /&gt;
    stub_current_user(user, user.role.name, user.role)&lt;br /&gt;
    visit '/student_task/view?id=1'&lt;br /&gt;
&lt;br /&gt;
    # the page should have content, but after deadline passes it is displayed as gray&lt;br /&gt;
    # but there should not be any link attached to it&lt;br /&gt;
    expect(page).to have_content &amp;quot;Others' work&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    # this is the bug, even after deadline has passed, the link is still present&lt;br /&gt;
    # the ui comment in file views/student_task/view.html.erb says&lt;br /&gt;
    # &amp;lt;!--Akshay: Fix Issue 1218 - this link is disabled if assignment does not require any peer reviews--&amp;gt;&lt;br /&gt;
    # But the link seems to be open even after deadline passed.&lt;br /&gt;
    # Screenshot attached as part of wiki for E1975, Fall 2019&lt;br /&gt;
    expect(page).to have_link(&amp;quot;Others' work&amp;quot;, href: &amp;quot;/student_review/list?id=1&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
  end&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
The test scenario reproduces the following bug: at times the review link remains open even after the deadline has passed. The bug is also shown in the following screenshot for Program 1.&lt;br /&gt;
&lt;br /&gt;
[[File:ReviewBug.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:E1975_CHECKBOX_1.jpg]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Usecasenew.png]]&lt;br /&gt;
&lt;br /&gt;
=='''Test Plan'''==&lt;br /&gt;
&lt;br /&gt;
=====Rspec=====&lt;br /&gt;
&lt;br /&gt;
New test cases will be added to validate the functionalities that are not working as expected, along with test cases for features newly added to the system. Test cases will be added to these files:&lt;br /&gt;
spec/models/on_the_fly_calc_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
spec/models/questionnaire_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The scenarios covered by the newly introduced test cases are as follows: &amp;lt;br&amp;gt;&lt;br /&gt;
1. Zero weighted scores &amp;lt;br&amp;gt;&lt;br /&gt;
2. Non-zero weighted scores &amp;lt;br&amp;gt;&lt;br /&gt;
3. Review update &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Manual Testing=====&lt;br /&gt;
&lt;br /&gt;
'''Step 1'''. The instructor will create an assignment and assign zero and non-zero weights to the rubrics.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 2'''. The author submits the work in Expertiza.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 3'''. Peers check the submitted work and review it.&amp;lt;br&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
'''Test 1: Zero-weighted scores shown in gray.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
Zero-weighted scores should be displayed in gray to the logged-in user.&lt;br /&gt;
 &lt;br /&gt;
'''Test 2: Information button.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
The information button should explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”)&lt;br /&gt;
 &lt;br /&gt;
'''Step 4'''. The mentor evaluates the final submission and gives grades.&lt;br /&gt;
 &lt;br /&gt;
'''Test 3: Display non-zero weighted scores.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
The score shown for the assignment should be the weighted average of the scores.&lt;br /&gt;
The zero-weighted scores have to be replaced with non-zero weighted scores.&lt;br /&gt;
 &lt;br /&gt;
'''Step 5'''. The author updates the work and resubmits it.&lt;br /&gt;
 &lt;br /&gt;
'''Test 4: Review update.'''&amp;lt;br&amp;gt;&lt;br /&gt;
The review should be reopened, and the reviewer should be able to update the submitted review.&lt;br /&gt;
If the review is not updated, the previous review score has to be retained.&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:ReviewBug.png&amp;diff=131239</id>
		<title>File:ReviewBug.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:ReviewBug.png&amp;diff=131239"/>
		<updated>2019-12-15T21:38:39Z</updated>

		<summary type="html">&lt;p&gt;Atrived: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131238</id>
		<title>CSC/ECE 517 Fall 2019 - E1975. Generalize Review Versioning</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131238"/>
		<updated>2019-12-15T21:37:41Z</updated>

		<summary type="html">&lt;p&gt;Atrived: /* Review Update */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== '''Introduction''' ==&lt;br /&gt;
Expertiza is a peer-review-based learning platform in which students' work products are evaluated and scored through peer review by other students. The review process occurs in stages. After the peer group submits initial feedback on a work product, students are given time to incorporate the comments and improve their work. The second version of the work product is then reviewed and scored again. This process may be repeated, and the average score from the last review stage is taken as the final score for that work product. This incremental development of work products and progressive learning is the fundamental concept underlying the Expertiza system.&lt;br /&gt;
&lt;br /&gt;
=====Purpose=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove the scores of reviews that are not redone. The system also triggers an email to the reviewer when there is a change in the submission. Scores are calculated on the fly based on the rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Scope=====&lt;br /&gt;
This project will enhance the existing scoring and peer-review functionality and its underlying implementation, along with testing of the existing implementation. Testing of score calculation and of the display of weighted and non-weighted scores, based on the combination of rubrics, is part of this project. The project also includes an additional feature that lets the author notify reviewers of a change in the submission.&lt;br /&gt;
&lt;br /&gt;
=====Background=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove the scores of reviews that are not redone. Scores are calculated based on the rubrics defined in the system. There is already an existing implementation for score computation, but it doesn’t work for multipart rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Standards=====&lt;br /&gt;
The standards adhered to in this system are those given for the Rails framework, and the code should follow Rails and object-oriented design principles. Any newly added code will adhere to the guidelines below:&lt;br /&gt;
&lt;br /&gt;
● https://github.com/rubocop-hq/rails-style-guide &amp;lt;br&amp;gt;&lt;br /&gt;
● http://wiki.expertiza.ncsu.edu/index.php/Object-Oriented_Design_and_Programming&lt;br /&gt;
&lt;br /&gt;
== '''Problem Statement''' ==&lt;br /&gt;
=====Problem=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove the scores of reviews that are not redone. This functionality needs to be tested: it works in some cases, but we suspect that it is not entirely correct.&lt;br /&gt;
&lt;br /&gt;
'''Profile Email Preference''':&lt;br /&gt;
Users in Expertiza are free to control the kinds of emails they receive. Since reviewers will receive an email when a submission changes, this can be controlled by selecting an option (“When someone else submits work I am assigned to review”) on the user’s profile page. Because the checkboxes sit far away in the right corner of the screen, their position needs to be changed to make the page more user-friendly.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Scores for students in Expertiza can be based on a combination of rubrics: review rubrics for each round of review, author feedback, and teammate review. In CSC/ECE 517, we don’t really use the student-assigned scores to grade projects, but the peer-review score shown to the student is based entirely on the second round of review.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is how scores are calculated:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1. If only zero-weighted scores are available (here, in the first round of review), the average of all zero-weighted scores should be shown in grey font. An information button should explain why the scores are shown in grey (“The reviews submitted so far do not count toward the author’s final grade.”):&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is:&amp;lt;br&amp;gt;&lt;br /&gt;
    80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
&lt;br /&gt;
3. If a review is submitted, and then the author(s) update the submission before the end of the current round, it will reopen the review, and then the reviewer can go in and update the submitted review. '''However, the previous review score that was given in the current round will count until the reviewer updates the submitted review'''. '''This functionality doesn’t work for multipart rubrics.''' &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The system also doesn’t generate emails to reviewers when the author makes a new submission without changing submission parameters such as the submission URL; for example, a new deployment is made but the deployment URL doesn’t change. In this case as well, without changing the current flow, the author should have an option to notify all the reviewers to revisit the submission.&lt;br /&gt;
&lt;br /&gt;
=====Interface Requirement=====&lt;br /&gt;
The overall user interface remains the same as in the current system, with one addition to the submission page: a new button to notify reviewers when a submission doesn’t change submission parameters such as the submission URL, but the deployment has been updated at the same URL. Other than that, the interface for the score page remains the same; only its functionality will be tested for multi-part rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Quality Requirements=====&lt;br /&gt;
The interface changes are minimal and easy to adopt. The new button will be clearly visible and will carry clear text explaining its function. The proposed changes should not affect current system functionality in any way; the solution is only intended to enrich the project’s features.&lt;br /&gt;
&lt;br /&gt;
=====Performance Requirements=====&lt;br /&gt;
Adding tests for multi-part rubric functionality should not cause any performance overhead in the system. Sending emails through the “Notify All Reviewers” button should affect performance minimally; the only overhead is a database query to fetch all the reviewers, which a good database design should handle smoothly.&lt;br /&gt;
&lt;br /&gt;
=====Assumptions=====&lt;br /&gt;
The functionality is assumed to work when the assignment rubric is not multi-part.&lt;br /&gt;
&lt;br /&gt;
== '''Proposed Solution''' ==&lt;br /&gt;
=====Zero-weighted scores=====&lt;br /&gt;
If only zero-weighted scores are available in the first round of review, the average of all zero-weighted scores will be shown in a gray font. An information button will explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade”)&lt;br /&gt;
&lt;br /&gt;
Inside the compute_total_score method in the on_the_fly_calc.rb module, we'll check whether a weighted score exists; if not, we'll calculate the non-weighted score. We'll also have to add a method to the questionnaire model to get the non-weighted score. A few UI changes are required to display the non-weighted score in gray with an information icon and the weighted score in black.&lt;br /&gt;
&lt;br /&gt;
Possible files to be modified. &lt;br /&gt;
&lt;br /&gt;
models/on_the_fly_calc.rb&amp;lt;br&amp;gt;&lt;br /&gt;
models/questionnaire.rb&amp;lt;br&amp;gt;&lt;br /&gt;
models/view_team.html.erb&lt;br /&gt;
&lt;br /&gt;
===== Non-zero weighted scores=====&lt;br /&gt;
If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is 80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
&lt;br /&gt;
We'll first write the test cases, adding them to '''spec/models/assignment_spec.rb'''.&lt;br /&gt;
&lt;br /&gt;
We'll also test this manually by computing the score; if there's a bug, the files below are the ones likely to be modified.&lt;br /&gt;
&lt;br /&gt;
spec/models/assignment_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Review Update=====&lt;br /&gt;
If a review is submitted, and then the author(s) update the submission before the end of the current round, it will reopen the review, and then the reviewer can go in and update the submitted review. However, the previous review score that was given in the current round will count until the reviewer updates the submitted review.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;This feature can be broken down into the following sub-cases:&lt;br /&gt;
&amp;lt;br&amp;gt;1. A submission can only be made before the deadline&lt;br /&gt;
&amp;lt;br&amp;gt;2. A reviewer can edit a review only before the deadline&lt;br /&gt;
&amp;lt;br&amp;gt;3. Score calculation should only be done for the latest submitted review&lt;br /&gt;
&lt;br /&gt;
Specs for each sub-case:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
1. Spec to ensure submission can't be made after the deadline&lt;br /&gt;
&lt;br /&gt;
File : spec/features/post_deadline_review_submission_spec.rb&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
it &amp;quot;should not allow submission after deadline&amp;quot; do&lt;br /&gt;
&lt;br /&gt;
    user = User.find_by(name: &amp;quot;student2065&amp;quot;)&lt;br /&gt;
    stub_current_user(user, user.role.name, user.role)&lt;br /&gt;
&lt;br /&gt;
    # goto student_task page, which has link to &amp;quot;Your work&amp;quot;&lt;br /&gt;
    visit '/student_task/view?id=1'&lt;br /&gt;
&lt;br /&gt;
    # the page will have text &amp;quot;Your work&amp;quot; but will be grayed&lt;br /&gt;
    expect(page).to have_content &amp;quot;Your work&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    # the page will not have link to content &amp;quot;Your work&amp;quot;&lt;br /&gt;
    expect{click_link &amp;quot;Your work&amp;quot;}.to raise_error(Capybara::ElementNotFound)&lt;br /&gt;
&lt;br /&gt;
  end&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
2. Spec to ensure review can be done only before deadline&lt;br /&gt;
&lt;br /&gt;
File : spec/features/post_deadline_review_submission_spec.rb&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
it &amp;quot;should not be able to review work after deadline&amp;quot; do&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    # The spec is written to reproduce following bug. &amp;quot;Others' work&amp;quot; link open after deadline passed&lt;br /&gt;
&lt;br /&gt;
    user = User.find_by(name: &amp;quot;student2065&amp;quot;)&lt;br /&gt;
    stub_current_user(user, user.role.name, user.role)&lt;br /&gt;
    visit '/student_task/view?id=1'&lt;br /&gt;
&lt;br /&gt;
    # the page should have content, but after deadline passes it is displayed as gray&lt;br /&gt;
    # but there should not be any link attached to it&lt;br /&gt;
    expect(page).to have_content &amp;quot;Others' work&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    # this is the bug, even after deadline has passed, the link is still present&lt;br /&gt;
    # the ui comment in file views/student_task/view.html.erb says&lt;br /&gt;
    # &amp;lt;!--Akshay: Fix Issue 1218 - this link is disabled if assignment does not require any peer reviews--&amp;gt;&lt;br /&gt;
    # But the link seems to be open even after deadline passed.&lt;br /&gt;
    # Screenshot attached as part of wiki for E1975, Fall 2019&lt;br /&gt;
    expect(page).to have_link(&amp;quot;Others' work&amp;quot;, href: &amp;quot;/student_review/list?id=1&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
  end&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
The test scenario reproduces the following bug: at times the review link remains open even after the deadline has passed. The bug is also shown in the following screenshot for Program 1.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:E1975_CHECKBOX_1.jpg]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Usecasenew.png]]&lt;br /&gt;
&lt;br /&gt;
=='''Test Plan'''==&lt;br /&gt;
&lt;br /&gt;
=====Rspec=====&lt;br /&gt;
&lt;br /&gt;
New test cases will be added to validate the functionalities that are not working as expected, along with test cases for features newly added to the system. Test cases will be added to these files:&lt;br /&gt;
spec/models/on_the_fly_calc_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
spec/models/questionnaire_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The scenarios covered by the newly introduced test cases are as follows: &amp;lt;br&amp;gt;&lt;br /&gt;
1. Zero weighted scores &amp;lt;br&amp;gt;&lt;br /&gt;
2. Non-zero weighted scores &amp;lt;br&amp;gt;&lt;br /&gt;
3. Review update &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Manual Testing=====&lt;br /&gt;
&lt;br /&gt;
'''Step 1'''. The instructor will create an assignment and assign the zero and non-zero weights to the rubrics.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 2'''. The author submits the work done in expertiza.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 3'''. Peers check the submitted work and give reviews for it.&amp;lt;br&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
'''Test 1: Zero weighted scores shown in gray.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
Zero-weighted scores should be displayed in gray to the logged-in user.&lt;br /&gt;
 &lt;br /&gt;
'''Test 2: Information button.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
The information button should explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”).&lt;br /&gt;
 &lt;br /&gt;
'''Step 4'''. The mentor evaluates the final submission and gives grades.&lt;br /&gt;
 &lt;br /&gt;
'''Test 3: Display non-zero weighted scores'''&amp;lt;br&amp;gt;&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
The scores shown for the assignment should be the weighted average of the scores.&lt;br /&gt;
The zero-weighted scores have to be replaced with non-zero-weighted scores.&lt;br /&gt;
 &lt;br /&gt;
'''Step 5''': The author updates the work and re-submits it.&lt;br /&gt;
 &lt;br /&gt;
'''Test 4: Review Update'''&amp;lt;br&amp;gt;&lt;br /&gt;
The review should be reopened, and the reviewer should be able to update the submitted review.&lt;br /&gt;
If the review is not updated, the previous review score has to be retained.&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131230</id>
		<title>CSC/ECE 517 Fall 2019 - E1975. Generalize Review Versioning</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131230"/>
		<updated>2019-12-15T21:18:39Z</updated>

		<summary type="html">&lt;p&gt;Atrived: /* Non-zero weighted scores */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== '''Introduction''' ==&lt;br /&gt;
Expertiza is a peer-review-based learning platform in which students' work products are evaluated and scored through peer review by other students. The review process occurs in stages. After the peer group submits initial feedback on a work product, students are allowed time to incorporate the comments and improve their work. This second version of the work product is reviewed again and scores are given. The process may be repeated, and the average score from the last review stage is taken as the final score for that work product. This incremental development of work products and progressive learning is the fundamental concept underlying the Expertiza system.&lt;br /&gt;
&lt;br /&gt;
=====Purpose=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove scores of reviews that are not redone. The system also triggers an email to the reviewer when there is a change in the submission. The scores are calculated on the fly based on the rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Scope=====&lt;br /&gt;
This project will enhance the existing scoring and peer-review functionality and its underlying implementation, along with testing of the existing implementation. Testing the score calculation and the display of weighted and non-weighted scores based on the combination of rubrics is part of this project. The project also adds a feature that lets the author notify the reviewers of a change in the submission.&lt;br /&gt;
&lt;br /&gt;
=====Background=====&lt;br /&gt;
The system currently is designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and automatically remove scores of reviews that are not redone. The scores are calculated based on the rubrics defined in the system. There’s already an existing implementation for the computation of scores, but it doesn’t work for multipart rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Standards=====&lt;br /&gt;
The new standards that will be adhered to in this system are those given for the rails framework, and the code should follow rails and object oriented design principles. Any newly added code will adhere to the guidelines mentioned below:&lt;br /&gt;
&lt;br /&gt;
● https://github.com/rubocop-hq/rails-style-guide &amp;lt;br&amp;gt;&lt;br /&gt;
● http://wiki.expertiza.ncsu.edu/index.php/Object-Oriented_Design_and_Programming&lt;br /&gt;
&lt;br /&gt;
== '''Problem Statement''' ==&lt;br /&gt;
=====Problem=====&lt;br /&gt;
The system currently is designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and automatically remove scores of reviews that are not redone. The functionality needs to be tested. It works in some cases, but we are suspicious that it is not entirely correct.&lt;br /&gt;
&lt;br /&gt;
'''Profile Email Preference''':&lt;br /&gt;
Users in Expertiza can control what kinds of email they receive. Reviewers will receive an email when there is a change in a submission they are assigned to review; this can be controlled by selecting an option (“When someone else submits work I am assigned to review”) on the user's profile page. Because the checkboxes sit far away in the right corner of the screen, their position needs to be changed to make the page more user-friendly. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Scores for students in Expertiza can be based on a combination of rubrics: review rubrics for each round of review, author feedback, and teammate review. In CSC/ECE 517, we don’t really use the student-assigned scores to grade projects, but the peer-review score shown to the student is based entirely on the second round of review.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is how scores are calculated:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1. If only zero-weighted scores are available (here, in the first round of review), the average of all zero-weighted scores should be shown in grey font. An information button should explain why the scores are shown in grey (“The reviews submitted so far do not count toward the author’s final grade.”).&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is:&amp;lt;br&amp;gt;&lt;br /&gt;
    80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
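The calculation above can be sketched in plain Ruby (an illustrative sketch only; the method name and the score/weight pair representation are assumptions, not Expertiza's actual API):&lt;br /&gt;

```ruby
# Illustrative sketch of the weighted-average rule: each pair holds a
# rubric's average score (in percent) and that rubric's weight.
# The names here are hypothetical, not Expertiza's actual code.
def weighted_total(scores_with_weights)
  scores_with_weights.sum { |score, weight| score * weight }
end

# Round 2 reviews averaging 80% at weight 90%, author feedback 100% at weight 10%:
weighted_total([[80.0, 0.90], [100.0, 0.10]])  # => 82.0
```

With weights that sum to 1, this reproduces the 82% figure from the example.&lt;br /&gt;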
&lt;br /&gt;
3. If a review is submitted, and then the author(s) update the submission before the end of the current round, it will reopen the review, and then the reviewer can go in and update the submitted review. '''However, the previous review score that was given in the current round will count until the reviewer updates the submitted review'''. '''This functionality doesn’t work for multipart rubrics.''' &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The system also doesn’t generate emails to reviewers when the author makes a new submission without changing submission parameters such as the submission URL, e.g. a new submission made by redeploying to the same URL. In this case as well, without changing the current flow, the author should have an option to notify all the reviewers to revisit the submission.&lt;br /&gt;
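A minimal sketch of how such a “notify all reviewers” action could respect the profile email preference described above (Reviewer and the email_on_submission flag are hypothetical stand-ins for Expertiza's actual models):&lt;br /&gt;

```ruby
# Hypothetical model: a reviewer opts in to submission emails via the
# profile checkbox ("When someone else submits work I am assigned to review").
Reviewer = Struct.new(:name, :email_on_submission)

# Select only the reviewers who opted in; a mailer would then be invoked
# once per returned reviewer.
def reviewers_to_notify(reviewers)
  reviewers.select(&:email_on_submission)
end

team = [Reviewer.new("alice", true), Reviewer.new("bob", false)]
reviewers_to_notify(team).map(&:name)  # => ["alice"]
```

Filtering before mailing keeps the new button consistent with each user's existing email preference.&lt;br /&gt;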
&lt;br /&gt;
=====Interface Requirement=====&lt;br /&gt;
The overall user interface will remain the same as in the current system, with one addition to the submission page: a new button to notify reviewers when a new submission doesn’t change submission parameters such as the submission URL, but the deployment has been updated at the same URL. Other than that, the interface for the score page remains the same; only its functionality will be tested for multi-part rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Quality Requirements=====&lt;br /&gt;
The interface changes are minimal and very easy to adapt to. The new button will be clearly visible and will have clear text explaining its functionality. The proposed changes should not affect the current system functionality in any way; the solution is only intended to enrich the project's features.&lt;br /&gt;
&lt;br /&gt;
=====Performance Requirements=====&lt;br /&gt;
Adding tests for multi-part rubric functionality should not cause any overhead in the performance of the system. Sending emails through the “Notify All Reviewers” button should affect system performance minimally; the only overhead is a database query to get all the reviewers, which a good database design should handle smoothly.&lt;br /&gt;
&lt;br /&gt;
=====Assumptions=====&lt;br /&gt;
The functionality is assumed to work when the assignment rubric is not multi-part.&lt;br /&gt;
&lt;br /&gt;
== '''Proposed Solution''' ==&lt;br /&gt;
=====Zero-weighted scores=====&lt;br /&gt;
If only zero-weighted scores are available in the first round of review, the average of all zero-weighted scores will be shown in a gray font. An information button will explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade”)&lt;br /&gt;
&lt;br /&gt;
Inside the compute_total_score method in the on_the_fly_calc.rb module, we'll check whether a weighted score exists; if not, we'll calculate the non-weighted score. We'll also have to add a method to the questionnaire model to get the non-weighted score. A few UI changes are required to display the non-weighted score in gray with an information icon and the weighted score in black. &lt;br /&gt;
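The fallback logic described above can be sketched in plain Ruby (an assumption-laden sketch, not the actual on_the_fly_calc.rb code):&lt;br /&gt;

```ruby
# Sketch of the proposed fallback: if any rubric carries a non-zero weight,
# return the weighted total; otherwise average the zero-weighted scores and
# mark the result provisional so the view can render it in gray with an
# information icon. All names are illustrative, not Expertiza's actual code.
def total_score(scores_with_weights)
  weighted = scores_with_weights.select { |_, weight| weight > 0 }
  if weighted.any?
    { score: weighted.sum { |score, weight| score * weight }, provisional: false }
  else
    average = scores_with_weights.sum { |score, _| score } / scores_with_weights.size.to_f
    { score: average, provisional: true }  # provisional => shown in gray
  end
end

# Only first-round, zero-weighted reviews so far:
total_score([[80.0, 0.0], [90.0, 0.0]])  # => {:score=>85.0, :provisional=>true}
```

The provisional flag is one possible way for the view layer to decide between the gray and black renderings.&lt;br /&gt;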
&lt;br /&gt;
Possible files to be modified: &lt;br /&gt;
&lt;br /&gt;
models/on_the_fly_calc.rb&amp;lt;br&amp;gt;&lt;br /&gt;
models/questionnaire.rb&amp;lt;br&amp;gt;&lt;br /&gt;
models/view_team.html.erb&lt;br /&gt;
&lt;br /&gt;
===== Non-zero weighted scores=====&lt;br /&gt;
If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is 80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
&lt;br /&gt;
We'll first write the test cases by adding test cases to the '''spec/models/assignment_spec.rb''' &lt;br /&gt;
&lt;br /&gt;
Also, we'll test this manually by computing the score; if there's a bug, the possible files to be modified are listed below. &lt;br /&gt;
&lt;br /&gt;
spec/models/assignment_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Review Update=====&lt;br /&gt;
If a review is submitted, and then the author(s) update the submission before the end of the current round, it will reopen the review, and then the reviewer can go in and update the submitted review. However, the previous review score that was given in the current round will count until the reviewer updates the submitted review.&lt;br /&gt;
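The retention rule can be illustrated with a small sketch (ReviewVersion and counting_score are hypothetical names; Expertiza's actual schema may differ):&lt;br /&gt;

```ruby
# Each review accumulates versions across reopenings; only the latest
# *submitted* version counts toward the score. A reopened review whose new
# version is still an unsubmitted draft therefore keeps counting the
# previously submitted score.
ReviewVersion = Struct.new(:score, :submitted)

def counting_score(versions)
  last_submitted = versions.select(&:submitted).last
  last_submitted && last_submitted.score
end

# Resubmission reopened the review; the new version is still a draft,
# so the earlier score of 80 keeps counting:
counting_score([ReviewVersion.new(80, true), ReviewVersion.new(95, false)])  # => 80
```

Once the reviewer submits the updated version, its score replaces the old one.&lt;br /&gt;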
&lt;br /&gt;
&lt;br /&gt;
[[File:E1975_CHECKBOX_1.jpg]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Usecasenew.png]]&lt;br /&gt;
&lt;br /&gt;
=='''Test Plan'''==&lt;br /&gt;
&lt;br /&gt;
=====Rspec=====&lt;br /&gt;
&lt;br /&gt;
New test cases will be added to test and validate the functionalities that are not working as expected. Test cases will also be written for features newly added to the system. Test cases will be added to these files:&lt;br /&gt;
spec/models/on_the_fly_calc_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
spec/models/questionnaire_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The scenarios covered by the newly introduced test cases are as follows: &amp;lt;br&amp;gt;&lt;br /&gt;
1. Zero weighted scores &amp;lt;br&amp;gt;&lt;br /&gt;
2. Non-zero weighted scores &amp;lt;br&amp;gt;&lt;br /&gt;
3. Review update &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Manual Testing=====&lt;br /&gt;
&lt;br /&gt;
'''Step 1'''. The instructor will create an assignment and assign the zero and non-zero weights to the rubrics.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 2'''. The author submits the work done in expertiza.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 3'''. Peers check the submitted work and give reviews for it.&amp;lt;br&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
'''Test 1: Zero weighted scores shown in gray.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
Zero-weighted scores should be displayed in gray to the logged-in user.&lt;br /&gt;
 &lt;br /&gt;
'''Test 2: Information button.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
The information button should explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”).&lt;br /&gt;
 &lt;br /&gt;
'''Step 4'''. The mentor evaluates the final submission and gives grades.&lt;br /&gt;
 &lt;br /&gt;
'''Test 3: Display non-zero weighted scores'''&amp;lt;br&amp;gt;&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
The scores shown for the assignment should be the weighted average of the scores.&lt;br /&gt;
The zero-weighted scores have to be replaced with non-zero-weighted scores.&lt;br /&gt;
 &lt;br /&gt;
'''Step 5''': The author updates the work and re-submits it.&lt;br /&gt;
 &lt;br /&gt;
'''Test 4: Review Update'''&amp;lt;br&amp;gt;&lt;br /&gt;
The review should be reopened, and the reviewer should be able to update the submitted review.&lt;br /&gt;
If the review is not updated, the previous review score has to be retained.&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131229</id>
		<title>CSC/ECE 517 Fall 2019 - E1975. Generalize Review Versioning</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=131229"/>
		<updated>2019-12-15T20:33:04Z</updated>

		<summary type="html">&lt;p&gt;Atrived: /* Review Update */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== '''Introduction''' ==&lt;br /&gt;
Expertiza is a peer-review-based learning platform in which students' work products are evaluated and scored through peer review by other students. The review process occurs in stages. After the peer group submits initial feedback on a work product, students are allowed time to incorporate the comments and improve their work. This second version of the work product is reviewed again and scores are given. The process may be repeated, and the average score from the last review stage is taken as the final score for that work product. This incremental development of work products and progressive learning is the fundamental concept underlying the Expertiza system.&lt;br /&gt;
&lt;br /&gt;
=====Purpose=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove scores of reviews that are not redone. The system also triggers an email to the reviewer when there is a change in the submission. The scores are calculated on the fly based on the rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Scope=====&lt;br /&gt;
This project will enhance the existing scoring and peer-review functionality and its underlying implementation, along with testing of the existing implementation. Testing the score calculation and the display of weighted and non-weighted scores based on the combination of rubrics is part of this project. The project also adds a feature that lets the author notify the reviewers of a change in the submission.&lt;br /&gt;
&lt;br /&gt;
=====Background=====&lt;br /&gt;
The system currently is designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and automatically remove scores of reviews that are not redone. The scores are calculated based on the rubrics defined in the system. There’s already an existing implementation for the computation of scores, but it doesn’t work for multipart rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Standards=====&lt;br /&gt;
The new standards that will be adhered to in this system are those given for the rails framework, and the code should follow rails and object oriented design principles. Any newly added code will adhere to the guidelines mentioned below:&lt;br /&gt;
&lt;br /&gt;
● https://github.com/rubocop-hq/rails-style-guide &amp;lt;br&amp;gt;&lt;br /&gt;
● http://wiki.expertiza.ncsu.edu/index.php/Object-Oriented_Design_and_Programming&lt;br /&gt;
&lt;br /&gt;
== '''Problem Statement''' ==&lt;br /&gt;
=====Problem=====&lt;br /&gt;
The system currently is designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and automatically remove scores of reviews that are not redone. The functionality needs to be tested. It works in some cases, but we are suspicious that it is not entirely correct.&lt;br /&gt;
&lt;br /&gt;
'''Profile Email Preference''':&lt;br /&gt;
Users in Expertiza can control what kinds of email they receive. Reviewers will receive an email when there is a change in a submission they are assigned to review; this can be controlled by selecting an option (“When someone else submits work I am assigned to review”) on the user's profile page. Because the checkboxes sit far away in the right corner of the screen, their position needs to be changed to make the page more user-friendly. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Scores for students in Expertiza can be based on a combination of rubrics: review rubrics for each round of review, author feedback, and teammate review. In CSC/ECE 517, we don’t really use the student-assigned scores to grade projects, but the peer-review score shown to the student is based entirely on the second round of review.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is how scores are calculated:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1. If only zero-weighted scores are available (here, in the first round of review), the average of all zero-weighted scores should be shown in grey font. An information button should explain why the scores are shown in grey (“The reviews submitted so far do not count toward the author’s final grade.”).&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is:&amp;lt;br&amp;gt;&lt;br /&gt;
    80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
&lt;br /&gt;
3. If a review is submitted, and then the author(s) update the submission before the end of the current round, it will reopen the review, and then the reviewer can go in and update the submitted review. '''However, the previous review score that was given in the current round will count until the reviewer updates the submitted review'''. '''This functionality doesn’t work for multipart rubrics.''' &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The system also doesn’t generate emails to reviewers when the author makes a new submission without changing submission parameters such as the submission URL, e.g. a new submission made by redeploying to the same URL. In this case as well, without changing the current flow, the author should have an option to notify all the reviewers to revisit the submission.&lt;br /&gt;
&lt;br /&gt;
=====Interface Requirement=====&lt;br /&gt;
The overall user interface will remain the same as in the current system, with one addition to the submission page: a new button to notify reviewers when a new submission doesn’t change submission parameters such as the submission URL, but the deployment has been updated at the same URL. Other than that, the interface for the score page remains the same; only its functionality will be tested for multi-part rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Quality Requirements=====&lt;br /&gt;
The interface changes are minimal and very easy to adapt to. The new button will be clearly visible and will have clear text explaining its functionality. The proposed changes should not affect the current system functionality in any way; the solution is only intended to enrich the project's features.&lt;br /&gt;
&lt;br /&gt;
=====Performance Requirements=====&lt;br /&gt;
Adding tests for multi-part rubric functionality should not cause any overhead in the performance of the system. Sending emails through the “Notify All Reviewers” button should affect system performance minimally; the only overhead is a database query to get all the reviewers, which a good database design should handle smoothly.&lt;br /&gt;
&lt;br /&gt;
=====Assumptions=====&lt;br /&gt;
The functionality is assumed to work when the assignment rubric is not multi-part.&lt;br /&gt;
&lt;br /&gt;
== '''Proposed Solution''' ==&lt;br /&gt;
=====Zero-weighted scores=====&lt;br /&gt;
If only zero-weighted scores are available in the first round of review, the average of all zero-weighted scores will be shown in a gray font. An information button will explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade”)&lt;br /&gt;
&lt;br /&gt;
Inside the compute_total_score method in the on_the_fly_calc.rb module, we'll check whether a weighted score exists; if not, we'll calculate the non-weighted score. We'll also have to add a method to the questionnaire model to get the non-weighted score. A few UI changes are required to display the non-weighted score in gray with an information icon and the weighted score in black. &lt;br /&gt;
&lt;br /&gt;
Possible files to be modified: &lt;br /&gt;
&lt;br /&gt;
models/on_the_fly_calc.rb&amp;lt;br&amp;gt;&lt;br /&gt;
models/questionnaire.rb&amp;lt;br&amp;gt;&lt;br /&gt;
models/view_team.html.erb&lt;br /&gt;
&lt;br /&gt;
===== Non-zero weighted scores=====&lt;br /&gt;
If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is 80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
&lt;br /&gt;
We'll first write the test cases by adding test cases to the '''spec/models/on_the_fly_calc_spec.rb''' &lt;br /&gt;
&lt;br /&gt;
Also, we'll test this manually by computing the score; if there's a bug, the possible files to be modified are listed below. &lt;br /&gt;
&lt;br /&gt;
models/on_the_fly_calc.rb&amp;lt;br&amp;gt;&lt;br /&gt;
models/questionnaire.rb&lt;br /&gt;
&lt;br /&gt;
=====Review Update=====&lt;br /&gt;
If a review is submitted, and then the author(s) update the submission before the end of the current round, it will reopen the review, and then the reviewer can go in and update the submitted review. However, the previous review score that was given in the current round will count until the reviewer updates the submitted review.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:E1975_CHECKBOX_1.jpg]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Usecasenew.png]]&lt;br /&gt;
&lt;br /&gt;
=='''Test Plan'''==&lt;br /&gt;
&lt;br /&gt;
=====Rspec=====&lt;br /&gt;
&lt;br /&gt;
New test cases will be added to test and validate the functionalities that are not working as expected. Test cases will also be written for features newly added to the system. Test cases will be added to these files:&lt;br /&gt;
spec/models/on_the_fly_calc_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
spec/models/questionnaire_spec.rb&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The scenarios covered by the newly introduced test cases are as follows: &amp;lt;br&amp;gt;&lt;br /&gt;
1. Zero weighted scores &amp;lt;br&amp;gt;&lt;br /&gt;
2. Non-zero weighted scores &amp;lt;br&amp;gt;&lt;br /&gt;
3. Review update &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Manual Testing=====&lt;br /&gt;
&lt;br /&gt;
'''Step 1'''. The instructor will create an assignment and assign the zero and non-zero weights to the rubrics.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 2'''. The author submits the work done in expertiza.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Step 3'''. Peers check the submitted work and give reviews for it.&amp;lt;br&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
'''Test 1: Zero weighted scores shown in gray.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
Zero-weighted scores should be displayed in gray to the logged-in user.&lt;br /&gt;
 &lt;br /&gt;
'''Test 2: Information button.'''&amp;lt;br&amp;gt;&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
The information button should explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”).&lt;br /&gt;
 &lt;br /&gt;
'''Step 4'''. The mentor evaluates the final submission and gives grades.&lt;br /&gt;
 &lt;br /&gt;
'''Test 3: Display non-zero weighted scores'''&amp;lt;br&amp;gt;&lt;br /&gt;
Log in as an author in Expertiza and go to the scores page to view the feedback.&lt;br /&gt;
The scores shown for the assignment should be the weighted average of the scores.&lt;br /&gt;
The zero-weighted scores have to be replaced with non-zero-weighted scores.&lt;br /&gt;
 &lt;br /&gt;
'''Step 5''': The author updates the work and re-submits it.&lt;br /&gt;
 &lt;br /&gt;
'''Test 4: Review Update'''&amp;lt;br&amp;gt;&lt;br /&gt;
The review should be reopened, and the reviewer should be able to update the submitted review.&lt;br /&gt;
If the review is not updated, the previous review score has to be retained.&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=128694</id>
		<title>CSC/ECE 517 Fall 2019 - E1975. Generalize Review Versioning</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=128694"/>
		<updated>2019-11-11T20:48:47Z</updated>

		<summary type="html">&lt;p&gt;Atrived: /* Test Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== '''About Expertiza''' ==&lt;br /&gt;
&lt;br /&gt;
Expertiza is an open source project based on the Ruby on Rails framework, and the code is available on GitHub. Expertiza allows the instructor to create new assignments as well as edit new or existing assignments. Instructors can also create a list of topics the students can sign up for and specify deadlines for completion of various tasks. Students can form teams in Expertiza to work on various projects and assignments, as well as peer review other students' submissions. Expertiza supports submissions across various document types, including URLs and wiki pages.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== '''Introduction''' ==&lt;br /&gt;
Expertiza is a peer-review-based learning platform in which students' work products are evaluated and scored through peer review by other students. The review process occurs in stages. After the peer group submits initial feedback on a work product, students are allowed time to incorporate the comments and improve their work. This second version of the work product is reviewed again and scores are given. The process may be repeated, and the average score from the last review stage is taken as the final score for that work product. This incremental development of work products and progressive learning is the fundamental concept underlying the Expertiza system.&lt;br /&gt;
&lt;br /&gt;
=====Purpose=====&lt;br /&gt;
The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove scores of reviews that are not redone. The system also triggers an email to the reviewer when there is a change in the submission. The scores are calculated on the fly based on the rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Scope=====&lt;br /&gt;
This project will enhance the existing scoring and peer-review functionality and its underlying implementation, along with testing of the existing implementation. Testing the score calculation and the display of weighted and non-weighted scores based on the combination of rubrics is part of this project. The project also adds a feature that lets the author notify the reviewers of a change in the submission.&lt;br /&gt;
&lt;br /&gt;
=====Background=====&lt;br /&gt;
The system currently is designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and automatically remove scores of reviews that are not redone. The scores are calculated based on the rubrics defined in the system. There’s already an existing implementation for the computation of scores, but it doesn’t work for multipart rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Documentation=====&lt;br /&gt;
The documents that will be generated for this project are the design document, the actual files that we edit or create, and a ReadMe that explains what we did and how to use the modified product.&lt;br /&gt;
&lt;br /&gt;
=====Standards=====&lt;br /&gt;
The new standards that will be adhered to in this system are those given for the rails framework, and the code should follow rails and object oriented design principles. Any newly added code will adhere to the guidelines mentioned below:&lt;br /&gt;
&lt;br /&gt;
● https://github.com/rubocop-hq/rails-style-guide &amp;lt;br&amp;gt;&lt;br /&gt;
● http://wiki.expertiza.ncsu.edu/index.php/Object-Oriented_Design_and_Programming&lt;br /&gt;
&lt;br /&gt;
== '''System Requirements''' ==&lt;br /&gt;
=====Problem=====&lt;br /&gt;
The system currently is designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and automatically remove scores of reviews that are not redone. The functionality needs to be tested. It works in some cases, but we are suspicious that it is not entirely correct.&lt;br /&gt;
&lt;br /&gt;
'''Profile Email Preference''':&lt;br /&gt;
Users in Expertiza can control what kinds of email they receive. Reviewers will receive an email when there is a change in a submission; this can be controlled by selecting an option (When someone else submits work I am assigned to review) on the user's profile page. Because the checkboxes sit in the far right corner of the screen, their position needs to be changed to make the page more user-friendly. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Scores for students in Expertiza can be based on a combination of rubrics: review rubrics for each round of review, author feedback, and teammate review. In CSC/ECE 517, we don’t really use the student-assigned scores to grade projects, but the peer-review score shown to the student is based entirely on the second round of review.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is how scores are calculated:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1. If only zero-weighted scores are available (here, in the first round of review), the average of all zero-weighted scores should be shown in gray font. An information button should explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”).&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is:&amp;lt;br&amp;gt;&lt;br /&gt;
    80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
&lt;br /&gt;
3. If a review is submitted, and then the author(s) update the submission before the end of the current round, it will reopen the review, and then the reviewer can go in and update the submitted review. '''However, the previous review score that was given in the current round will count until the reviewer updates the submitted review.''' '''This functionality doesn’t work for multipart rubrics.''' &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The system also doesn’t generate emails to reviewers when the author makes a new submission without changing submission parameters such as the submission URL. For example, a new submission is made by making a new deployment, but the deployment URL doesn’t change. In this case as well, without changing the current flow, the author should have an option to notify all the reviewers to revisit the submission.&lt;br /&gt;
&lt;br /&gt;
=====Interface Requirement=====&lt;br /&gt;
The overall user interface remains the same as in the current system. Since this is a modification to the current submission page, there will be one new addition to that page: a button to notify reviewers when a new submission doesn’t change submission parameters such as the submission URL, but the deployment has been updated at the same URL. Otherwise, the interface of the score page remains the same; only its functionality will be tested for multipart rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Quality Requirements=====&lt;br /&gt;
The interface changes are minimal and easy to adopt. The new button will be clearly visible and will have clear text describing its function. The proposed changes should not affect the current system functionality in any way; the solution is only intended to enrich the project's features.&lt;br /&gt;
&lt;br /&gt;
=====Portability Requirements=====&lt;br /&gt;
All of the changes for this enhancement to Expertiza should not interfere with the platform portability in any way.&lt;br /&gt;
&lt;br /&gt;
=====Performance Requirements=====&lt;br /&gt;
Adding tests for multipart-rubric functionality should not add any overhead to the performance of the system. Sending emails through the “Notify All Reviewers” button should affect performance minimally: the only overhead is a database query to fetch all the reviewers, which a well-designed schema handles smoothly.&lt;br /&gt;
&lt;br /&gt;
=====Assumptions=====&lt;br /&gt;
The functionality should also work when the assignment rubric is not multipart.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== '''Design''' ==&lt;br /&gt;
=====Zero-weighted scores=====&lt;br /&gt;
If only zero-weighted scores are available in the first round of review, the average of all zero-weighted scores will be shown in a gray font. An information button will explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”).&lt;br /&gt;
&lt;br /&gt;
===== Non-zero weighted scores=====&lt;br /&gt;
If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is 80%⨉90% + 100%⨉10% = 82%.&lt;br /&gt;
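This rule can be sketched in plain Ruby. This is only an illustrative sketch of the scoring rule described above; the method name and the hash shape are hypothetical, not Expertiza's actual API.

```ruby
# Sketch of the display rule: zero-weighted scores show as a plain (gray)
# average; once any non-zero-weighted score exists, show the weighted average.
# Names and data shapes are illustrative, not Expertiza's real code.
def overall_score(scores)
  weighted = scores.reject { |s| s[:weight].zero? }
  if weighted.empty?
    # Only zero-weighted scores exist (e.g., first-round reviews):
    # show their plain average, rendered in gray in the UI.
    avg = scores.sum { |s| s[:value] } / scores.size.to_f
    { score: avg, counts_toward_grade: false }
  else
    # Otherwise the shown score is the weighted average.
    total = weighted.sum { |s| s[:value] * s[:weight] }
    { score: total, counts_toward_grade: true }
  end
end

# The example from the text: two Round 2 reviews averaging 80% (weight 90%),
# one author-feedback score of 100% (weight 10%).
round2_avg = (80 + 80) / 2.0
result = overall_score([{ value: round2_avg, weight: 0.9 },
                        { value: 100, weight: 0.1 }])
result[:score].round(2)  # 82.0
```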
&lt;br /&gt;
=====Review Update=====&lt;br /&gt;
If a review is submitted, and the author(s) then update the submission before the end of the current round, the review is reopened and the reviewer can go in and update the submitted review. However, the previous review score given in the current round will count until the reviewer updates the submitted review. This functionality already exists in the system. In some submissions, however, the submitted URLs do not change, so there is no way to let the reviewers know that the author has made changes to the project. We will add a notify button for the author in the UI, which will trigger an email to the reviewers; this reopens the review so the reviewer can update it, and the previous score again counts until the review is updated.&lt;br /&gt;
&lt;br /&gt;
'''Sending an email when the submission parameters don’t change:'''&lt;br /&gt;
&lt;br /&gt;
A new button will be added on the submission page. It will enable the author to notify all the reviewers to revisit the submission. We could also add a custom message to the mail body so that the reviewers know the specific reason why they received this particular mail.&lt;br /&gt;
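As a plain-Ruby sketch of this flow (the function name and the hash-based stand-ins for Expertiza's response-map models are hypothetical, not the project's actual code), the button's action would collect every reviewer mapped to the submission and build one notification per reviewer, appending the author's custom message:

```ruby
# Hypothetical sketch of the "notify all reviewers" action. Hashes with
# :submission_id and :reviewer_email stand in for Expertiza's real models;
# in the real system an ActionMailer would deliver each built message.
def build_reviewer_notifications(submission_id, review_maps, custom_message)
  review_maps
    .select { |m| m[:submission_id] == submission_id }
    .map do |m|
      {
        to: m[:reviewer_email],
        subject: "A submission you are reviewing has been updated",
        body: "The authors updated their submission without changing its " \
              "submission parameters (e.g., the URL). Please revisit it. " +
              custom_message
      }
    end
end

maps = [
  { submission_id: 1, reviewer_email: "reviewer1@example.com" },
  { submission_id: 1, reviewer_email: "reviewer2@example.com" },
  { submission_id: 2, reviewer_email: "reviewer3@example.com" }
]
mails = build_reviewer_notifications(1, maps, "New deployment at the same URL.")
mails.map { |m| m[:to] }  # ["reviewer1@example.com", "reviewer2@example.com"]
```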
&lt;br /&gt;
=====Profile Email Preference=====&lt;br /&gt;
The checkboxes will be moved toward the left. Because the options use a table layout, we will increase the width of the column that contains the checkboxes, specifying the width as a percentage rather than a fixed value so that the column adjusts to the screen size. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:E1975_CHECKBOX_1.jpg]]&lt;br /&gt;
&lt;br /&gt;
=='''Test Plan'''==&lt;br /&gt;
&lt;br /&gt;
=====Rspec=====&lt;br /&gt;
We have added new test cases for the new pieces of code that we added.&lt;br /&gt;
&lt;br /&gt;
=====Test Plan for Email Preferences=====&lt;br /&gt;
&lt;br /&gt;
At present, the email-preferences checkboxes stick to the rightmost border of the page. Following the requirements, the checkboxes will be moved closer to the text fields to make them more user-friendly. The changes can be tested after the code changes.&lt;br /&gt;
&lt;br /&gt;
=====Manual Testing=====&lt;br /&gt;
&lt;br /&gt;
The instructor will create an assignment and assign the zero and non-zero weights to the rubrics.&lt;br /&gt;
&lt;br /&gt;
·   	Log in as an author and go to the scores page to view the feedback.&lt;br /&gt;
&lt;br /&gt;
Test – Zero weighted scores shown in gray.&lt;br /&gt;
 &lt;br /&gt;
·   	Zero weighted scores should be displayed in gray to the user.&lt;br /&gt;
 &lt;br /&gt;
Test – Information button.&lt;br /&gt;
 &lt;br /&gt;
·   	The information button should explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”)&lt;br /&gt;
 &lt;br /&gt;
Test – Display non-zero weighted scores.&lt;br /&gt;
 &lt;br /&gt;
·   	The score shown for the assignment should be the weighted average of the scores, if any non-zero-weighted scores are available.&lt;br /&gt;
·   	The zero-weighted scores have to be replaced with non-zero-weighted scores.&lt;br /&gt;
 &lt;br /&gt;
Test – Update review score.&lt;br /&gt;
·   	If the author updates the submission before the end of the current round, the reviewer should be able to go in and update the submitted review.&lt;br /&gt;
·   	If the review is not updated, the previous review score has to be retained.&lt;br /&gt;
 &lt;br /&gt;
Test – Multipart rubric.&lt;br /&gt;
·   	The functionality should work for multipart rubrics.&lt;br /&gt;
·   	The scores calculated have to be accurate.&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=128690</id>
		<title>CSC/ECE 517 Fall 2019 - E1975. Generalize Review Versioning</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1975._Generalize_Review_Versioning&amp;diff=128690"/>
		<updated>2019-11-11T20:47:10Z</updated>

		<summary type="html">&lt;p&gt;Atrived: /* Manual Testing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== '''About Expertiza''' ==&lt;br /&gt;
&lt;br /&gt;
Expertiza is an open-source project based on the Ruby on Rails framework; the code is available on GitHub. Expertiza allows the instructor to create new assignments as well as edit new or existing assignments. Instructors can also create a list of topics the students can sign up for and specify deadlines for completion of various tasks. Students can form teams in Expertiza to work on various projects and assignments, as well as peer-review other students' submissions. Expertiza supports submissions across various document types, including URLs and wiki pages.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== '''Introduction''' ==&lt;br /&gt;
Expertiza is a peer-review based learning platform in which work-products by students are evaluated and scored by peer-review of other students. Review process occurs in stages. After submission of initial feedback for a work-product by the peer-group, students are allowed time to incorporate the comments and improve their work. This second version of the work product is reviewed again and scores are given. This process might be repeated again and the average score from the last review stage is considered as the final score for that work-product. This incremental development of work-products and progressive learning is the fundamental concept underlying the Expertiza system.&lt;br /&gt;
&lt;br /&gt;
=====Purpose=====&lt;br /&gt;
The system currently is designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and automatically remove scores of reviews that are not redone. The system also triggers an email to reviewer when there is a change in the submission. The scores are calculated on the fly based on the rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Scope=====&lt;br /&gt;
This project will enhance the existing scoring and peer-review functionality and its underlying implementation, and will test the existing implementation. Testing of score calculation and of the display of weighted and non-weighted scores, based on combinations of rubrics, is part of this project. The project also adds a feature that lets an author notify the reviewers of a change in the submission.&lt;br /&gt;
&lt;br /&gt;
=====Background=====&lt;br /&gt;
The system currently is designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove scores of reviews that are not redone. The scores are calculated based on the rubrics defined in the system. An implementation of score computation already exists, but it does not work for multipart rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Documentation=====&lt;br /&gt;
The documents that will be generated for this project are the design document, the actual files that we edit or create, and a ReadMe that explains what we did and how to use the modified product.&lt;br /&gt;
&lt;br /&gt;
=====Standards=====&lt;br /&gt;
The new standards that will be adhered to in this system are those given for the rails framework, and the code should follow rails and object oriented design principles. Any newly added code will adhere to the guidelines mentioned below:&lt;br /&gt;
&lt;br /&gt;
● https://github.com/rubocop-hq/rails-style-guide &amp;lt;br&amp;gt;&lt;br /&gt;
● http://wiki.expertiza.ncsu.edu/index.php/Object-Oriented_Design_and_Programming​.&lt;br /&gt;
&lt;br /&gt;
== '''System Requirements''' ==&lt;br /&gt;
=====Problem=====&lt;br /&gt;
The system currently is designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove scores of reviews that are not redone. This functionality needs to be tested; it works in some cases, but we suspect it is not entirely correct.&lt;br /&gt;
&lt;br /&gt;
'''Profile Email Preference''':&lt;br /&gt;
Users in Expertiza can control what kinds of email they receive. Reviewers will receive an email when there is a change in a submission; this can be controlled by selecting an option (When someone else submits work I am assigned to review) on the user's profile page. Because the checkboxes sit in the far right corner of the screen, their position needs to be changed to make the page more user-friendly. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Scores for students in Expertiza can be based on a combination of rubrics: review rubrics for each round of review, author feedback, and teammate review. In CSC/ECE 517, we don’t really use the student-assigned scores to grade projects, but the peer-review score shown to the student is based entirely on the second round of review&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is how scores are calculated:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1. If only zero-weighted scores are available (here, in the first round of review), the average of all zero-weighted scores should be shown in grey font. An information button should explain why the scores are shown in grey (“The reviews submitted so far do not count toward the author’s final grade.”):&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is:&amp;lt;br&amp;gt;&lt;br /&gt;
    80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
&lt;br /&gt;
3. If a review is submitted, and then the author(s) update the submission before the end of the current round, it will reopen the review, and then the reviewer can go in and update the submitted review. '''However, the previous review score that was given in the current round will count until the reviewer updates the submitted review.''' '''This functionality doesn’t work for multipart rubrics.''' &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The system also doesn’t generate emails to reviewers when the author makes a new submission without changing submission parameters such as the submission URL. For example, a new submission is made by making a new deployment, but the deployment URL doesn’t change. In this case as well, without changing the current flow, the author should have an option to notify all the reviewers to revisit the submission.&lt;br /&gt;
&lt;br /&gt;
=====Interface Requirement=====&lt;br /&gt;
The overall user interface would remain the same as of the current system. Since this is a modification to the current submission page, there will be a new addition to the submission page. On the submission page, there will be a new button to notify reviewers when there is a submission which doesn’t contain a change to submission parameters like submission URL, but the deployment has been updated on the same URL. Other than that the interface remains the same for the score page, only functionalities will be tested for multi-part rubrics.&lt;br /&gt;
&lt;br /&gt;
=====Quality Requirements=====&lt;br /&gt;
The interface changes are minimal and very easy to adapt. The new button will be clearly visible and is supposed to have a clear text to interpret its functionality. The proposed changes should not affect the current system functionality in any way. The solution is only intended to enrich the project features.&lt;br /&gt;
&lt;br /&gt;
=====Portability Requirements=====&lt;br /&gt;
All of the changes for this enhancement to Expertiza should not interfere with the platform portability in any way.&lt;br /&gt;
&lt;br /&gt;
=====Performance Requirements=====&lt;br /&gt;
Adding test for multi-part rubric functionality should not cause any overhead in the performance of the system. Sending emails through “Notify All Reviewers” button should affect the system minimally in terms of performance. The only overhead is a database query to get all the reviewers. A good database design should incorporate such a query smoothly.&lt;br /&gt;
&lt;br /&gt;
=====Assumptions=====&lt;br /&gt;
The functionality should work in case the assignment rubric is not multi-part.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== '''Design''' ==&lt;br /&gt;
=====Zero-weighted scores=====&lt;br /&gt;
If only zero-weighted scores are available in the first round of review, the average of all zero-weighted scores will be shown in a gray font. An information button will explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade”)&lt;br /&gt;
&lt;br /&gt;
===== Non-zero weighted scores=====&lt;br /&gt;
If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is 80%⨉90% + 100%⨉10% = 82%&lt;br /&gt;
&lt;br /&gt;
=====Review Update=====&lt;br /&gt;
If a review has been submitted and the author(s) then update the submission before the end of the current round, the review is reopened and the reviewer can go in and update it. Until the reviewer does so, the review score previously given in the current round continues to count. This behavior already exists in the system. However, for some updates there is no change in the submitted URLs, so reviewers currently have no way of knowing that the authors have changed the project. We will add a notify button to the authors’ UI that triggers an email to the reviewers, reopening the review in the same way.&lt;br /&gt;
&lt;br /&gt;
'''Sending an email when the submission parameters don’t change​:'''&lt;br /&gt;
&lt;br /&gt;
A new button will be added on the submission page. It will enable the author to notify all reviewers to revisit the submission. A custom message can also be included in the mail body so that the reviewers know the specific reason why they received this particular mail.&lt;br /&gt;
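The notification body described above can be sketched as follows. The method name and message wording here are hypothetical illustrations, not Expertiza's actual code.&lt;br /&gt;

```ruby
# Sketch of composing the re-review notification with an optional custom message
# from the authors (hypothetical helper, for illustration only).
def notification_body(assignment_name, custom_message = nil)
  body = "The authors have updated their submission for #{assignment_name}. " \
         "Please revisit the submission and update your review."
  # Append the authors' note only when one was provided.
  body += " Note from the authors: #{custom_message}" if custom_message
  body
end

msg = notification_body("Program 2", "We redeployed to the same URL.")
# msg contains both the standard text and the authors' custom note
```

The custom message lets authors explain cases the system cannot detect, such as a redeployment at an unchanged URL.&lt;br /&gt;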
&lt;br /&gt;
=====Profile Email Preference=====&lt;br /&gt;
The checkboxes are moved to the left, closer to their labels.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:E1975_CHECKBOX_1.jpg]]&lt;br /&gt;
&lt;br /&gt;
=='''Test Plan'''==&lt;br /&gt;
&lt;br /&gt;
=====Rspec=====&lt;br /&gt;
We have added new test cases for the new pieces of code we introduced.&lt;br /&gt;
&lt;br /&gt;
=====Manual Testing=====&lt;br /&gt;
&lt;br /&gt;
Test Plan for Email Preferences&lt;br /&gt;
&lt;br /&gt;
·   	The email preference checkboxes currently stick to the rightmost border of the page. Per the requirements, they will be moved closer to the text fields to make the form more user-friendly.&lt;br /&gt;
These changes can be verified once the code changes are in place.&lt;br /&gt;
&lt;br /&gt;
The instructor will create an assignment and assign the zero and non-zero weights to the rubrics.&lt;br /&gt;
&lt;br /&gt;
·   	Log in as an author and go to the scores page to view the feedback.&lt;br /&gt;
&lt;br /&gt;
Test – Zero weighted scores shown in gray.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
·   	Zero weighted scores should be displayed in gray to the user.&lt;br /&gt;
 &lt;br /&gt;
Test – Information button.&lt;br /&gt;
 &lt;br /&gt;
·   	The information button should explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”)&lt;br /&gt;
 &lt;br /&gt;
Test – Display non-zero weighted scores.&lt;br /&gt;
 &lt;br /&gt;
·   	The scores shown for the assignment should be the weighted average of the scores, if any non-zero weighted scores are available.&lt;br /&gt;
·   	The zero weighted scores have to be replaced with non-zero weighted scores.&lt;br /&gt;
 &lt;br /&gt;
Test – Update review score.&lt;br /&gt;
·   	If the author updates the submission before the end of the current round, the reviewer should be able to go in and update the submitted review.&lt;br /&gt;
·   	If the review is not updated, the previous review score has to be retained.&lt;br /&gt;
 &lt;br /&gt;
Test – Multi-part rubric.&lt;br /&gt;
·   	The functionality should work for multi-part rubrics.&lt;br /&gt;
·   	The scores calculated have to be accurate.&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=127949</id>
		<title>CSC/ECE 517 Fall 2019 - E1940. Improving email notification</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=127949"/>
		<updated>2019-11-07T03:48:25Z</updated>

		<summary type="html">&lt;p&gt;Atrived: /* Issue 3 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=E1940 Improving e-mail notification=&lt;br /&gt;
&lt;br /&gt;
===Brief Introduction===&lt;br /&gt;
 &lt;br /&gt;
* E1940 Improving e-mail notification.&lt;br /&gt;
&lt;br /&gt;
* The forked git repository for this project can be found [https://github.com/dheerajmakam/expertiza.git]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Problem Statement===&lt;br /&gt;
The following tasks were accomplished in this project:&lt;br /&gt;
&lt;br /&gt;
*Issue 1: Send a new-account welcome email to the user when imported from CSV through the assignment page.&lt;br /&gt;
*Issue 2: Don't send email to reviewers for a new submission after the review deadline has passed.&lt;br /&gt;
*Issue 3: Add relevant links to reminder emails.&lt;br /&gt;
&lt;br /&gt;
====Issue 1 (717) - New user email====&lt;br /&gt;
&lt;br /&gt;
'''app/models/assignment_participant.rb'''&lt;br /&gt;
&lt;br /&gt;
* Call the method to send the welcome mail after the user is imported successfully.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    password = user.reset_password&lt;br /&gt;
    MailerHelper.send_mail_to_user(user, &amp;quot;Your Expertiza account has been created.&amp;quot;, &amp;quot;user_welcome&amp;quot;, password).deliver&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Issue 2 (345) - No submission email to reviewer after deadline====&lt;br /&gt;
*'''Before fixing this issue, we had to write the logic to send emails to reviewers on submissions.'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''app/controllers/submitted_content_controller.rb'''&lt;br /&gt;
*Added logic to the '''submit_hyperlink''' method to check the last review due date&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
      # get current date and time&lt;br /&gt;
      cur_date = Time.now&lt;br /&gt;
      # get the last due date for the review&lt;br /&gt;
      max_due_date = DueDate.where(parent_id: @participant.assignment.id,deadline_type_id: 2).maximum(&amp;quot;due_at&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
      # send mail only if the last due date has not passed&lt;br /&gt;
      if cur_date &amp;lt;= max_due_date&lt;br /&gt;
        email_submission_reviewers(@participant)&lt;br /&gt;
      end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
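The deadline gate above can be exercised in isolation with plain Ruby. The helper name below is hypothetical (in Expertiza the check lives inline in submit_hyperlink); it only demonstrates the comparison logic.&lt;br /&gt;

```ruby
require 'time'

# Stand-alone sketch of the deadline check shown above (hypothetical helper).
# Returns true when the current time is at or before the last review due date.
def notify_reviewers?(current_time, last_review_due_at)
  return false if last_review_due_at.nil?  # no review deadline configured
  # Time subtraction yields seconds; a positive difference means the deadline has passed.
  !(current_time - last_review_due_at).positive?
end

due = Time.parse("2019-11-07 23:59:00")
notify_reviewers?(Time.parse("2019-11-01 10:00:00"), due)  # before the deadline: notify
notify_reviewers?(Time.parse("2019-11-08 00:00:00"), due)  # after the deadline: stay silent
```

The nil guard matters in practice: assignments without a review deadline would otherwise raise when compared against Time.now.&lt;br /&gt;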
&lt;br /&gt;
*Function to identify the reviewers and send mails.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def email_submission_reviewers(participant)&lt;br /&gt;
    if participant.reviewers.length != 0&lt;br /&gt;
      participant.reviewers.each do |reviewer|&lt;br /&gt;
        rev_res_map = ReviewResponseMap.where(['reviewer_id = ? and reviewee_id = ?', reviewer.id, participant.team.id]).first&lt;br /&gt;
        all_responses = Response.where(:map_id =&amp;gt; rev_res_map.id).order(&amp;quot;updated_at DESC&amp;quot;).first&lt;br /&gt;
&lt;br /&gt;
        user = User.find(reviewer.user_id)&lt;br /&gt;
&lt;br /&gt;
        if user.email_on_submission?&lt;br /&gt;
          MailerHelper.submission_mail_to_reviewr(user,&lt;br /&gt;
                                             &amp;quot;New submission available to review.&amp;quot;,&lt;br /&gt;
                                             &amp;quot;update&amp;quot;).deliver&lt;br /&gt;
        end&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/helpers/mailer_helper.rb'''&lt;br /&gt;
&lt;br /&gt;
*Helper function to mail reviewers&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def self.submission_mail_to_reviewr(user, subject, mail_partial)&lt;br /&gt;
    Mailer.notify_reviewer_for_new_submission ({&lt;br /&gt;
        to: user.email,&lt;br /&gt;
        subject: subject,&lt;br /&gt;
        body: {&lt;br /&gt;
            user: user,&lt;br /&gt;
            #first_name: ApplicationHelper.get_user_first_name(user),&lt;br /&gt;
            message: &amp;quot;&amp;quot;,&lt;br /&gt;
            partial_name: mail_partial&lt;br /&gt;
        }&lt;br /&gt;
    })&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/mailers/mailer.rb'''&lt;br /&gt;
&lt;br /&gt;
*Mailer function to send the mail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def notify_reviewer_for_new_submission(defn)&lt;br /&gt;
    @partial_name = defn[:body][:partial_name]&lt;br /&gt;
    @user = defn[:body][:user]&lt;br /&gt;
    #@first_name = defn[:body][:first_name]&lt;br /&gt;
    @message = defn[:body][:message]&lt;br /&gt;
    mail(subject: defn[:subject],&lt;br /&gt;
         to: defn[:to])&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/views/mailer/notify_reviewer_for_new_submission.erb'''&lt;br /&gt;
&lt;br /&gt;
*Email template for the mail&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;!DOCTYPE html&amp;gt;&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
&amp;lt;head&amp;gt;&lt;br /&gt;
  &amp;lt;meta content='text/html; charset=UTF-8' http-equiv='Content-Type' /&amp;gt;&lt;br /&gt;
&amp;lt;/head&amp;gt;&lt;br /&gt;
&amp;lt;body&amp;gt;&lt;br /&gt;
&amp;lt;%= render :partial =&amp;gt; 'mailer/partials/'+@partial_name+'_html' %&amp;gt;&lt;br /&gt;
&amp;lt;hr&amp;gt;&lt;br /&gt;
&amp;lt;/body&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Issue 3 (111) - Adding relevant links to reminder emails====&lt;br /&gt;
&lt;br /&gt;
*Modified the '''email_reminder''' function to add this functionality.&lt;br /&gt;
&lt;br /&gt;
'''app/mailers/mail_worker.rb'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
def email_reminder(emails, deadline_type)&lt;br /&gt;
    assignment = Assignment.find(self.assignment_id)&lt;br /&gt;
    subject = &amp;quot;Message regarding #{deadline_type} for assignment #{assignment.name}&amp;quot;&lt;br /&gt;
    body = &amp;quot;This is a reminder to complete #{deadline_type} for assignment #{assignment.name}. &amp;quot;&lt;br /&gt;
&lt;br /&gt;
    emails.each do |mail|&lt;br /&gt;
      user = User.where(email: mail).first&lt;br /&gt;
      # take the first matching participant record; `where` returns a relation, not a record&lt;br /&gt;
      participant_assignment_id = Participant.where(user_id: user.id, parent_id: self.assignment_id).first.id&lt;br /&gt;
&lt;br /&gt;
      link_to_destination = &amp;quot;Please follow the link: http://expertiza.ncsu.edu/student_task/view?id=#{participant_assignment_id}&amp;quot;&lt;br /&gt;
      # build the body per recipient so links do not accumulate across iterations&lt;br /&gt;
      user_body = body + link_to_destination + &amp;quot; Deadline is #{self.due_at}. If you have already done the #{deadline_type}, please ignore this mail.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
      @mail = Mailer.delayed_message(bcc: mail, subject: subject, body: user_body)&lt;br /&gt;
      @mail.deliver_now&lt;br /&gt;
      Rails.logger.info mail&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
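Building one body per recipient, as the loop above intends, can be checked without any mail infrastructure. The helper below is a hypothetical simulation (the participant-id lookup is replaced by a plain argument); only the URL format follows the code above.&lt;br /&gt;

```ruby
# Sketch: compose a per-recipient reminder body so each user receives only
# their own task link (hypothetical helper for illustration).
def reminder_body(deadline_type, assignment_name, participant_id, due_at)
  "This is a reminder to complete #{deadline_type} for assignment #{assignment_name}. " \
  "Please follow the link: http://expertiza.ncsu.edu/student_task/view?id=#{participant_id} " \
  "Deadline is #{due_at}. If you have already done the #{deadline_type}, please ignore this mail."
end

b1 = reminder_body("review", "Program 2", 101, "2019-11-07 23:59")
b2 = reminder_body("review", "Program 2", 202, "2019-11-07 23:59")
# each body carries exactly one recipient-specific link
```

Reusing a single accumulating string across the loop would instead leak every earlier recipient's link into later emails, which is why the body is rebuilt per recipient.&lt;br /&gt;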
&lt;br /&gt;
===Test Plan===&lt;br /&gt;
&lt;br /&gt;
====Issue 1====&lt;br /&gt;
&lt;br /&gt;
Step 1: Navigate to Manage --&amp;gt; Assignment page.&lt;br /&gt;
&lt;br /&gt;
Step 2: Click on add participants for any of the assignments.&lt;br /&gt;
&lt;br /&gt;
Step 3: Click &amp;quot;Import course participants&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Step 4: Choose a csv file to be imported (follow the format given on the website).&lt;br /&gt;
&lt;br /&gt;
Step 5: The users listed in the csv file who do not already exist on Expertiza should receive a new-user email.&lt;br /&gt;
&lt;br /&gt;
Step 6: To check whether the e-mail was received, log in with the following credentials:  username [ 'expertiza.development@gmail.com' ] password [ 'qwer@1234' ].&lt;br /&gt;
&lt;br /&gt;
====Issue 2====&lt;br /&gt;
&lt;br /&gt;
Step 1: Create new assignment [Manage --&amp;gt; Assignment --&amp;gt; + Button (to create new assignment)]&lt;br /&gt;
&lt;br /&gt;
Step 2: Fill the details for the assignments.&lt;br /&gt;
&lt;br /&gt;
Step 3: Navigate to due dates.&lt;br /&gt;
&lt;br /&gt;
Step 4: Change the number of review rounds to 2.&lt;br /&gt;
&lt;br /&gt;
Step 5: Select &amp;quot;Yes&amp;quot; in the dropdown for review allowed during submission and select &amp;quot;Yes&amp;quot; for submission during the review.&lt;br /&gt;
&lt;br /&gt;
Step 6: Add two users to the assignment(author and reviewer).&lt;br /&gt;
&lt;br /&gt;
Step 7: Log in with some user credentials (author credential).&lt;br /&gt;
&lt;br /&gt;
Step 8: Make a new submission to this assignment.&lt;br /&gt;
&lt;br /&gt;
Step 9: Log in with another user (reviewer).&lt;br /&gt;
&lt;br /&gt;
Step 10: Submit a review of the assignment submission.&lt;br /&gt;
&lt;br /&gt;
Step 11: Login as an author again. &lt;br /&gt;
&lt;br /&gt;
Step 12: Edit the submission.&lt;br /&gt;
&lt;br /&gt;
Step 13: After this check the mailbox of the reviewer [development mail for development].&lt;br /&gt;
&lt;br /&gt;
Step 14: Reviewer should get the mail to re-review the work.&lt;br /&gt;
&lt;br /&gt;
Step 15: Change the due date to some date and time which has passed.&lt;br /&gt;
&lt;br /&gt;
Step 16: Now making a new submission from the author account should not send a re-review mail to the reviewer. [Repeat steps 7-15].&lt;br /&gt;
&lt;br /&gt;
====Issue 3====&lt;br /&gt;
&lt;br /&gt;
This test requires a scenario with an approaching deadline. Since the reminder mail is sent by a Sidekiq background job, the work for this issue was to fix the link in the email.&lt;br /&gt;
&lt;br /&gt;
Create a new scenario with an approaching deadline. The reminder email was already being sent; we added the previously missing link directing users to the required page.&lt;br /&gt;
&lt;br /&gt;
===Team Member Details===&lt;br /&gt;
&lt;br /&gt;
Team members :&lt;br /&gt;
* Adarsh Trivedi (atrived)&lt;br /&gt;
* Dheeraj Makam (drmakam)&lt;br /&gt;
* Prachi Sheoran (psheora)&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=127948</id>
		<title>CSC/ECE 517 Fall 2019 - E1940. Improving email notification</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=127948"/>
		<updated>2019-11-07T03:46:20Z</updated>

		<summary type="html">&lt;p&gt;Atrived: /* Issue 2 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=E1940 Improving e-mail notification=&lt;br /&gt;
&lt;br /&gt;
===Brief Introduction===&lt;br /&gt;
 &lt;br /&gt;
* E1940 Improving e-mail notification.&lt;br /&gt;
&lt;br /&gt;
* The forked git repository for this project can be found [https://github.com/dheerajmakam/expertiza.git]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Problem Statement===&lt;br /&gt;
The following tasks were accomplished in this project:&lt;br /&gt;
&lt;br /&gt;
*Issue 1: Send a new-account welcome email to the user when imported from CSV through the assignment page.&lt;br /&gt;
*Issue 2: Don't send email to reviewers for a new submission after the review deadline has passed.&lt;br /&gt;
*Issue 3: Add relevant links to reminder emails.&lt;br /&gt;
&lt;br /&gt;
====Issue 1 (717) - New user email====&lt;br /&gt;
&lt;br /&gt;
'''app/models/assignment_participant.rb'''&lt;br /&gt;
&lt;br /&gt;
* Call the method to send the welcome mail after the user is imported successfully.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    password = user.reset_password&lt;br /&gt;
    MailerHelper.send_mail_to_user(user, &amp;quot;Your Expertiza account has been created.&amp;quot;, &amp;quot;user_welcome&amp;quot;, password).deliver&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Issue 2 (345) - No submission email to reviewer after deadline====&lt;br /&gt;
*'''Before fixing this issue, we had to write the logic to send emails to reviewers on submissions.'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''app/controllers/submitted_content_controller.rb'''&lt;br /&gt;
*Added logic to the '''submit_hyperlink''' method to check the last review due date&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
      # get current date and time&lt;br /&gt;
      cur_date = Time.now&lt;br /&gt;
      # get the last due date for the review&lt;br /&gt;
      max_due_date = DueDate.where(parent_id: @participant.assignment.id,deadline_type_id: 2).maximum(&amp;quot;due_at&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
      # send mail only if the last due date has not passed&lt;br /&gt;
      if cur_date &amp;lt;= max_due_date&lt;br /&gt;
        email_submission_reviewers(@participant)&lt;br /&gt;
      end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Function to identify the reviewers and send mails.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def email_submission_reviewers(participant)&lt;br /&gt;
    if participant.reviewers.length != 0&lt;br /&gt;
      participant.reviewers.each do |reviewer|&lt;br /&gt;
        rev_res_map = ReviewResponseMap.where(['reviewer_id = ? and reviewee_id = ?', reviewer.id, participant.team.id]).first&lt;br /&gt;
        all_responses = Response.where(:map_id =&amp;gt; rev_res_map.id).order(&amp;quot;updated_at DESC&amp;quot;).first&lt;br /&gt;
&lt;br /&gt;
        user = User.find(reviewer.user_id)&lt;br /&gt;
&lt;br /&gt;
        if user.email_on_submission?&lt;br /&gt;
          MailerHelper.submission_mail_to_reviewr(user,&lt;br /&gt;
                                             &amp;quot;New submission available to review.&amp;quot;,&lt;br /&gt;
                                             &amp;quot;update&amp;quot;).deliver&lt;br /&gt;
        end&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/helpers/mailer_helper.rb'''&lt;br /&gt;
&lt;br /&gt;
*Helper function to mail reviewers&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def self.submission_mail_to_reviewr(user, subject, mail_partial)&lt;br /&gt;
    Mailer.notify_reviewer_for_new_submission ({&lt;br /&gt;
        to: user.email,&lt;br /&gt;
        subject: subject,&lt;br /&gt;
        body: {&lt;br /&gt;
            user: user,&lt;br /&gt;
            #first_name: ApplicationHelper.get_user_first_name(user),&lt;br /&gt;
            message: &amp;quot;&amp;quot;,&lt;br /&gt;
            partial_name: mail_partial&lt;br /&gt;
        }&lt;br /&gt;
    })&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/mailers/mailer.rb'''&lt;br /&gt;
&lt;br /&gt;
*Mailer function to send the mail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def notify_reviewer_for_new_submission(defn)&lt;br /&gt;
    @partial_name = defn[:body][:partial_name]&lt;br /&gt;
    @user = defn[:body][:user]&lt;br /&gt;
    #@first_name = defn[:body][:first_name]&lt;br /&gt;
    @message = defn[:body][:message]&lt;br /&gt;
    mail(subject: defn[:subject],&lt;br /&gt;
         to: defn[:to])&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/views/mailer/notify_reviewer_for_new_submission.erb'''&lt;br /&gt;
&lt;br /&gt;
*Email template for the mail&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;!DOCTYPE html&amp;gt;&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
&amp;lt;head&amp;gt;&lt;br /&gt;
  &amp;lt;meta content='text/html; charset=UTF-8' http-equiv='Content-Type' /&amp;gt;&lt;br /&gt;
&amp;lt;/head&amp;gt;&lt;br /&gt;
&amp;lt;body&amp;gt;&lt;br /&gt;
&amp;lt;%= render :partial =&amp;gt; 'mailer/partials/'+@partial_name+'_html' %&amp;gt;&lt;br /&gt;
&amp;lt;hr&amp;gt;&lt;br /&gt;
&amp;lt;/body&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Issue 3 (111) - Adding relevant links to reminder emails====&lt;br /&gt;
&lt;br /&gt;
*Modified the '''email_reminder''' function to add this functionality.&lt;br /&gt;
&lt;br /&gt;
'''app/mailers/mail_worker.rb'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
def email_reminder(emails, deadline_type)&lt;br /&gt;
    assignment = Assignment.find(self.assignment_id)&lt;br /&gt;
    subject = &amp;quot;Message regarding #{deadline_type} for assignment #{assignment.name}&amp;quot;&lt;br /&gt;
    body = &amp;quot;This is a reminder to complete #{deadline_type} for assignment #{assignment.name}. &amp;quot;&lt;br /&gt;
&lt;br /&gt;
    emails.each do |mail|&lt;br /&gt;
      user = User.where(email: mail).first&lt;br /&gt;
      # take the first matching participant record; `where` returns a relation, not a record&lt;br /&gt;
      participant_assignment_id = Participant.where(user_id: user.id, parent_id: self.assignment_id).first.id&lt;br /&gt;
&lt;br /&gt;
      link_to_destination = &amp;quot;Please follow the link: http://expertiza.ncsu.edu/student_task/view?id=#{participant_assignment_id}&amp;quot;&lt;br /&gt;
      # build the body per recipient so links do not accumulate across iterations&lt;br /&gt;
      user_body = body + link_to_destination + &amp;quot; Deadline is #{self.due_at}. If you have already done the #{deadline_type}, please ignore this mail.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
      @mail = Mailer.delayed_message(bcc: mail, subject: subject, body: user_body)&lt;br /&gt;
      @mail.deliver_now&lt;br /&gt;
      Rails.logger.info mail&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Test Plan===&lt;br /&gt;
&lt;br /&gt;
====Issue 1====&lt;br /&gt;
&lt;br /&gt;
Step 1: Navigate to Manage --&amp;gt; Assignment page.&lt;br /&gt;
&lt;br /&gt;
Step 2: Click on add participants for any of the assignments.&lt;br /&gt;
&lt;br /&gt;
Step 3: Click &amp;quot;Import course participants&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Step 4: Choose a csv file to be imported (follow the format given on the website).&lt;br /&gt;
&lt;br /&gt;
Step 5: The users listed in the csv file who do not already exist on Expertiza should receive a new-user email.&lt;br /&gt;
&lt;br /&gt;
Step 6: To check whether the e-mail was received, log in with the following credentials:  username [ 'expertiza.development@gmail.com' ] password [ 'qwer@1234' ].&lt;br /&gt;
&lt;br /&gt;
====Issue 2====&lt;br /&gt;
&lt;br /&gt;
Step 1: Create new assignment [Manage --&amp;gt; Assignment --&amp;gt; + Button (to create new assignment)]&lt;br /&gt;
&lt;br /&gt;
Step 2: Fill the details for the assignments.&lt;br /&gt;
&lt;br /&gt;
Step 3: Navigate to due dates.&lt;br /&gt;
&lt;br /&gt;
Step 4: Change the number of review rounds to 2.&lt;br /&gt;
&lt;br /&gt;
Step 5: Select &amp;quot;Yes&amp;quot; in the dropdown for review allowed during submission and select &amp;quot;Yes&amp;quot; for submission during the review.&lt;br /&gt;
&lt;br /&gt;
Step 6: Add two users to the assignment(author and reviewer).&lt;br /&gt;
&lt;br /&gt;
Step 7: Log in with some user credentials (author credential).&lt;br /&gt;
&lt;br /&gt;
Step 8: Make a new submission to this assignment.&lt;br /&gt;
&lt;br /&gt;
Step 9: Log in with another user (reviewer).&lt;br /&gt;
&lt;br /&gt;
Step 10: Submit a review of the assignment submission.&lt;br /&gt;
&lt;br /&gt;
Step 11: Login as an author again. &lt;br /&gt;
&lt;br /&gt;
Step 12: Edit the submission.&lt;br /&gt;
&lt;br /&gt;
Step 13: After this check the mailbox of the reviewer [development mail for development].&lt;br /&gt;
&lt;br /&gt;
Step 14: Reviewer should get the mail to re-review the work.&lt;br /&gt;
&lt;br /&gt;
Step 15: Change the due date to some date and time which has passed.&lt;br /&gt;
&lt;br /&gt;
Step 16: Now making a new submission from the author account should not send a re-review mail to the reviewer. [Repeat steps 7-15].&lt;br /&gt;
&lt;br /&gt;
====Issue 3====&lt;br /&gt;
&lt;br /&gt;
This test requires a scenario with an approaching deadline. Since the reminder mail is sent by a Sidekiq background job, the work for this issue was to fix the link in the email.&lt;br /&gt;
&lt;br /&gt;
Create a new scenario with an approaching deadline. The reminder email was already being sent; only the missing link needed fixing.&lt;br /&gt;
&lt;br /&gt;
===Team Member Details===&lt;br /&gt;
&lt;br /&gt;
Team members :&lt;br /&gt;
* Adarsh Trivedi (atrived)&lt;br /&gt;
* Dheeraj Makam (drmakam)&lt;br /&gt;
* Prachi Sheoran (psheora)&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=127941</id>
		<title>CSC/ECE 517 Fall 2019 - E1940. Improving email notification</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=127941"/>
		<updated>2019-11-07T03:40:22Z</updated>

		<summary type="html">&lt;p&gt;Atrived: /* Issue 2 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=E1940 Improving e-mail notification=&lt;br /&gt;
&lt;br /&gt;
===Brief Introduction===&lt;br /&gt;
 &lt;br /&gt;
* E1940 Improving e-mail notification.&lt;br /&gt;
&lt;br /&gt;
* The forked git repository for this project can be found [https://github.com/dheerajmakam/expertiza.git]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Problem Statement===&lt;br /&gt;
The following tasks were accomplished in this project:&lt;br /&gt;
&lt;br /&gt;
*Issue 1: Send a new-account welcome email to the user when imported from CSV through the assignment page.&lt;br /&gt;
*Issue 2: Don't send email to reviewers for a new submission after the review deadline has passed.&lt;br /&gt;
*Issue 3: Add relevant links to reminder emails.&lt;br /&gt;
&lt;br /&gt;
====Issue 1 (717) - New user email====&lt;br /&gt;
&lt;br /&gt;
'''app/models/assignment_participant.rb'''&lt;br /&gt;
&lt;br /&gt;
* Call the method to send the welcome mail after the user is imported successfully.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    password = user.reset_password&lt;br /&gt;
    MailerHelper.send_mail_to_user(user, &amp;quot;Your Expertiza account has been created.&amp;quot;, &amp;quot;user_welcome&amp;quot;, password).deliver&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Issue 2 (345) - No submission email to reviewer after deadline====&lt;br /&gt;
*'''Before fixing this issue, we had to write the logic to send emails to reviewers on submissions.'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''app/controllers/submitted_content_controller.rb'''&lt;br /&gt;
*Added logic to the '''submit_hyperlink''' method to check the last review due date&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
      # get current date and time&lt;br /&gt;
      cur_date = Time.now&lt;br /&gt;
      # get the last due date for the review&lt;br /&gt;
      max_due_date = DueDate.where(parent_id: @participant.assignment.id,deadline_type_id: 2).maximum(&amp;quot;due_at&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
      # send mail only if the last due date has not passed&lt;br /&gt;
      if cur_date &amp;lt;= max_due_date&lt;br /&gt;
        email_submission_reviewers(@participant)&lt;br /&gt;
      end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Function to identify the reviewers and send mails.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def email_submission_reviewers(participant)&lt;br /&gt;
    if participant.reviewers.length != 0&lt;br /&gt;
      participant.reviewers.each do |reviewer|&lt;br /&gt;
        rev_res_map = ReviewResponseMap.where(['reviewer_id = ? and reviewee_id = ?', reviewer.id, participant.team.id]).first&lt;br /&gt;
        all_responses = Response.where(:map_id =&amp;gt; rev_res_map.id).order(&amp;quot;updated_at DESC&amp;quot;).first&lt;br /&gt;
&lt;br /&gt;
        user = User.find(reviewer.user_id)&lt;br /&gt;
&lt;br /&gt;
        if user.email_on_submission?&lt;br /&gt;
          MailerHelper.submission_mail_to_reviewr(user,&lt;br /&gt;
                                             &amp;quot;New submission available to review.&amp;quot;,&lt;br /&gt;
                                             &amp;quot;update&amp;quot;).deliver&lt;br /&gt;
        end&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/helpers/mailer_helper.rb'''&lt;br /&gt;
&lt;br /&gt;
*Helper function to mail reviewers&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def self.submission_mail_to_reviewr(user, subject, mail_partial)&lt;br /&gt;
    Mailer.notify_reviewer_for_new_submission ({&lt;br /&gt;
        to: user.email,&lt;br /&gt;
        subject: subject,&lt;br /&gt;
        body: {&lt;br /&gt;
            user: user,&lt;br /&gt;
            #first_name: ApplicationHelper.get_user_first_name(user),&lt;br /&gt;
            message: &amp;quot;&amp;quot;,&lt;br /&gt;
            partial_name: mail_partial&lt;br /&gt;
        }&lt;br /&gt;
    })&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/mailers/mailer.rb'''&lt;br /&gt;
&lt;br /&gt;
*Mailer function to send the mail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def notify_reviewer_for_new_submission(defn)&lt;br /&gt;
    @partial_name = defn[:body][:partial_name]&lt;br /&gt;
    @user = defn[:body][:user]&lt;br /&gt;
    #@first_name = defn[:body][:first_name]&lt;br /&gt;
    @message = defn[:body][:message]&lt;br /&gt;
    mail(subject: defn[:subject],&lt;br /&gt;
         to: defn[:to])&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/views/mailer/notify_reviewer_for_new_submission.erb'''&lt;br /&gt;
&lt;br /&gt;
*Email template for the mail&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;!DOCTYPE html&amp;gt;&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
&amp;lt;head&amp;gt;&lt;br /&gt;
  &amp;lt;meta content='text/html; charset=UTF-8' http-equiv='Content-Type' /&amp;gt;&lt;br /&gt;
&amp;lt;/head&amp;gt;&lt;br /&gt;
&amp;lt;body&amp;gt;&lt;br /&gt;
&amp;lt;%= render :partial =&amp;gt; 'mailer/partials/'+@partial_name+'_html' %&amp;gt;&lt;br /&gt;
&amp;lt;hr&amp;gt;&lt;br /&gt;
&amp;lt;/body&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Issue 3 (111) - Adding relevant links to reminder emails====&lt;br /&gt;
&lt;br /&gt;
*Modified the '''email_reminder''' function to add this functionality.&lt;br /&gt;
&lt;br /&gt;
'''app/mailers/mail_worker.rb'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
def email_reminder(emails, deadline_type)&lt;br /&gt;
    assignment = Assignment.find(self.assignment_id)&lt;br /&gt;
    subject = &amp;quot;Message regarding #{deadline_type} for assignment #{assignment.name}&amp;quot;&lt;br /&gt;
    body = &amp;quot;This is a reminder to complete #{deadline_type} for assignment #{assignment.name}. &amp;quot;&lt;br /&gt;
&lt;br /&gt;
    emails.each do |mail|&lt;br /&gt;
      user = User.where(email: mail).first&lt;br /&gt;
      # take the first matching participant record; `where` returns a relation, not a record&lt;br /&gt;
      participant_assignment_id = Participant.where(user_id: user.id, parent_id: self.assignment_id).first.id&lt;br /&gt;
&lt;br /&gt;
      link_to_destination = &amp;quot;Please follow the link: http://expertiza.ncsu.edu/student_task/view?id=#{participant_assignment_id}&amp;quot;&lt;br /&gt;
      # build the body per recipient so links do not accumulate across iterations&lt;br /&gt;
      user_body = body + link_to_destination + &amp;quot; Deadline is #{self.due_at}. If you have already done the #{deadline_type}, please ignore this mail.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
      @mail = Mailer.delayed_message(bcc: mail, subject: subject, body: user_body)&lt;br /&gt;
      @mail.deliver_now&lt;br /&gt;
      Rails.logger.info mail&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Test Plan===&lt;br /&gt;
&lt;br /&gt;
====Issue 1====&lt;br /&gt;
&lt;br /&gt;
Step 1: Navigate to Manage --&amp;gt; Assignment page.&lt;br /&gt;
&lt;br /&gt;
Step 2: Click on add participants for any of the assignments.&lt;br /&gt;
&lt;br /&gt;
Step 3: Click &amp;quot;Import course participants&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Step 4: Choose a csv file to be imported (follow the format given on the website).&lt;br /&gt;
&lt;br /&gt;
Step 5: The users mentioned in the csv file who do not already exist on Expertiza should get a new-user email.&lt;br /&gt;
&lt;br /&gt;
Step 6: To check whether the e-mail was received, log in with the following credentials: username [ 'expertiza.development@gmail.com' ], password [ 'qwer@1234' ].&lt;br /&gt;
&lt;br /&gt;
====Issue 2====&lt;br /&gt;
&lt;br /&gt;
Step 1: Create new assignment [Manage --&amp;gt; Assignment --&amp;gt; + Button (to create new assignment)]&lt;br /&gt;
&lt;br /&gt;
Step 2: Fill the details for the assignments.&lt;br /&gt;
&lt;br /&gt;
Step 3: Navigate to due dates.&lt;br /&gt;
&lt;br /&gt;
Step 4: Change the number of review rounds to 2.&lt;br /&gt;
&lt;br /&gt;
Step 5: Select &amp;quot;Yes&amp;quot; in the dropdown for review allowed for submission.&lt;br /&gt;
&lt;br /&gt;
Step 6: Add the user to the assignment.&lt;br /&gt;
&lt;br /&gt;
Step 7: Log in with some user credentials (author credential).&lt;br /&gt;
&lt;br /&gt;
Step 8: Make a new submission to this assignment.&lt;br /&gt;
&lt;br /&gt;
Step 9: Log in with another user (reviewer).&lt;br /&gt;
&lt;br /&gt;
Step 10: Submit a review to the assignment submission.&lt;br /&gt;
&lt;br /&gt;
Step 11: Login as an author again. &lt;br /&gt;
&lt;br /&gt;
Step 12: Edit the submission.&lt;br /&gt;
&lt;br /&gt;
Step 13: After this, check the mailbox of the reviewer [in development, mail goes to the development e-mail account].&lt;br /&gt;
&lt;br /&gt;
Step 14: Reviewer should get the mail to re-review the work.&lt;br /&gt;
&lt;br /&gt;
Step 15: Change the due date to some date and time which has passed.&lt;br /&gt;
&lt;br /&gt;
Step 16: Now making a new submission from the author account should not send a re-review mail to the reviewer. [Repeat steps 7-15].&lt;br /&gt;
&lt;br /&gt;
====Issue 3====&lt;br /&gt;
&lt;br /&gt;
This test requires an approaching-deadline scenario. Since the reminder mail is already sent by a Sidekiq background job, the scope of this issue was to fix the link included in the email.&lt;br /&gt;
&lt;br /&gt;
Create a scenario with an approaching deadline; the reminder email was already being sent, so verify that the link it contains now points to the participant's student task page.&lt;br /&gt;
&lt;br /&gt;
===Team Member Details===&lt;br /&gt;
&lt;br /&gt;
Team members :&lt;br /&gt;
* Adarsh Trivedi (atrived)&lt;br /&gt;
* Dheeraj Makam (drmakam)&lt;br /&gt;
* Prachi Sheoran (psheora)&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=127937</id>
		<title>CSC/ECE 517 Fall 2019 - E1940. Improving email notification</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=127937"/>
		<updated>2019-11-07T03:38:27Z</updated>

		<summary type="html">&lt;p&gt;Atrived: /* Issue 1 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=E1940 Improving e-mail notification=&lt;br /&gt;
&lt;br /&gt;
===Brief Introduction===&lt;br /&gt;
 &lt;br /&gt;
* E1940 Improving e-mail notification.&lt;br /&gt;
&lt;br /&gt;
* The forked git repository for this project can be found at [https://github.com/dheerajmakam/expertiza.git].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Problem Statement===&lt;br /&gt;
The following tasks were accomplished in this project:&lt;br /&gt;
&lt;br /&gt;
*Issue 1: Send a new-account welcome email to users when they are imported from a CSV through the assignment page.&lt;br /&gt;
*Issue 2: Do not email reviewers about a new submission after the review deadline has passed.&lt;br /&gt;
*Issue 3: Add relevant links to reminder emails.&lt;br /&gt;
&lt;br /&gt;
====Issue 1 (717) - New user email====&lt;br /&gt;
&lt;br /&gt;
'''app/models/assignment_participant.rb'''&lt;br /&gt;
&lt;br /&gt;
* Call method to send mail after user imported successfully.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    password = user.reset_password&lt;br /&gt;
    MailerHelper.send_mail_to_user(user, &amp;quot;Your Expertiza account has been created.&amp;quot;, &amp;quot;user_welcome&amp;quot;, password).deliver&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Issue 2 (345) - No submission email to reviewer after deadline====&lt;br /&gt;
*'''Before fixing this issue, we had to write the logic to send emails to reviewers on submissions.'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''app/controllers/submitted_content_controller.rb'''&lt;br /&gt;
*Added the logic to check for last review date to the function '''submit_hyperlink'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
      # get current date and time&lt;br /&gt;
      cur_date = Time.now&lt;br /&gt;
      # get the last due date for the review&lt;br /&gt;
      max_due_date = DueDate.where(parent_id: @participant.assignment.id,deadline_type_id: 2).maximum(&amp;quot;due_at&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
      # send mail only if the last due date has not passed&lt;br /&gt;
      if cur_date &amp;lt;= max_due_date&lt;br /&gt;
        email_submission_reviewers(@participant)&lt;br /&gt;
      end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
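The deadline gate above can be sketched in isolation. This is a minimal, hypothetical illustration in plain Ruby (the helper name and inputs are illustrative, not Expertiza models): reviewers are notified only while the latest review due date has not passed.

```ruby
require 'time'

# Hypothetical standalone sketch of the gate used in submit_hyperlink:
# mail is sent only while the latest review due date is still open.
# The helper name and its inputs are illustrative, not Expertiza API.
def should_email_reviewers?(now, review_due_dates)
  max_due_date = review_due_dates.max   # mirrors DueDate...maximum("due_at")
  !max_due_date.nil? && now <= max_due_date
end

round1 = Time.parse("2019-10-20 23:59:59 UTC")
round2 = Time.parse("2019-11-05 23:59:59 UTC")

should_email_reviewers?(Time.parse("2019-11-01 12:00:00 UTC"), [round1, round2]) # => true
should_email_reviewers?(Time.parse("2019-11-06 00:00:00 UTC"), [round1, round2]) # => false
```

The nil guard covers assignments with no review due dates, a case the controller code would otherwise hit as a comparison error.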
&lt;br /&gt;
*Function to identify the reviewers and send mails.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def email_submission_reviewers(participant)&lt;br /&gt;
    if participant.reviewers.length != 0&lt;br /&gt;
      participant.reviewers.each do |reviewer|&lt;br /&gt;
        rev_res_map = ReviewResponseMap.where(['reviewer_id = ? and reviewee_id = ?', reviewer.id, participant.team.id]).first&lt;br /&gt;
        all_responses = Response.where(:map_id =&amp;gt; rev_res_map.id).order(&amp;quot;updated_at DESC&amp;quot;).first&lt;br /&gt;
&lt;br /&gt;
        user = User.find(reviewer.user_id)&lt;br /&gt;
&lt;br /&gt;
        if user.email_on_submission?&lt;br /&gt;
          MailerHelper.submission_mail_to_reviewr(user,&lt;br /&gt;
                                             &amp;quot;New submission available to review.&amp;quot;,&lt;br /&gt;
                                             &amp;quot;update&amp;quot;).deliver&lt;br /&gt;
        end&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/helpers/mailer_helper.rb'''&lt;br /&gt;
&lt;br /&gt;
*Helper function to mail reviewers&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def self.submission_mail_to_reviewr(user, subject, mail_partial)&lt;br /&gt;
    Mailer.notify_reviewer_for_new_submission ({&lt;br /&gt;
        to: user.email,&lt;br /&gt;
        subject: subject,&lt;br /&gt;
        body: {&lt;br /&gt;
            user: user,&lt;br /&gt;
            #first_name: ApplicationHelper.get_user_first_name(user),&lt;br /&gt;
            message: &amp;quot;&amp;quot;,&lt;br /&gt;
            partial_name: mail_partial&lt;br /&gt;
        }&lt;br /&gt;
    })&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/mailers/mailer.rb'''&lt;br /&gt;
&lt;br /&gt;
*Mailer function to send the mail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def notify_reviewer_for_new_submission(defn)&lt;br /&gt;
    @partial_name = defn[:body][:partial_name]&lt;br /&gt;
    @user = defn[:body][:user]&lt;br /&gt;
    #@first_name = defn[:body][:first_name]&lt;br /&gt;
    @message = defn[:body][:message]&lt;br /&gt;
    mail(subject: defn[:subject],&lt;br /&gt;
         to: defn[:to])&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/views/mailer/notify_reviewer_for_new_submission.erb'''&lt;br /&gt;
&lt;br /&gt;
*Email template for the mail&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;!DOCTYPE html&amp;gt;&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
&amp;lt;head&amp;gt;&lt;br /&gt;
  &amp;lt;meta content='text/html; charset=UTF-8' http-equiv='Content-Type' /&amp;gt;&lt;br /&gt;
&amp;lt;/head&amp;gt;&lt;br /&gt;
&amp;lt;body&amp;gt;&lt;br /&gt;
&amp;lt;%= render :partial =&amp;gt; 'mailer/partials/'+@partial_name+'_html' %&amp;gt;&lt;br /&gt;
&amp;lt;hr&amp;gt;&lt;br /&gt;
&amp;lt;/body&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Issue 3 (111) - Adding relevant links to reminder emails====&lt;br /&gt;
&lt;br /&gt;
*Modified the function email_reminder to add this functionality.&lt;br /&gt;
&lt;br /&gt;
'''app/mailers/mail_worker.rb'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
def email_reminder(emails, deadline_type)&lt;br /&gt;
    assignment = Assignment.find(self.assignment_id)&lt;br /&gt;
    subject = &amp;quot;Message regarding #{deadline_type} for assignment #{assignment.name}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    emails.each do |mail|&lt;br /&gt;
      user = User.where(email: mail).first&lt;br /&gt;
      # where returns a relation, so take the first matching participant record&lt;br /&gt;
      participant_assignment_id = Participant.where(user_id: user.id, parent_id: self.assignment_id).first.id&lt;br /&gt;
&lt;br /&gt;
      link_to_destination = &amp;quot;Please follow the link: http://expertiza.ncsu.edu/student_task/view?id=#{participant_assignment_id}&amp;quot;&lt;br /&gt;
      # build the body per recipient so each user receives only their own link&lt;br /&gt;
      body = &amp;quot;This is a reminder to complete #{deadline_type} for assignment #{assignment.name}. &amp;quot; +&lt;br /&gt;
             link_to_destination +&lt;br /&gt;
             &amp;quot; Deadline is #{self.due_at}. If you have already done the #{deadline_type}, please ignore this mail.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
      @mail = Mailer.delayed_message(bcc: mail, subject: subject, body: body)&lt;br /&gt;
      @mail.deliver_now&lt;br /&gt;
      Rails.logger.info mail&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
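The per-recipient body construction can be illustrated without Rails. A minimal sketch, assuming a hypothetical build_reminder_body helper (the assignment name, participant id, and date below are example values, not Expertiza data):

```ruby
# Hypothetical helper mirroring the body built inside email_reminder;
# the method name and arguments are illustrative, not Expertiza code.
def build_reminder_body(deadline_type, assignment_name, participant_id, due_at)
  # each recipient gets a link to their own student task page
  link = "http://expertiza.ncsu.edu/student_task/view?id=#{participant_id}"
  "This is a reminder to complete #{deadline_type} for assignment #{assignment_name}. " \
    "Please follow the link: #{link} " \
    "Deadline is #{due_at}. If you have already done the #{deadline_type}, please ignore this mail."
end

body = build_reminder_body("review", "OSS Project", 42, "2019-11-05 23:59:59")
body.include?("student_task/view?id=42")  # => true
```

Building the body inside the loop, as the fix does, keeps one user's link from leaking into the next user's mail.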
&lt;br /&gt;
===Test Plan===&lt;br /&gt;
&lt;br /&gt;
====Issue 1====&lt;br /&gt;
&lt;br /&gt;
Step 1: Navigate to Manage --&amp;gt; Assignment page.&lt;br /&gt;
&lt;br /&gt;
Step 2: Click on add participants for any of the assignments.&lt;br /&gt;
&lt;br /&gt;
Step 3: Click &amp;quot;Import course participants&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Step 4: Choose a csv file to be imported (follow the format given on the website).&lt;br /&gt;
&lt;br /&gt;
Step 5: The users mentioned in the csv file who do not already exist on Expertiza should get a new-user email.&lt;br /&gt;
&lt;br /&gt;
Step 6: To check whether the e-mail was received, log in with the following credentials: username [ 'expertiza.development@gmail.com' ], password [ 'qwer@1234' ].&lt;br /&gt;
&lt;br /&gt;
====Issue 2====&lt;br /&gt;
&lt;br /&gt;
Step 1: Create new assignment [Manage/Assignment/+ Button (to create new assignment)]&lt;br /&gt;
&lt;br /&gt;
Step 2: Fill the details for the assignments.&lt;br /&gt;
&lt;br /&gt;
Step 3: Navigate to due dates.&lt;br /&gt;
&lt;br /&gt;
Step 4: Change the number of review rounds to 2.&lt;br /&gt;
&lt;br /&gt;
Step 5: Select &amp;quot;Yes&amp;quot; in the dropdown for review allowed for submission.&lt;br /&gt;
&lt;br /&gt;
Step 6: Add the user to the assignment.&lt;br /&gt;
&lt;br /&gt;
Step 7: Log in with some user credentials (author credential).&lt;br /&gt;
&lt;br /&gt;
Step 8: Make a new submission to this assignment.&lt;br /&gt;
&lt;br /&gt;
Step 9: Log in with another user (reviewer).&lt;br /&gt;
&lt;br /&gt;
Step 10: Submit a review to the assignment submission.&lt;br /&gt;
&lt;br /&gt;
Step 11: Login as an author again. &lt;br /&gt;
&lt;br /&gt;
Step 12: Edit the submission.&lt;br /&gt;
&lt;br /&gt;
Step 13: After this, check the mailbox of the reviewer [in development, mail goes to the development e-mail account].&lt;br /&gt;
&lt;br /&gt;
Step 14: Reviewer should get the mail to re-review the work.&lt;br /&gt;
&lt;br /&gt;
Step 15: Change the due date to some date and time which has passed.&lt;br /&gt;
&lt;br /&gt;
Step 16: Now making a new submission from the author account should not send a re-review mail to the reviewer. [Repeat steps 7-15]. &lt;br /&gt;
&lt;br /&gt;
====Issue 3====&lt;br /&gt;
&lt;br /&gt;
This test requires an approaching-deadline scenario. Since the reminder mail is already sent by a Sidekiq background job, the scope of this issue was to fix the link included in the email.&lt;br /&gt;
&lt;br /&gt;
Create a scenario with an approaching deadline; the reminder email was already being sent, so verify that the link it contains now points to the participant's student task page.&lt;br /&gt;
&lt;br /&gt;
===Team Member Details===&lt;br /&gt;
&lt;br /&gt;
Team members :&lt;br /&gt;
* Adarsh Trivedi (atrived)&lt;br /&gt;
* Dheeraj Makam (drmakam)&lt;br /&gt;
* Prachi Sheoran (psheora)&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=127930</id>
		<title>CSC/ECE 517 Fall 2019 - E1940. Improving email notification</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=127930"/>
		<updated>2019-11-07T03:33:58Z</updated>

		<summary type="html">&lt;p&gt;Atrived: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=E1940 Improving e-mail notification=&lt;br /&gt;
&lt;br /&gt;
===Brief Introduction===&lt;br /&gt;
 &lt;br /&gt;
* E1940 Improving e-mail notification.&lt;br /&gt;
&lt;br /&gt;
* The forked git repository for this project can be found at [https://github.com/dheerajmakam/expertiza.git].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Problem Statement===&lt;br /&gt;
The following tasks were accomplished in this project:&lt;br /&gt;
&lt;br /&gt;
*Issue 1: Send a new-account welcome email to users when they are imported from a CSV through the assignment page.&lt;br /&gt;
*Issue 2: Do not email reviewers about a new submission after the review deadline has passed.&lt;br /&gt;
*Issue 3: Add relevant links to reminder emails.&lt;br /&gt;
&lt;br /&gt;
====Issue 1 (717) - New user email====&lt;br /&gt;
&lt;br /&gt;
'''app/models/assignment_participant.rb'''&lt;br /&gt;
&lt;br /&gt;
* Call method to send mail after user imported successfully.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    password = user.reset_password&lt;br /&gt;
    MailerHelper.send_mail_to_user(user, &amp;quot;Your Expertiza account has been created.&amp;quot;, &amp;quot;user_welcome&amp;quot;, password).deliver&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Issue 2 (345) - No submission email to reviewer after deadline====&lt;br /&gt;
*'''Before fixing this issue, we had to write the logic to send emails to reviewers on submissions.'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''app/controllers/submitted_content_controller.rb'''&lt;br /&gt;
*Added the logic to check for last review date to the function '''submit_hyperlink'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
      # get current date and time&lt;br /&gt;
      cur_date = Time.now&lt;br /&gt;
      # get the last due date for the review&lt;br /&gt;
      max_due_date = DueDate.where(parent_id: @participant.assignment.id,deadline_type_id: 2).maximum(&amp;quot;due_at&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
      # send mail only if the last due date has not passed&lt;br /&gt;
      if cur_date &amp;lt;= max_due_date&lt;br /&gt;
        email_submission_reviewers(@participant)&lt;br /&gt;
      end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Function to identify the reviewers and send mails.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def email_submission_reviewers(participant)&lt;br /&gt;
    if participant.reviewers.length != 0&lt;br /&gt;
      participant.reviewers.each do |reviewer|&lt;br /&gt;
        rev_res_map = ReviewResponseMap.where(['reviewer_id = ? and reviewee_id = ?', reviewer.id, participant.team.id]).first&lt;br /&gt;
        all_responses = Response.where(:map_id =&amp;gt; rev_res_map.id).order(&amp;quot;updated_at DESC&amp;quot;).first&lt;br /&gt;
&lt;br /&gt;
        user = User.find(reviewer.user_id)&lt;br /&gt;
&lt;br /&gt;
        if user.email_on_submission?&lt;br /&gt;
          MailerHelper.submission_mail_to_reviewr(user,&lt;br /&gt;
                                             &amp;quot;New submission available to review.&amp;quot;,&lt;br /&gt;
                                             &amp;quot;update&amp;quot;).deliver&lt;br /&gt;
        end&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/helpers/mailer_helper.rb'''&lt;br /&gt;
&lt;br /&gt;
*Helper function to mail reviewers&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def self.submission_mail_to_reviewr(user, subject, mail_partial)&lt;br /&gt;
    Mailer.notify_reviewer_for_new_submission ({&lt;br /&gt;
        to: user.email,&lt;br /&gt;
        subject: subject,&lt;br /&gt;
        body: {&lt;br /&gt;
            user: user,&lt;br /&gt;
            #first_name: ApplicationHelper.get_user_first_name(user),&lt;br /&gt;
            message: &amp;quot;&amp;quot;,&lt;br /&gt;
            partial_name: mail_partial&lt;br /&gt;
        }&lt;br /&gt;
    })&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/mailers/mailer.rb'''&lt;br /&gt;
&lt;br /&gt;
*Mailer function to send the mail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def notify_reviewer_for_new_submission(defn)&lt;br /&gt;
    @partial_name = defn[:body][:partial_name]&lt;br /&gt;
    @user = defn[:body][:user]&lt;br /&gt;
    #@first_name = defn[:body][:first_name]&lt;br /&gt;
    @message = defn[:body][:message]&lt;br /&gt;
    mail(subject: defn[:subject],&lt;br /&gt;
         to: defn[:to])&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/views/mailer/notify_reviewer_for_new_submission.erb'''&lt;br /&gt;
&lt;br /&gt;
*Email template for the mail&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;!DOCTYPE html&amp;gt;&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
&amp;lt;head&amp;gt;&lt;br /&gt;
  &amp;lt;meta content='text/html; charset=UTF-8' http-equiv='Content-Type' /&amp;gt;&lt;br /&gt;
&amp;lt;/head&amp;gt;&lt;br /&gt;
&amp;lt;body&amp;gt;&lt;br /&gt;
&amp;lt;%= render :partial =&amp;gt; 'mailer/partials/'+@partial_name+'_html' %&amp;gt;&lt;br /&gt;
&amp;lt;hr&amp;gt;&lt;br /&gt;
&amp;lt;/body&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Issue 3 (111) - Adding relevant links to reminder emails====&lt;br /&gt;
&lt;br /&gt;
*Modified the function email_reminder to add this functionality.&lt;br /&gt;
&lt;br /&gt;
'''app/mailers/mail_worker.rb'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
def email_reminder(emails, deadline_type)&lt;br /&gt;
    assignment = Assignment.find(self.assignment_id)&lt;br /&gt;
    subject = &amp;quot;Message regarding #{deadline_type} for assignment #{assignment.name}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    emails.each do |mail|&lt;br /&gt;
      user = User.where(email: mail).first&lt;br /&gt;
      # where returns a relation, so take the first matching participant record&lt;br /&gt;
      participant_assignment_id = Participant.where(user_id: user.id, parent_id: self.assignment_id).first.id&lt;br /&gt;
&lt;br /&gt;
      link_to_destination = &amp;quot;Please follow the link: http://expertiza.ncsu.edu/student_task/view?id=#{participant_assignment_id}&amp;quot;&lt;br /&gt;
      # build the body per recipient so each user receives only their own link&lt;br /&gt;
      body = &amp;quot;This is a reminder to complete #{deadline_type} for assignment #{assignment.name}. &amp;quot; +&lt;br /&gt;
             link_to_destination +&lt;br /&gt;
             &amp;quot; Deadline is #{self.due_at}. If you have already done the #{deadline_type}, please ignore this mail.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
      @mail = Mailer.delayed_message(bcc: mail, subject: subject, body: body)&lt;br /&gt;
      @mail.deliver_now&lt;br /&gt;
      Rails.logger.info mail&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Test Plan===&lt;br /&gt;
&lt;br /&gt;
====Issue 1====&lt;br /&gt;
&lt;br /&gt;
Step 1: Navigate to Manage --&amp;gt; Assignment page.&lt;br /&gt;
&lt;br /&gt;
Step 2: Click on add participants for any of the assignments.&lt;br /&gt;
&lt;br /&gt;
Step 3: Click &amp;quot;Import course participants&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Step 4: Choose a csv file to be imported (follow the format given on the website).&lt;br /&gt;
&lt;br /&gt;
Step 5: The users mentioned in the csv file who do not already exist on Expertiza should get a new-user email.&lt;br /&gt;
&lt;br /&gt;
Step 6: To check whether the e-mail was received, log in with the following credentials: username [ 'expertiza.development@gmail.com' ], password [ 'qwer@1234' ].&lt;br /&gt;
&lt;br /&gt;
====Issue 2====&lt;br /&gt;
&lt;br /&gt;
Step 1: Create new assignment [Manage/Assignment/+ Button (to create new assignment)]&lt;br /&gt;
&lt;br /&gt;
Step 2: Fill the details for the assignments.&lt;br /&gt;
&lt;br /&gt;
Step 3: Navigate to due dates.&lt;br /&gt;
&lt;br /&gt;
Step 4: Change the number of review rounds to 2.&lt;br /&gt;
&lt;br /&gt;
Step 5: Select &amp;quot;Yes&amp;quot; in the dropdown for review allowed for submission.&lt;br /&gt;
&lt;br /&gt;
Step 6: Add the user to the assignment.&lt;br /&gt;
&lt;br /&gt;
Step 7: Log in with some user credentials (author credential).&lt;br /&gt;
&lt;br /&gt;
Step 8: Make a new submission to this assignment.&lt;br /&gt;
&lt;br /&gt;
Step 9: Log in with another user (reviewer).&lt;br /&gt;
&lt;br /&gt;
Step 10: Submit a review to the assignment submission.&lt;br /&gt;
&lt;br /&gt;
Step 11: Login as an author again. &lt;br /&gt;
&lt;br /&gt;
Step 12: Edit the submission.&lt;br /&gt;
&lt;br /&gt;
Step 13: After this, check the mailbox of the reviewer [in development, mail goes to the development e-mail account].&lt;br /&gt;
&lt;br /&gt;
Step 14: Reviewer should get the mail to re-review the work.&lt;br /&gt;
&lt;br /&gt;
Step 15: Change the due date to some date and time which has passed.&lt;br /&gt;
&lt;br /&gt;
Step 16: Now making a new submission from the author account should not send a re-review mail to the reviewer. [Repeat steps 7-15]. &lt;br /&gt;
&lt;br /&gt;
====Issue 3====&lt;br /&gt;
&lt;br /&gt;
This test requires an approaching-deadline scenario. Since the reminder mail is already sent by a Sidekiq background job, the scope of this issue was to fix the link included in the email.&lt;br /&gt;
&lt;br /&gt;
Create a scenario with an approaching deadline; the reminder email was already being sent, so verify that the link it contains now points to the participant's student task page.&lt;br /&gt;
&lt;br /&gt;
===Team Member Details===&lt;br /&gt;
&lt;br /&gt;
Team members :&lt;br /&gt;
* Adarsh Trivedi (atrived)&lt;br /&gt;
* Dheeraj Makam (drmakam)&lt;br /&gt;
* Prachi Sheoran (psheora)&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=126576</id>
		<title>CSC/ECE 517 Fall 2019 - E1940. Improving email notification</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=126576"/>
		<updated>2019-10-29T03:58:34Z</updated>

		<summary type="html">&lt;p&gt;Atrived: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=E1940 Improving e-mail notification=&lt;br /&gt;
&lt;br /&gt;
===Brief Introduction===&lt;br /&gt;
 &lt;br /&gt;
* E1940 Improving e-mail notification.&lt;br /&gt;
&lt;br /&gt;
* The forked git repository for this project can be found at [https://github.com/dheerajmakam/expertiza.git].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Problem Statement===&lt;br /&gt;
The following tasks were accomplished in this project:&lt;br /&gt;
&lt;br /&gt;
*Issue 1: Send a new-account welcome email to users when they are imported from a CSV through the assignment page.&lt;br /&gt;
*Issue 2: Do not email reviewers about a new submission after the review deadline has passed.&lt;br /&gt;
*Issue 3: Add relevant links to reminder emails.&lt;br /&gt;
&lt;br /&gt;
====Issue 1 - New user email====&lt;br /&gt;
&lt;br /&gt;
'''app/models/assignment_participant.rb'''&lt;br /&gt;
&lt;br /&gt;
* Call method to send mail after user imported successfully.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    password = user.reset_password&lt;br /&gt;
    MailerHelper.send_mail_to_user(user, &amp;quot;Your Expertiza account has been created.&amp;quot;, &amp;quot;user_welcome&amp;quot;, password).deliver&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Issue 2 - No submission email to reviewer after deadline====&lt;br /&gt;
*'''Before fixing this issue, we had to write the logic to send emails to reviewers on submissions.'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''app/controllers/submitted_content_controller.rb'''&lt;br /&gt;
*Added the logic to check for last review date to the function '''submit_hyperlink'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
      # get current date and time&lt;br /&gt;
      cur_date = Time.now&lt;br /&gt;
      # get the last due date for the review&lt;br /&gt;
      max_due_date = DueDate.where(parent_id: @participant.assignment.id,deadline_type_id: 2).maximum(&amp;quot;due_at&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
      # send mail only if the last due date has not passed&lt;br /&gt;
      if cur_date &amp;lt;= max_due_date&lt;br /&gt;
        email_submission_reviewers(@participant)&lt;br /&gt;
      end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Function to identify the reviewers and send mails.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def email_submission_reviewers(participant)&lt;br /&gt;
    if participant.reviewers.length != 0&lt;br /&gt;
      participant.reviewers.each do |reviewer|&lt;br /&gt;
        rev_res_map = ReviewResponseMap.where(['reviewer_id = ? and reviewee_id = ?', reviewer.id, participant.team.id]).first&lt;br /&gt;
        all_responses = Response.where(:map_id =&amp;gt; rev_res_map.id).order(&amp;quot;updated_at DESC&amp;quot;).first&lt;br /&gt;
&lt;br /&gt;
        user = User.find(reviewer.user_id)&lt;br /&gt;
&lt;br /&gt;
        if user.email_on_submission?&lt;br /&gt;
          MailerHelper.submission_mail_to_reviewr(user,&lt;br /&gt;
                                             &amp;quot;New submission available to review.&amp;quot;,&lt;br /&gt;
                                             &amp;quot;update&amp;quot;).deliver&lt;br /&gt;
        end&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/helpers/mailer_helper.rb'''&lt;br /&gt;
&lt;br /&gt;
*Helper function to mail reviewers&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def self.submission_mail_to_reviewr(user, subject, mail_partial)&lt;br /&gt;
    Mailer.notify_reviewer_for_new_submission ({&lt;br /&gt;
        to: user.email,&lt;br /&gt;
        subject: subject,&lt;br /&gt;
        body: {&lt;br /&gt;
            user: user,&lt;br /&gt;
            #first_name: ApplicationHelper.get_user_first_name(user),&lt;br /&gt;
            message: &amp;quot;&amp;quot;,&lt;br /&gt;
            partial_name: mail_partial&lt;br /&gt;
        }&lt;br /&gt;
    })&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/mailers/mailer.rb'''&lt;br /&gt;
&lt;br /&gt;
*Mailer function to send the mail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def notify_reviewer_for_new_submission(defn)&lt;br /&gt;
    @partial_name = defn[:body][:partial_name]&lt;br /&gt;
    @user = defn[:body][:user]&lt;br /&gt;
    #@first_name = defn[:body][:first_name]&lt;br /&gt;
    @message = defn[:body][:message]&lt;br /&gt;
    mail(subject: defn[:subject],&lt;br /&gt;
         to: defn[:to])&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/views/mailer/notify_reviewer_for_new_submission.erb'''&lt;br /&gt;
&lt;br /&gt;
*Email template for the mail&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;!DOCTYPE html&amp;gt;&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
&amp;lt;head&amp;gt;&lt;br /&gt;
  &amp;lt;meta content='text/html; charset=UTF-8' http-equiv='Content-Type' /&amp;gt;&lt;br /&gt;
&amp;lt;/head&amp;gt;&lt;br /&gt;
&amp;lt;body&amp;gt;&lt;br /&gt;
&amp;lt;%= render :partial =&amp;gt; 'mailer/partials/'+@partial_name+'_html' %&amp;gt;&lt;br /&gt;
&amp;lt;hr&amp;gt;&lt;br /&gt;
&amp;lt;/body&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Issue 3 - Adding relevant links to reminder emails====&lt;br /&gt;
&lt;br /&gt;
*Modified the function email_reminder to add this functionality.&lt;br /&gt;
&lt;br /&gt;
'''app/mailers/mail_worker.rb'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
def email_reminder(emails, deadline_type)&lt;br /&gt;
    assignment = Assignment.find(self.assignment_id)&lt;br /&gt;
    subject = &amp;quot;Message regarding #{deadline_type} for assignment #{assignment.name}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    emails.each do |mail|&lt;br /&gt;
      user = User.where(email: mail).first&lt;br /&gt;
      # where returns a relation, so take the first matching participant record&lt;br /&gt;
      participant_assignment_id = Participant.where(user_id: user.id, parent_id: self.assignment_id).first.id&lt;br /&gt;
&lt;br /&gt;
      link_to_destination = &amp;quot;Please follow the link: http://expertiza.ncsu.edu/student_task/view?id=#{participant_assignment_id}&amp;quot;&lt;br /&gt;
      # build the body per recipient so each user receives only their own link&lt;br /&gt;
      body = &amp;quot;This is a reminder to complete #{deadline_type} for assignment #{assignment.name}. &amp;quot; +&lt;br /&gt;
             link_to_destination +&lt;br /&gt;
             &amp;quot; Deadline is #{self.due_at}. If you have already done the #{deadline_type}, please ignore this mail.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
      @mail = Mailer.delayed_message(bcc: mail, subject: subject, body: body)&lt;br /&gt;
      @mail.deliver_now&lt;br /&gt;
      Rails.logger.info mail&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Team Member Details===&lt;br /&gt;
&lt;br /&gt;
Team members :&lt;br /&gt;
* Adarsh Trivedi (atrived)&lt;br /&gt;
* Dheeraj Makam (drmakam)&lt;br /&gt;
* Prachi Sheoran (psheora)&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=126430</id>
		<title>CSC/ECE 517 Fall 2019 - E1940. Improving email notification</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=126430"/>
		<updated>2019-10-29T02:59:15Z</updated>

		<summary type="html">&lt;p&gt;Atrived: /* Team Member Details */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=E1940 Improving e-mail notification=&lt;br /&gt;
&lt;br /&gt;
===Brief Introduction===&lt;br /&gt;
 &lt;br /&gt;
* E1940 Improving e-mail notification.&lt;br /&gt;
&lt;br /&gt;
* The forked Git repository for this project can be found here: [https://github.com/dheerajmakam/expertiza.git]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Problem Statement===&lt;br /&gt;
The following tasks were accomplished in this project:&lt;br /&gt;
&lt;br /&gt;
*Issue 1: Send a new-account welcome email to users when they are imported from a CSV file through the assignment page.&lt;br /&gt;
*Issue 2: Do not send email to reviewers about a new submission after the review deadline has passed.&lt;br /&gt;
*Issue 3: Add relevant links to reminder emails.&lt;br /&gt;
&lt;br /&gt;
====Issue 1 - New user email====&lt;br /&gt;
&lt;br /&gt;
'''app/models/assignment_participant.rb'''&lt;br /&gt;
&lt;br /&gt;
* Call the method that sends the welcome mail after the user is imported successfully.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    password = user.reset_password&lt;br /&gt;
    MailerHelper.send_mail_to_user(user, &amp;quot;Your Expertiza account has been created.&amp;quot;, &amp;quot;user_welcome&amp;quot;, password).deliver&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
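&lt;br /&gt;
The call above resets the user's password and mails it with a welcome message. A stand-alone sketch of the inputs involved (SecureRandom stands in for user.reset_password, and the helper name is hypothetical):&lt;br /&gt;

```ruby
require 'securerandom'

# Hypothetical stand-in for the welcome-mail flow shown above:
# generate a temporary password and assemble the mail parameters.
def welcome_mail_params(email)
  password = SecureRandom.urlsafe_base64(9) # stands in for user.reset_password
  {
    to: email,
    subject: "Your Expertiza account has been created.",
    partial: "user_welcome",
    password: password
  }
end
```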
&lt;br /&gt;
====Issue 2 - No submission email to reviewer after deadline====&lt;br /&gt;
*'''Before fixing this issue, we first had to write the logic that sends emails to reviewers on submissions.'''&lt;br /&gt;
&lt;br /&gt;
'''app/controllers/submitted_content_controller.rb'''&lt;br /&gt;
*Added logic to the '''submit_hyperlink''' function to check whether the last review due date has passed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
      # get the current date and time&lt;br /&gt;
      cur_date = Time.now&lt;br /&gt;
      # get the last due date for the review&lt;br /&gt;
      max_due_date = DueDate.where(parent_id: @participant.assignment.id, deadline_type_id: 2).maximum(&amp;quot;due_at&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
      # send mail only if a review deadline exists and it has not passed&lt;br /&gt;
      if max_due_date &amp;amp;&amp;amp; cur_date &amp;lt;= max_due_date&lt;br /&gt;
        email_submission_reviewers(@participant)&lt;br /&gt;
      end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Function to identify the reviewers and send mails.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def email_submission_reviewers(participant)&lt;br /&gt;
    return if participant.reviewers.empty?&lt;br /&gt;
&lt;br /&gt;
    participant.reviewers.each do |reviewer|&lt;br /&gt;
      user = User.find(reviewer.user_id)&lt;br /&gt;
&lt;br /&gt;
      # send only to reviewers who opted in to submission notifications&lt;br /&gt;
      if user.email_on_submission?&lt;br /&gt;
        MailerHelper.submission_mail_to_reviewr(user,&lt;br /&gt;
                                                &amp;quot;New submission available to review.&amp;quot;,&lt;br /&gt;
                                                &amp;quot;update&amp;quot;).deliver&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
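&lt;br /&gt;
The opt-in check above can be illustrated with plain objects. A minimal sketch (Reviewer here is a stand-in Struct, not the application's model):&lt;br /&gt;

```ruby
# Stand-in for the application's reviewer/user objects; illustration only.
Reviewer = Struct.new(:email, :email_on_submission) do
  def email_on_submission?
    email_on_submission
  end
end

# Keep only the addresses of reviewers who opted in to submission mails.
def opted_in_emails(reviewers)
  reviewers.select { |r| r.email_on_submission? }.map { |r| r.email }
end
```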
&lt;br /&gt;
'''app/helpers/mailer_helper.rb'''&lt;br /&gt;
&lt;br /&gt;
*Helper function to mail reviewers&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def self.submission_mail_to_reviewr(user, subject, mail_partial)&lt;br /&gt;
    Mailer.notify_reviewer_for_new_submission({&lt;br /&gt;
        to: user.email,&lt;br /&gt;
        subject: subject,&lt;br /&gt;
        body: {&lt;br /&gt;
            user: user,&lt;br /&gt;
            message: &amp;quot;&amp;quot;,&lt;br /&gt;
            partial_name: mail_partial&lt;br /&gt;
        }&lt;br /&gt;
    })&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/mailers/mailer.rb'''&lt;br /&gt;
&lt;br /&gt;
*Mailer function to send the mail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def notify_reviewer_for_new_submission(defn)&lt;br /&gt;
    @partial_name = defn[:body][:partial_name]&lt;br /&gt;
    @user = defn[:body][:user]&lt;br /&gt;
    @message = defn[:body][:message]&lt;br /&gt;
    mail(subject: defn[:subject],&lt;br /&gt;
         to: defn[:to])&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/views/mailer/notify_reviewer_for_new_submission.erb'''&lt;br /&gt;
&lt;br /&gt;
*Email template for the mail&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;!DOCTYPE html&amp;gt;&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
&amp;lt;head&amp;gt;&lt;br /&gt;
  &amp;lt;meta content='text/html; charset=UTF-8' http-equiv='Content-Type' /&amp;gt;&lt;br /&gt;
&amp;lt;/head&amp;gt;&lt;br /&gt;
&amp;lt;body&amp;gt;&lt;br /&gt;
&amp;lt;%= render :partial =&amp;gt; 'mailer/partials/'+@partial_name+'_html' %&amp;gt;&lt;br /&gt;
&amp;lt;hr&amp;gt;&lt;br /&gt;
&amp;lt;/body&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Issue 3 - Adding relevant links to reminder emails====&lt;br /&gt;
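&lt;br /&gt;
The reminder mails now carry a direct link to the recipient's own student-task view. A minimal sketch of what such a link looks like (the helper name is illustrative; the route shape matches the student_task view URL used elsewhere in Expertiza):&lt;br /&gt;

```ruby
# Illustrative helper: compose the student-task URL that a reminder
# mail links to for a given participant.
def student_task_link(base_url, participant_id)
  "#{base_url}/student_task/view?id=#{participant_id}"
end
```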
&lt;br /&gt;
===Team Member Details===&lt;br /&gt;
&lt;br /&gt;
Team members:&lt;br /&gt;
* Adarsh Trivedi (atrived)&lt;br /&gt;
* Dheeraj Makam (drmakam)&lt;br /&gt;
* Prachi Sheoran (psheora)&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=126219</id>
		<title>CSC/ECE 517 Fall 2019 - E1940. Improving email notification</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=126219"/>
		<updated>2019-10-29T00:35:21Z</updated>

		<summary type="html">&lt;p&gt;Atrived: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=E1940 Improving e-mail notification=&lt;br /&gt;
&lt;br /&gt;
===Brief Introduction===&lt;br /&gt;
 &lt;br /&gt;
* E1940 Improving e-mail notification.&lt;br /&gt;
&lt;br /&gt;
* The forked Git repository for this project can be found here: [https://github.com/dheerajmakam/expertiza.git]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Problem Statement===&lt;br /&gt;
The following tasks were accomplished in this project:&lt;br /&gt;
&lt;br /&gt;
*Issue 1: Send a new-account welcome email to users when they are imported from a CSV file through the assignment page.&lt;br /&gt;
*Issue 2: Do not send email to reviewers about a new submission after the review deadline has passed.&lt;br /&gt;
*Issue 3: Add relevant links to reminder emails.&lt;br /&gt;
&lt;br /&gt;
====Issue 1 - New user email====&lt;br /&gt;
&lt;br /&gt;
'''app/models/assignment_participant.rb'''&lt;br /&gt;
&lt;br /&gt;
* Call the method that sends the welcome mail after the user is imported successfully.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    password = user.reset_password&lt;br /&gt;
    MailerHelper.send_mail_to_user(user, &amp;quot;Your Expertiza account has been created.&amp;quot;, &amp;quot;user_welcome&amp;quot;, password).deliver&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Issue 2 - No submission email to reviewer after deadline====&lt;br /&gt;
*'''Before fixing this issue, we first had to write the logic that sends emails to reviewers on submissions.'''&lt;br /&gt;
&lt;br /&gt;
'''app/controllers/submitted_content_controller.rb'''&lt;br /&gt;
*Added logic to the '''submit_hyperlink''' function to check whether the last review due date has passed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
      # get the current date and time&lt;br /&gt;
      cur_date = Time.now&lt;br /&gt;
      # get the last due date for the review&lt;br /&gt;
      max_due_date = DueDate.where(parent_id: @participant.assignment.id, deadline_type_id: 2).maximum(&amp;quot;due_at&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
      # send mail only if a review deadline exists and it has not passed&lt;br /&gt;
      if max_due_date &amp;amp;&amp;amp; cur_date &amp;lt;= max_due_date&lt;br /&gt;
        email_submission_reviewers(@participant)&lt;br /&gt;
      end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Function to identify the reviewers and send mails.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def email_submission_reviewers(participant)&lt;br /&gt;
    return if participant.reviewers.empty?&lt;br /&gt;
&lt;br /&gt;
    participant.reviewers.each do |reviewer|&lt;br /&gt;
      user = User.find(reviewer.user_id)&lt;br /&gt;
&lt;br /&gt;
      # send only to reviewers who opted in to submission notifications&lt;br /&gt;
      if user.email_on_submission?&lt;br /&gt;
        MailerHelper.submission_mail_to_reviewr(user,&lt;br /&gt;
                                                &amp;quot;New submission available to review.&amp;quot;,&lt;br /&gt;
                                                &amp;quot;update&amp;quot;).deliver&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/helpers/mailer_helper.rb'''&lt;br /&gt;
&lt;br /&gt;
*Helper function to mail reviewers&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def self.submission_mail_to_reviewr(user, subject, mail_partial)&lt;br /&gt;
    Mailer.notify_reviewer_for_new_submission({&lt;br /&gt;
        to: user.email,&lt;br /&gt;
        subject: subject,&lt;br /&gt;
        body: {&lt;br /&gt;
            user: user,&lt;br /&gt;
            message: &amp;quot;&amp;quot;,&lt;br /&gt;
            partial_name: mail_partial&lt;br /&gt;
        }&lt;br /&gt;
    })&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/mailers/mailer.rb'''&lt;br /&gt;
&lt;br /&gt;
*Mailer function to send the mail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def notify_reviewer_for_new_submission(defn)&lt;br /&gt;
    @partial_name = defn[:body][:partial_name]&lt;br /&gt;
    @user = defn[:body][:user]&lt;br /&gt;
    @message = defn[:body][:message]&lt;br /&gt;
    mail(subject: defn[:subject],&lt;br /&gt;
         to: defn[:to])&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/views/mailer/notify_reviewer_for_new_submission.erb'''&lt;br /&gt;
&lt;br /&gt;
*Email template for the mail&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;!DOCTYPE html&amp;gt;&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
&amp;lt;head&amp;gt;&lt;br /&gt;
  &amp;lt;meta content='text/html; charset=UTF-8' http-equiv='Content-Type' /&amp;gt;&lt;br /&gt;
&amp;lt;/head&amp;gt;&lt;br /&gt;
&amp;lt;body&amp;gt;&lt;br /&gt;
&amp;lt;%= render :partial =&amp;gt; 'mailer/partials/'+@partial_name+'_html' %&amp;gt;&lt;br /&gt;
&amp;lt;hr&amp;gt;&lt;br /&gt;
&amp;lt;/body&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Issue 3 - Adding relevant links to reminder emails====&lt;br /&gt;
&lt;br /&gt;
===Team Member Details===&lt;br /&gt;
&lt;br /&gt;
Team members:&lt;br /&gt;
* Adarsh Trivedi (atrived)&lt;br /&gt;
* Dheeraj&lt;br /&gt;
* Prachi&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=126090</id>
		<title>CSC/ECE 517 Fall 2019 - E1940. Improving email notification</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=126090"/>
		<updated>2019-10-28T23:09:39Z</updated>

		<summary type="html">&lt;p&gt;Atrived: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=E1940 Improving e-mail notification=&lt;br /&gt;
&lt;br /&gt;
Team members:&lt;br /&gt;
* Adarsh Trivedi (atrived)&lt;br /&gt;
* Dheeraj&lt;br /&gt;
* Prachi&lt;br /&gt;
&lt;br /&gt;
===Brief Introduction===&lt;br /&gt;
 &lt;br /&gt;
* E1940 Improving e-mail notification.&lt;br /&gt;
&lt;br /&gt;
* The forked Git repository for this project can be found here: [https://github.com/dheerajmakam/expertiza.git]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Problem Statement===&lt;br /&gt;
The following tasks were accomplished in this project:&lt;br /&gt;
&lt;br /&gt;
*Issue 1: Send a new-account welcome email to users when they are imported from a CSV file through the assignment page.&lt;br /&gt;
*Issue 2: Do not send email to reviewers about a new submission after the review deadline has passed.&lt;br /&gt;
*Issue 3: Add relevant links to reminder emails.&lt;br /&gt;
&lt;br /&gt;
===Issue 1 - New user email===&lt;br /&gt;
&lt;br /&gt;
'''app/models/assignment_participant.rb'''&lt;br /&gt;
&lt;br /&gt;
* Call the method that sends the welcome mail after the user is imported successfully.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    password = user.reset_password&lt;br /&gt;
    MailerHelper.send_mail_to_user(user, &amp;quot;Your Expertiza account has been created.&amp;quot;, &amp;quot;user_welcome&amp;quot;, password).deliver&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Issue 2 - No submission email to reviewer after deadline===&lt;br /&gt;
*'''Before fixing this issue, we first had to write the logic that sends emails to reviewers on submissions.'''&lt;br /&gt;
&lt;br /&gt;
'''app/controllers/submitted_content_controller.rb'''&lt;br /&gt;
*Added logic to the '''submit_hyperlink''' function to check whether the last review due date has passed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
      # get the current date and time&lt;br /&gt;
      cur_date = Time.now&lt;br /&gt;
      # get the last due date for the review&lt;br /&gt;
      max_due_date = DueDate.where(parent_id: @participant.assignment.id, deadline_type_id: 2).maximum(&amp;quot;due_at&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
      # send mail only if a review deadline exists and it has not passed&lt;br /&gt;
      if max_due_date &amp;amp;&amp;amp; cur_date &amp;lt;= max_due_date&lt;br /&gt;
        email_submission_reviewers(@participant)&lt;br /&gt;
      end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Function to identify the reviewers and send mails.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def email_submission_reviewers(participant)&lt;br /&gt;
    return if participant.reviewers.empty?&lt;br /&gt;
&lt;br /&gt;
    participant.reviewers.each do |reviewer|&lt;br /&gt;
      user = User.find(reviewer.user_id)&lt;br /&gt;
&lt;br /&gt;
      # send only to reviewers who opted in to submission notifications&lt;br /&gt;
      if user.email_on_submission?&lt;br /&gt;
        MailerHelper.submission_mail_to_reviewr(user,&lt;br /&gt;
                                                &amp;quot;New submission available to review.&amp;quot;,&lt;br /&gt;
                                                &amp;quot;update&amp;quot;).deliver&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/helpers/mailer_helper.rb'''&lt;br /&gt;
&lt;br /&gt;
*Helper function to mail reviewers&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def self.submission_mail_to_reviewr(user, subject, mail_partial)&lt;br /&gt;
    Mailer.notify_reviewer_for_new_submission({&lt;br /&gt;
        to: user.email,&lt;br /&gt;
        subject: subject,&lt;br /&gt;
        body: {&lt;br /&gt;
            user: user,&lt;br /&gt;
            message: &amp;quot;&amp;quot;,&lt;br /&gt;
            partial_name: mail_partial&lt;br /&gt;
        }&lt;br /&gt;
    })&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/mailers/mailer.rb'''&lt;br /&gt;
&lt;br /&gt;
*Mailer function to send the mail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def notify_reviewer_for_new_submission(defn)&lt;br /&gt;
    @partial_name = defn[:body][:partial_name]&lt;br /&gt;
    @user = defn[:body][:user]&lt;br /&gt;
    @message = defn[:body][:message]&lt;br /&gt;
    mail(subject: defn[:subject],&lt;br /&gt;
         to: defn[:to])&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''app/views/mailer/notify_reviewer_for_new_submission.erb'''&lt;br /&gt;
&lt;br /&gt;
*Email template for the mail&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;!DOCTYPE html&amp;gt;&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
&amp;lt;head&amp;gt;&lt;br /&gt;
  &amp;lt;meta content='text/html; charset=UTF-8' http-equiv='Content-Type' /&amp;gt;&lt;br /&gt;
&amp;lt;/head&amp;gt;&lt;br /&gt;
&amp;lt;body&amp;gt;&lt;br /&gt;
&amp;lt;%= render :partial =&amp;gt; 'mailer/partials/'+@partial_name+'_html' %&amp;gt;&lt;br /&gt;
&amp;lt;hr&amp;gt;&lt;br /&gt;
&amp;lt;/body&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Issue 3 - Adding relevant links to reminder emails===&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=126076</id>
		<title>CSC/ECE 517 Fall 2019 - E1940. Improving email notification</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=126076"/>
		<updated>2019-10-28T23:00:54Z</updated>

		<summary type="html">&lt;p&gt;Atrived: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=E1940 Improving e-mail notification=&lt;br /&gt;
&lt;br /&gt;
Team members:&lt;br /&gt;
* Adarsh Trivedi (atrived)&lt;br /&gt;
* Dheeraj&lt;br /&gt;
* Prachi&lt;br /&gt;
&lt;br /&gt;
===Brief Introduction===&lt;br /&gt;
 &lt;br /&gt;
* E1940 Improving e-mail notification.&lt;br /&gt;
&lt;br /&gt;
* The forked Git repository for this project can be found here: [https://github.com/dheerajmakam/expertiza.git]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Problem Statement===&lt;br /&gt;
The following tasks were accomplished in this project:&lt;br /&gt;
&lt;br /&gt;
*Issue 1: Send a new-account welcome email to users when they are imported from a CSV file through the assignment page.&lt;br /&gt;
*Issue 2: Do not send email to reviewers about a new submission after the review deadline has passed.&lt;br /&gt;
*Issue 3: Add relevant links to reminder emails.&lt;br /&gt;
&lt;br /&gt;
===Issue 1 - New user email===&lt;br /&gt;
&lt;br /&gt;
'''app/models/assignment_participant.rb'''&lt;br /&gt;
&lt;br /&gt;
* Call the method that sends the welcome mail after the user is imported successfully.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    password = user.reset_password&lt;br /&gt;
    MailerHelper.send_mail_to_user(user, &amp;quot;Your Expertiza account has been created.&amp;quot;, &amp;quot;user_welcome&amp;quot;, password).deliver&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Issue 2 - No submission email to reviewer after deadline===&lt;br /&gt;
*'''Before fixing this issue, we first had to write the logic that sends emails to reviewers on submissions.'''&lt;br /&gt;
&lt;br /&gt;
'''app/controllers/submitted_content_controller.rb'''&lt;br /&gt;
*Added logic to the '''submit_hyperlink''' function to check whether the last review due date has passed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
      # get the current date and time&lt;br /&gt;
      cur_date = Time.now&lt;br /&gt;
      # get the last due date for the review&lt;br /&gt;
      max_due_date = DueDate.where(parent_id: @participant.assignment.id, deadline_type_id: 2).maximum(&amp;quot;due_at&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
      # send mail only if a review deadline exists and it has not passed&lt;br /&gt;
      if max_due_date &amp;amp;&amp;amp; cur_date &amp;lt;= max_due_date&lt;br /&gt;
        email_submission_reviewers(@participant)&lt;br /&gt;
      end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def email_submission_reviewers(participant)&lt;br /&gt;
    return if participant.reviewers.empty?&lt;br /&gt;
&lt;br /&gt;
    participant.reviewers.each do |reviewer|&lt;br /&gt;
      user = User.find(reviewer.user_id)&lt;br /&gt;
&lt;br /&gt;
      # send only to reviewers who opted in to submission notifications&lt;br /&gt;
      if user.email_on_submission?&lt;br /&gt;
        MailerHelper.submission_mail_to_reviewr(user,&lt;br /&gt;
                                                &amp;quot;New submission available to review.&amp;quot;,&lt;br /&gt;
                                                &amp;quot;update&amp;quot;).deliver&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def self.submission_mail_to_reviewr(user, subject, mail_partial)&lt;br /&gt;
    Mailer.notify_reviewer_for_new_submission({&lt;br /&gt;
        to: user.email,&lt;br /&gt;
        subject: subject,&lt;br /&gt;
        body: {&lt;br /&gt;
            user: user,&lt;br /&gt;
            message: &amp;quot;&amp;quot;,&lt;br /&gt;
            partial_name: mail_partial&lt;br /&gt;
        }&lt;br /&gt;
    })&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  def notify_reviewer_for_new_submission(defn)&lt;br /&gt;
    @partial_name = defn[:body][:partial_name]&lt;br /&gt;
    @user = defn[:body][:user]&lt;br /&gt;
    @message = defn[:body][:message]&lt;br /&gt;
    mail(subject: defn[:subject],&lt;br /&gt;
         to: defn[:to])&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;!DOCTYPE html&amp;gt;&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
&amp;lt;head&amp;gt;&lt;br /&gt;
  &amp;lt;meta content='text/html; charset=UTF-8' http-equiv='Content-Type' /&amp;gt;&lt;br /&gt;
&amp;lt;/head&amp;gt;&lt;br /&gt;
&amp;lt;body&amp;gt;&lt;br /&gt;
&amp;lt;%= render :partial =&amp;gt; 'mailer/partials/'+@partial_name+'_html' %&amp;gt;&lt;br /&gt;
&amp;lt;hr&amp;gt;&lt;br /&gt;
&amp;lt;/body&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Issue 3 - Adding relevant links to reminder emails===&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=126039</id>
		<title>CSC/ECE 517 Fall 2019 - E1940. Improving email notification</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=126039"/>
		<updated>2019-10-28T22:25:03Z</updated>

		<summary type="html">&lt;p&gt;Atrived: /* E1940 Improving e-mail notification */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=E1940 Improving e-mail notification=&lt;br /&gt;
&lt;br /&gt;
Team members:&lt;br /&gt;
* Adarsh Trivedi (atrived)&lt;br /&gt;
* Dheeraj&lt;br /&gt;
* Prachi&lt;br /&gt;
&lt;br /&gt;
===Brief Introduction===&lt;br /&gt;
 &lt;br /&gt;
* E1940 Improving e-mail notification.&lt;br /&gt;
&lt;br /&gt;
* The forked Git repository for this project can be found here: [https://github.com/dheerajmakam/expertiza.git]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Problem Statement===&lt;br /&gt;
The following tasks were accomplished in this project:&lt;br /&gt;
&lt;br /&gt;
*Issue 1: Send a new-account welcome email to users when they are imported from a CSV file through the assignment page.&lt;br /&gt;
*Issue 2: Do not send email to reviewers about a new submission after the review deadline has passed.&lt;br /&gt;
*Issue 3: Add relevant links to reminder emails.&lt;br /&gt;
&lt;br /&gt;
===Issue 1 - New user email===&lt;br /&gt;
===Issue 2 - No submission email to reviewer after deadline===&lt;br /&gt;
===Issue 3 - Adding relevant links to reminder emails===&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=126025</id>
		<title>CSC/ECE 517 Fall 2019 - E1940. Improving email notification</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=126025"/>
		<updated>2019-10-28T22:12:40Z</updated>

		<summary type="html">&lt;p&gt;Atrived: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=E1940 Improving e-mail notification=&lt;br /&gt;
&lt;br /&gt;
Team members:&lt;br /&gt;
* Adarsh Trivedi (atrived)&lt;br /&gt;
* Dheeraj&lt;br /&gt;
* Prachi&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=126023</id>
		<title>CSC/ECE 517 Fall 2019 - E1940. Improving email notification</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019_-_E1940._Improving_email_notification&amp;diff=126023"/>
		<updated>2019-10-28T22:11:27Z</updated>

		<summary type="html">&lt;p&gt;Atrived: Created page with &amp;quot;=E1967 Fix glitches in Author Feedback=  Team members : * Adarsh Trivedi (atrived) * Dheeraj * Prachi&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=E1967 Fix glitches in Author Feedback=&lt;br /&gt;
&lt;br /&gt;
Team members:&lt;br /&gt;
* Adarsh Trivedi (atrived)&lt;br /&gt;
* Dheeraj&lt;br /&gt;
* Prachi&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019&amp;diff=125970</id>
		<title>CSC/ECE 517 Fall 2019</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2019&amp;diff=125970"/>
		<updated>2019-10-28T20:28:24Z</updated>

		<summary type="html">&lt;p&gt;Atrived: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* [[CSC/ECE 517 Fall 2019 - Project E1947. Refactor quiz_questionnaire_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - Project E1965. Review report should link to the usual view for reviews]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - E1972. OSS project J. Skellington: Accessing Assignment Rubrics]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - E1961. Email notification to reviewers and instructors]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - E1971. OSS project Finklestein: Instructors &amp;amp; Institutions]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - E1953. Tagging report for student]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - E1955.Write  unit tests for student_task.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - E1954. Auto-generate submission directory names based on assignment names]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - E1958. Two issues related to assignment management]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - E1948. Refactor review_mapping_helper.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - E1959. Intelligent copying of assignments without topics]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - E1968. Fixes for adding members to teams]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - E1969. Fixes for reviews not being available]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - E1951. Remove multiple topics at a time]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - E1957. Time travel Not Allowed..!!! Restrict TAs’ ability to change their own grade + limit file-size upload]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - E1963. Changing assignment participant role]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - E1941. Issues related to topic deadlines]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - E1966. Tabbed_reviews partial file refactor for displaying the alternate view of reviews]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - E1962. Email notification upon account creation]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - E1967. Fix glitches in author feedback]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - E1960. Create new late policy successfully and fixing &amp;quot;Back&amp;quot; link]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - E1939. OSS Project Juniper: Bookmark enhancements]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - M1950. Support Asynchronous Web Assembly Compilation]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - E1938. OSS project Duke Blue: Fix import glitches]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - M1951. Implement missing OffscreenCanvas APIs]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2019 - E1940. Improving email notification]]&lt;/div&gt;</summary>
		<author><name>Atrived</name></author>
	</entry>
</feed>