<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.expertiza.ncsu.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jkirsch</id>
	<title>Expertiza_Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.expertiza.ncsu.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jkirsch"/>
	<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=Special:Contributions/Jkirsch"/>
	<updated>2026-05-17T01:47:16Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.41.0</generator>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=136820</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=136820"/>
		<updated>2020-11-15T17:38:25Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Relevant Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
'''Back-end'''&lt;br /&gt;
&lt;br /&gt;
*app/controllers/grades_controller.rb&lt;br /&gt;
*app/helpers/grades_helper.rb&lt;br /&gt;
*app/models/assignment_participant.rb&lt;br /&gt;
*app/models/author_feedback_questionnaire.rb&lt;br /&gt;
*app/models/response_map.rb &lt;br /&gt;
*app/models/review_questionnaire.rb&lt;br /&gt;
*app/models/self_review_response_map.rb&lt;br /&gt;
*app/models/teammate_review_questionnaire.rb&lt;br /&gt;
*app/models/vm_question_response.rb&lt;br /&gt;
&lt;br /&gt;
'''Front-end'''&lt;br /&gt;
&lt;br /&gt;
*app/views/assignments/edit/_review_strategy.html.erb&lt;br /&gt;
*app/views/grades/_participant.html.erb&lt;br /&gt;
*app/views/grades/_participant_charts.html.erb &lt;br /&gt;
*app/views/grades/_participant_title.html.erb&lt;br /&gt;
*app/views/grades/view_team.html.erb&lt;br /&gt;
&lt;br /&gt;
'''Testing'''&lt;br /&gt;
&lt;br /&gt;
*spec/models/assignment_particpant_spec.rb&lt;br /&gt;
*spec/controllers/grades_controller_spec.rb&lt;br /&gt;
&lt;br /&gt;
=== Use Case for Self-Assessment ===&lt;br /&gt;
&lt;br /&gt;
The following diagram illustrates how the self-assessment feature should work between student and instructor within Expertiza. In summary, an instructor can create an assignment and enable self-review. A student can then submit to the assignment created by the instructor and provide a self-review. Only then can the student view all of their review scores.&lt;br /&gt;
&lt;br /&gt;
[[File:Use case diagram e2078.png|750px]]&lt;br /&gt;
&lt;br /&gt;
== Design Plan for Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
==== Design Plan ====&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer-review scores for each review question and an average of those peer-reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. As currently implemented, the avg column takes an average of all the review scores. These scores (peer-review average and self-review) will be used to determine the overall composite score for the team's reviews. Furthermore, the average composite score is displayed on the page under the average peer-review score, labeled &amp;quot;Final Average Peer Review Score&amp;quot;. How we plan to derive a composite score is explained in detail in the following section [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe v3.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is additional columns for the self-review average and for the composite (final) score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe v3.png|1150px]]&lt;br /&gt;
&lt;br /&gt;
''Note: The diagrams above are wireframes. The values displayed are not to be taken literally; they are for design purposes only.''&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the author, and the result of the formula should be displayed conspicuously on the score view page.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from, 1) the peer reviews and 2) how closely self reviews match the peer reviews, uses a type of additive scoring rule, which computes a weighted average between team score (peer reviews) and student rating (self review). More specifically, it uses a type of mixed additive-multiplicative scoring rule, which multiplies student score (self review) by a function of the team score (peer reviews), and adds its weighted version to the weighted peer review score. This is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0%-100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows:&lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where:&lt;br /&gt;
avg_peer_review_score is the grade produced by the mechanism that already exists in Expertiza for assigning a grade from peer-review scores.&lt;br /&gt;
&lt;br /&gt;
and where:&lt;br /&gt;
w - weight - (0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the original grade determination (the peer-review scores); conversely, (1-w) is the proportion of the final grade determined by the closeness of the self review to the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* The average peer review score is 4/5, the self review score is 5/5.&lt;br /&gt;
* The instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 ('''80%''') is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = '''79%'''.&lt;br /&gt;
If the instructor chose w to equal 0.85 (instead of 0.95), the grade is '''77%''' (instead of 79%) because deviation from peer reviews is a larger weighted value of the final grade.&lt;br /&gt;
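The basic formula and the example above can be sketched as a short Ruby function (a minimal standalone sketch; the function name is ours, not Expertiza's):&lt;br /&gt;

```ruby
# Basic mixed additive-multiplicative ("assessment by adjustment") formula.
# Scores are fractions in [0, 1]; w is the weight on the raw peer-review grade.
def composite_grade(avg_peer_review_score, self_review_score, w)
  deviation = (avg_peer_review_score - self_review_score).abs / avg_peer_review_score
  self_term = avg_peer_review_score * (1 - deviation)  # SELF in the text
  w * avg_peer_review_score + (1 - w) * self_term
end

# Example from the text: peers average 4/5, self-review 5/5.
puts composite_grade(0.8, 1.0, 0.95).round(4)  # 0.79 -> 79%
puts composite_grade(0.8, 1.0, 0.85).round(4)  # 0.77 -> 77%
```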
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula. It is basic in that deviations of the self-review scores from the peer-review scores can only decrease the final grade, ''and'' a deviation will ''always'' decrease the final grade. We propose another parameter, l - leniency, as a second way (in addition to w - weight) for the instructor to modularly determine the final grade for an assignment. Leniency sets a threshold (measured as percentage deviation from the average peer review): the final grade is adjusted for a self review's deviation from the peer reviews only when the deviation reaches this threshold. If the deviation falls below the threshold, no penalty is subtracted from the peer-review grade; moreover, when the self-review score is that close to the peer-review scores, the instructor can instead choose to add points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply picks l - leniency - as a percentage (similar to the functionality of w).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To recap: w should be chosen based on the instructor's desired percentage (w) ''of the final grade'' to be determined from peer reviews and, conversely, the desired percentage (1-w) ''of the final grade'' to be determined by the extent to which self reviews deviate from peer reviews. In addition, l - leniency - should be chosen based on the instructor's desired percentage ''of the deviation of self review from peer review'' below which the deviation results in no grade deduction (or even a grade increase, if the instructor wants to reward individuals with a small deviation).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for the case where an instructor wishes not to subtract from the final grade when the deviation is sufficiently small. Notice that the leniency condition, the instructor's desired percentage ''of the deviation of self review from peer review'', is naturally part of the grading formula in SELF:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = avg_peer_review_score&lt;br /&gt;
   else&lt;br /&gt;
      run formula&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition, instead of assigning a final grade equal to avg_peer_review_score when the leniency condition is met, the grade can be adjusted upward if the instructor wishes, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula for determining the final grade would then add the small deviation to the final grade rather than subtracting it (in SELF, 1 - ... is changed to 1 + ...):&lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this case, the pseudo-code is:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
   else&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(avg_peer_review_score * (1 '''-''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer review score is 4/5, self review score is 5/5, w = 0.95), the self review score (5/5) differs by 25% of the peer review score (4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = 1/4 = 25%. Based on l - leniency, the instructor can decide:&lt;br /&gt;
&lt;br /&gt;
:1. if a 25% deviation is sufficiently large to warrant penalizing the final grade by (1-w)*(SELF) (so that the final grade is '''79%''', instead of 80%).&lt;br /&gt;
&lt;br /&gt;
:2. if a 25% deviation is sufficiently small to warrant keeping the final grade as grade = avg_peer_review_score, with no penalty for the deviation (so that the grade is '''80%''')&lt;br /&gt;
&lt;br /&gt;
:3. if a 25% deviation is sufficiently small to warrant increasing the final grade by (1-w)*(SELF), where the SELF formula contains a 1 + ..., instead of a 1 - ... (so that the grade is '''81%''')&lt;br /&gt;
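The three options above can be sketched together in Ruby (a minimal sketch; the function name and the bonus flag are our own names, not Expertiza's):&lt;br /&gt;

```ruby
# Vossen-style formula with leniency l: below the threshold the deviation is
# either ignored or added as a bonus; at or above it, it is subtracted.
def composite_grade_with_leniency(avg_peer, self_score, w, l, bonus: false)
  deviation = (avg_peer - self_score).abs / avg_peer
  if deviation <= l
    return avg_peer unless bonus                         # option 2: no penalty
    w * avg_peer + (1 - w) * avg_peer * (1 + deviation)  # option 3: bonus
  else
    w * avg_peer + (1 - w) * avg_peer * (1 - deviation)  # option 1: penalty
  end
end

# Example from the text: peers 4/5, self 5/5, w = 0.95, deviation = 25%.
puts composite_grade_with_leniency(0.8, 1.0, 0.95, 0.20).round(4)               # 0.79 (penalty)
puts composite_grade_with_leniency(0.8, 1.0, 0.95, 0.25).round(4)               # 0.8  (no penalty)
puts composite_grade_with_leniency(0.8, 1.0, 0.95, 0.25, bonus: true).round(4)  # 0.81 (bonus)
```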
&lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We will remove most of the code from the previous implementation since the formula/method used is unsatisfactory. '''With the new grading formula, the grades_controller can assign a final grade by following these steps:'''&lt;br /&gt;
&lt;br /&gt;
:1. We can obtain the peer-review ratings and the score/grade derived from them by calling scores(), which then calls compute_assignment_score(), both from the assignment_participant.rb model. In order to compute the assignment score, compute_assignment_score() calls another method named get_assessments_for(), which is located in the review_questionnaire.rb model.&lt;br /&gt;
&lt;br /&gt;
:2. The get_assessments_for() method will call the reviews() method, also located in assignment_participant.rb.&lt;br /&gt;
&lt;br /&gt;
:3. Finally, the reviews() method will get the scores by simply calling the get_assessments_for() method located in the response_map.rb model. &lt;br /&gt;
&lt;br /&gt;
:4. Once the scores have been retrieved by using the various model methods, the controller can use them to calculate a final grade with the formula. This grade is then passed to the view.&lt;br /&gt;
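As a rough illustration only, the delegation in steps 1-3 can be sketched with stand-in classes (the real Expertiza methods take more parameters, e.g. a questionnaire and a review round, and the bodies here are simplified):&lt;br /&gt;

```ruby
# Illustrative stand-ins for the Expertiza models; real signatures differ.
Team = Struct.new(:responses)

class ResponseMap
  # Step 3: the response map fetches the stored review responses.
  def self.assessments_for(team)
    team.responses
  end
end

class AssignmentParticipant
  def initialize(team)
    @team = team
  end

  # Step 2: reviews() delegates to the response map.
  def reviews
    ResponseMap.assessments_for(@team)
  end

  # Step 1: scores() derives a peer-review grade from the fetched reviews.
  def scores
    rs = reviews
    rs.sum { |r| r[:score] } / rs.size.to_f
  end
end

participant = AssignmentParticipant.new(Team.new([{ score: 4.0 }, { score: 5.0 }]))
puts participant.scores  # 4.5
```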
&lt;br /&gt;
&lt;br /&gt;
Below is a flow diagram for grades_controller.rb, which implements the grading formula (as mentioned in the first step) and presents the grade in the view (top of the diagram). Note: The self-review scores are obtained by passing the true parameter in all the method calls (as shown in the diagram), whereas the peer-review scores are retrieved similarly but without this parameter.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:E1926_code_flow.png]]&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
Being able to self-review before seeing peer reviews lets the author form an unbiased opinion of the quality of the work they are submitting. When their judgment is not influenced by others' feedback, they get a clearer view of the strengths and weaknesses of their assignment. A self-review made before reading peer reviews can also show the user whether their approach to the solution is on track compared with their peers'.&lt;br /&gt;
&lt;br /&gt;
In the current implementation of self review, the user can see the peer reviews before they have submitted their own self review. This happens because, when the user goes to see their scores, the page does not check that a self review has been submitted. To fix this, we will add a boolean parameter to self review and pass it to the viewing pages where it is used. When a user is at the student tasks view, the &amp;quot;Your scores&amp;quot; link will be disabled if the user has not filled out their self review. If the user has filled out their self review, then he/she will be redirected to the [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Display_Self-Review_Scores_w.2F_Peer-Reviews Display Self-Review Scores with Peer-Reviews] page.&lt;br /&gt;
&lt;br /&gt;
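The gating check can be sketched as follows (a hedged sketch: `self_review_submitted?`, `scores_link_enabled?`, and the participant fields are our hypothetical names, not the actual Expertiza API):&lt;br /&gt;

```ruby
# Hypothetical sketch of the "must self-review first" gate.
# `participant` stands in for the logged-in user's assignment participant.
def self_review_submitted?(participant)
  # In Expertiza terms: does a submitted self-review response exist?
  participant[:self_review_responses].any? { |r| r[:submitted] }
end

# The "Your scores" link is enabled only once the self-review is submitted.
def scores_link_enabled?(participant)
  self_review_submitted?(participant)
end

puts scores_link_enabled?({ self_review_responses: [] })                      # false
puts scores_link_enabled?({ self_review_responses: [{ submitted: true }] })   # true
```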
Below is a control flow diagram for how a student will be able to view their peer and self review score.&lt;br /&gt;
&lt;br /&gt;
[[File:Control_Flow_SelfReview.png]]&lt;br /&gt;
&lt;br /&gt;
== Implementation of Design Plan  ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews ===&lt;br /&gt;
&lt;br /&gt;
The following code and UI screenshots illustrate the implementation of displaying the self-review scores alongside the peer-reviews. This also includes displaying the final score that aggregates the self-review score utilizing a chosen formula. The final score derivation process is described in the [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score] task.&lt;br /&gt;
&lt;br /&gt;
'''View Team Review Scores (view_team)'''&lt;br /&gt;
&lt;br /&gt;
The view_team.html.erb file is responsible for the display of the peer-review scores heat map. In this implementation, we added a display for the Final Average Peer Review Score, which is the score that takes the average self-review score into consideration based on a set formula. Additionally, there is now a self-review score column (highlighted in cyan) that displays the self-review score for each criterion alongside the peer-reviews given.&lt;br /&gt;
&lt;br /&gt;
[[File:selfreviews16.png|1200px]]&lt;br /&gt;
[[File:selfreviews17.png|1200px]]&lt;br /&gt;
[[File:selfreviews18.png|1200px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following screenshot shows the final result of the code implementation.&lt;br /&gt;
&lt;br /&gt;
[[File:Selfreview final screenshot2.png|1150px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Alternate View (view_my_scores)'''&lt;br /&gt;
&lt;br /&gt;
The _participant_*.html.erb files are responsible for the UI display of viewing assignment scores in the Alternate View. The following implementation shows the addition of the new columns for self-review score and the final composite score (along with a doughnut chart for the final score).&lt;br /&gt;
&lt;br /&gt;
[[File:selfreviews14.png|1200px]]&lt;br /&gt;
[[File:selfreviews15.png|1200px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following screenshot shows the final result of the code implementation.&lt;br /&gt;
&lt;br /&gt;
[[File:Selfreview final screenshot1.png|1150px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Response Map for Reviews'''&lt;br /&gt;
&lt;br /&gt;
The following code implementations illustrate the additions to the response map. This is responsible for the functionality of conducting a self-review and gathering the results per assignment.&lt;br /&gt;
&lt;br /&gt;
[[File:selfreviews9.png|1200px]]&lt;br /&gt;
[[File:selfreviews10.png|1200px]]&lt;br /&gt;
[[File:selfreviews11.png|1200px]]&lt;br /&gt;
[[File:selfreviews12.png|1200px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
In the following screenshots, we have implemented code in grades_controller.rb that calls helper functions in grades_helper.rb to compute the final score from the peer- and self-reviews. The formula applied depends on which formula the instructor selects when creating the assignment.&lt;br /&gt;
[[File:selfreviews1.png|1200px]]&lt;br /&gt;
[[File:selfreviews2.png|1200px]]&lt;br /&gt;
[[File:selfreviews3.png|1200px]]&lt;br /&gt;
[[File:selfreviews4.png|1200px]]&lt;br /&gt;
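The formula selection described above could be dispatched roughly as follows (a hedged sketch; the method and symbol names are our own, not the actual grades_helper.rb API):&lt;br /&gt;

```ruby
# Hypothetical dispatch on the instructor-selected formula.
def final_score(formula, avg_peer, self_score, w: 0.95, l: 0.25)
  deviation = (avg_peer - self_score).abs / avg_peer
  case formula
  when :peer_only        # ignore the self-review entirely
    avg_peer
  when :weighted         # basic weighted-average penalty
    w * avg_peer + (1 - w) * avg_peer * (1 - deviation)
  when :vossen_leniency  # penalize only beyond the leniency threshold
    sign = deviation <= l ? 1 : -1
    w * avg_peer + (1 - w) * avg_peer * (1 + sign * deviation)
  else
    raise ArgumentError, "unknown formula: #{formula}"
  end
end

puts final_score(:peer_only, 0.8, 1.0)                  # 0.8
puts final_score(:weighted, 0.8, 1.0).round(4)          # 0.79
puts final_score(:vossen_leniency, 0.8, 1.0).round(4)   # 0.81
```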
&lt;br /&gt;
'''Instructor View for Reviews'''&lt;br /&gt;
&lt;br /&gt;
The following screenshot shows the code that we implemented to allow an instructor to choose, in the UI, which formula to use to account for self-reviews in the assignment review grading.&lt;br /&gt;
&lt;br /&gt;
[[File:selfreviews13.png|1200px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The screenshot of the interface below is the resulting UI from the code implementation above.&lt;br /&gt;
&lt;br /&gt;
[[File:selfreviews19.png|600px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
The following screenshots show the files in which we implemented boolean checks so that a student will not be able to see their scores unless they have filled out their self reviews.&lt;br /&gt;
&lt;br /&gt;
[[File:selfreviews5.png|1200px]]&lt;br /&gt;
[[File:selfreviews6.png|1200px]]&lt;br /&gt;
[[File:selfreviews7.png|1200px]]&lt;br /&gt;
[[File:selfreviews8.png|1200px]]&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
Our project will utilize various testing techniques. These methods of testing involve manual testing (black box testing) and RSpec testing (white box testing). &lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
'''Credentials'''&lt;br /&gt;
&lt;br /&gt;
*username: ''instructor6'', password: ''password''&lt;br /&gt;
*username: ''student3000'', password: ''password''&lt;br /&gt;
*username: ''student4000'', password: ''password''&lt;br /&gt;
&lt;br /&gt;
The steps outlined for manual testing will become clearer upon implementation, but the proposed plan is the following:&lt;br /&gt;
&lt;br /&gt;
==== Prerequisite Steps ====&lt;br /&gt;
&lt;br /&gt;
:1. Log in to the development instance of Expertiza as ''instructor6''&lt;br /&gt;
:2. Create an assignment that allows for self-reviews. To allow for self-reviews, check the box &amp;quot;Allow Self-Reviews&amp;quot; in the Review Strategy tab.&lt;br /&gt;
::2.1. Make sure the submission deadline is after the current date and time&lt;br /&gt;
::2.2. Likewise, the review deadline should then be greater than the submission deadline&lt;br /&gt;
:3. Add ''student3000'' and ''student4000'' to the newly created assignment (i.e. &amp;quot;Test Assignment&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
==== Must Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
This test will assure that the logged in user cannot view their peer-reviews for a given assignment unless they have performed a self-review.&lt;br /&gt;
&lt;br /&gt;
:1. Sign in as ''student3000''&lt;br /&gt;
::1.1. Submit any file or link to the new Test Assignment&lt;br /&gt;
:2. Log back in as the ''instructor6''&lt;br /&gt;
::2.1. Edit the Test Assignment to change the submission date to be in the past, enabling peer reviews&lt;br /&gt;
:3. Log in as ''student3000''&lt;br /&gt;
:4. Attempt to view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot; within the Test Assignment. This button should be disabled&lt;br /&gt;
:5. Log back in as the ''instructor6''&lt;br /&gt;
::5.1. Edit the Test Assignment to change back the submission date to be in the future&lt;br /&gt;
:6. Log in as ''student3000''&lt;br /&gt;
::6.1. Perform a self-review&lt;br /&gt;
:7. Log back in as the ''instructor6''&lt;br /&gt;
::7.1. Edit the Test Assignment to change the submission date to be in the past, enabling peer reviews&lt;br /&gt;
:8. Log in as ''student3000''&lt;br /&gt;
::8.1. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. This button should now be enabled&lt;br /&gt;
&lt;br /&gt;
==== Viewing Self Review Score Juxtaposed with Peer Review Scores ====&lt;br /&gt;
&lt;br /&gt;
This test confirms that the student's self-review scores are displayed with peer-review scores. It additionally confirms that self-review scores are considered in the review grading through the calculation of a composite score.&lt;br /&gt;
&lt;br /&gt;
:1. Sign in as ''student3000''&lt;br /&gt;
::1.1. Submit any file or link to the new Test Assignment&lt;br /&gt;
::1.2. Perform a self-review&lt;br /&gt;
:2. Repeat step 1 as ''student4000''&lt;br /&gt;
:3. Log back in as the ''instructor6''&lt;br /&gt;
::3.1. Edit the Test Assignment to change the submission date to be in the past, enabling peer reviews&lt;br /&gt;
:4. Log back in as ''student4000''&lt;br /&gt;
::4.1. Perform a peer-review on the submission from ''student3000''&lt;br /&gt;
:5. Log in as ''student3000''&lt;br /&gt;
:6. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;.&lt;br /&gt;
::6.1. Assure that there is a column for self-review scores&lt;br /&gt;
::6.2. Confirm there is a composite score calculation underneath the average peer review score.&lt;br /&gt;
:7. Go back to the assignment view. Click &amp;quot;Alternate View&amp;quot;&lt;br /&gt;
::7.1. Confirm that there is a new column illustrating the self-review average&lt;br /&gt;
::7.2. Confirm that there is a column in the grades table displaying composite score (Final score)&lt;br /&gt;
::7.3. Check that there is a doughnut chart displaying the composite score (Final score)&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
We plan to implement new RSpec tests to verify our implementations of the composite score calculation and the requirement to self-review first. Once written, we will be able to go more in-depth on the details of our testing. &lt;br /&gt;
&lt;br /&gt;
==== Test Composite Score Derivation ====&lt;br /&gt;
The following screenshot is an RSpec test that we added in the grades_helper_spec.rb file. It checks that the Vossen formula we implemented outputs the correct score when given an average peer-review score, an average self-review score, a weight, and a leniency.&lt;br /&gt;
[[File:selfreviews20.png|1200px]]&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': https://github.com/expertiza/expertiza/pull/1831&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': https://www.youtube.com/watch?v=BYnhUNOTejs&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;br /&gt;
&lt;br /&gt;
[https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades E1984 wiki]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=How_Self_Reviews_are_combined_with_Peer_Reviews_for_Final_Grade&amp;diff=136735</id>
		<title>How Self Reviews are combined with Peer Reviews for Final Grade</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=How_Self_Reviews_are_combined_with_Peer_Reviews_for_Final_Grade&amp;diff=136735"/>
		<updated>2020-11-12T22:35:21Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Information on Vossen Formula:&lt;br /&gt;
   grade = w * (avg_peer_review_score) + (1 - w) * (avg_peer_review_score * (1 +/- ((avg_peer_review_score - self_review_score).abs() / avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
More fully, as in the code:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
   else&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''-''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The formula that determines a final grade from, 1) the peer reviews and 2) how closely self reviews match the peer reviews, uses a type of additive scoring rule, which computes a weighted average between team score (peer reviews) and student rating (self review). More specifically, it uses a type of mixed additive-multiplicative scoring rule, which multiplies student score (self review) by a function of the team score (peer reviews), and adds its weighted version to the weighted peer review score. The function is the deviation (in percentage) of the self-review score from the average peer review score. This is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0%-100%).&lt;br /&gt;
&lt;br /&gt;
w should be chosen based on the instructor's desired percentage (w) ''of the final grade'' to be determined from peer reviews and, conversely, the instructor's desired percentage (1-w) ''of the final grade'' to be determined by the extent to which self reviews deviate from peer reviews. In addition, l - leniency - should be chosen based on the instructor's desired percentage ''of the deviation of self review from peer review'' below which the deviation results in no grade deduction (or even a grade increase, if the instructor wants to reward individuals with a small deviation). l cannot be chosen from the UI; it is hard-coded into Expertiza because it depends on the assignment grading scale. The Expertiza review scale is out of 5, so l = 0.25, because the minimum deviation is 1 / (max score - 1) (and the max score is 5).&lt;br /&gt;
&lt;br /&gt;
The parameter l - leniency - sets a threshold (measured as percentage deviation from the average peer review): the final grade is adjusted for a self review's deviation from the peer reviews only when the deviation reaches this threshold. If the difference does not meet the threshold, no penalty is subtracted from the peer-review grade. In addition, if the difference does not meet the threshold (the self-review score is sufficiently close to the peer-review scores), the instructor can choose to add points to the final grade based on the magnitude of the difference.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using an example where the average peer review score is 4/5, the self review score is 5/5, and w = 0.95, the self review score (5/5) differs by 25% of the peer review score (4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = 1/4 = 25%. Based on l (leniency) = 0.25 (the required minimum deviation for a grading scale out of 5):&lt;br /&gt;
&lt;br /&gt;
A 25% deviation is sufficiently small (it is the minimum deviation) to warrant increasing the final grade by (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
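This bonus case can be checked numerically with a short Ruby snippet (an illustrative sketch with assumed variable names, not Expertiza code):&lt;br /&gt;

```ruby
# Sanity check for the example above: avg peer score 4/5, self score 5/5,
# w = 0.95, l = 0.25. The 25 percent deviation meets the leniency threshold,
# so the deviation term is added rather than subtracted.
avg_peer   = 4.0 / 5
self_score = 5.0 / 5
w = 0.95
deviation = (avg_peer - self_score).abs / avg_peer            # 0.25
grade = w * avg_peer + (1 - w) * (avg_peer * (1 + deviation))
puts (grade * 100).round                                      # 81
```

The final grade rises from the 80% peer average to 81%, matching the 'if' branch of the formula.&lt;br /&gt;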
If the deviation were larger (self-review score of 2/5, 1/5, or 0/5 while the average peer-review score was still 4/5), the formula would simply apply the 'else' statement instead of the 'if' statement, since the deviation would be greater than 25%.&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=How_Self_Reviews_are_combined_with_Peer_Reviews_for_Final_Grade&amp;diff=136734</id>
		<title>How Self Reviews are combined with Peer Reviews for Final Grade</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=How_Self_Reviews_are_combined_with_Peer_Reviews_for_Final_Grade&amp;diff=136734"/>
		<updated>2020-11-12T22:33:37Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Information on Vossen Formula:&lt;br /&gt;
   grade = w * (avg_peer_review_score) + (1 - w) * (avg_peer_review_score * (1 +/- ((avg_peer_review_score - self_review_score).abs() / avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
More fully, as in the code:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
   else&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''-''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
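The if/else above can be sketched as a small Ruby method (illustrative only; the method name and the default l = 0.25 for the 5-point scale are assumptions, not Expertiza's actual code):&lt;br /&gt;

```ruby
# Illustrative sketch of the rule above (not the Expertiza implementation).
# Assumes avg_peer is positive and both scores share the same 0..1 scale.
def composite_grade(avg_peer, self_score, w, leniency = 0.25)
  deviation = (avg_peer - self_score).abs / avg_peer
  # A deviation within the leniency threshold is added (a small bonus);
  # a larger deviation is subtracted (a penalty).
  sign = deviation > leniency ? -1.0 : 1.0
  w * avg_peer + (1 - w) * (avg_peer * (1 + sign * deviation))
end

puts (composite_grade(0.8, 1.0, 0.95) * 100).round   # 81 (small deviation)
puts (composite_grade(0.8, 0.4, 0.95) * 100).round   # 78 (large deviation)
```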
&lt;br /&gt;
&lt;br /&gt;
The formula that determines a final grade from 1) the peer reviews and 2) how closely self reviews match the peer reviews uses a type of additive scoring rule, which computes a weighted average between the team score (peer reviews) and the student rating (self review). More specifically, it uses a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews) and adds its weighted version to the weighted peer review score. The function is the deviation (as a percentage) of the self-review score from the average peer review score. This is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0%-100%).&lt;br /&gt;
&lt;br /&gt;
w should be chosen based on the instructor's desired percentage (w) ''of the final grade'' to be determined from peer reviews and, conversely, the instructor's desired percentage (1-w) ''of the final grade'' to be determined by the extent to which self reviews deviate from peer reviews. In addition, l (leniency) should be chosen based on the instructor's desired percentage ''of the deviation of self review from peer review'' that results in no grade deduction when the deviation is sufficiently small (or even a grade increase, if the instructor wants to raise the score of individuals with a small deviation). l cannot be chosen from the UI; it is hard-coded into Expertiza because it depends on the assignment grading scale. The Expertiza review scale is out of 5, so l = 0.25 because the minimum deviation is 1 / (max score - 1) (and the max score is 5).&lt;br /&gt;
&lt;br /&gt;
The parameter l (leniency) determines a threshold: the final grade adjusts for a self review's deviation from the peer reviews only when the deviation reaches this threshold (measured as a percentage deviation from the average peer review). If the difference does not meet the threshold, no penalty is subtracted from the peer review score. In addition, if the difference does not meet the threshold (the self review score is sufficiently close to the peer review scores), the instructor can choose to add points to the final grade based on the magnitude of the difference.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using an example where the average peer review score is 4/5, the self review score is 5/5, and w = 0.95, the self review score (5/5) differs by 25% of the peer review score (4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = 1/4 = 25%. Based on l (leniency) = 0.25 (the required minimum deviation for a grading scale out of 5):&lt;br /&gt;
&lt;br /&gt;
A 25% deviation is sufficiently small (it is the minimum deviation) to warrant increasing the final grade by (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
If the deviation were larger (self-review score of 2/5, 1/5, or 0/5 while the average peer-review score was still 4/5), the formula would simply apply the 'else' statement instead of the 'if' statement.&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=How_Self_Reviews_are_combined_with_Peer_Reviews_for_Final_Grade&amp;diff=136657</id>
		<title>How Self Reviews are combined with Peer Reviews for Final Grade</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=How_Self_Reviews_are_combined_with_Peer_Reviews_for_Final_Grade&amp;diff=136657"/>
		<updated>2020-11-05T21:48:34Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Edit me later&lt;br /&gt;
&lt;br /&gt;
Vossen Formula is: grade = w * (avg_peer_review_score) + (1 - w) * (avg_peer_review_score * (1 +/- ((avg_peer_review_score - self_review_score).abs() / avg_peer_review_score)))&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=How_Self_Reviews_are_combined_with_Peer_Reviews_for_Final_Grade&amp;diff=136653</id>
		<title>How Self Reviews are combined with Peer Reviews for Final Grade</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=How_Self_Reviews_are_combined_with_Peer_Reviews_for_Final_Grade&amp;diff=136653"/>
		<updated>2020-11-05T21:02:08Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Edit me later&lt;br /&gt;
&lt;br /&gt;
Formula 1 is: grade = w * (avg_peer_review_score) + (1 - w) * (avg_peer_review_score * (1 +/- ((avg_peer_review_score - self_review_score).abs() / avg_peer_review_score)))&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=How_Self_Reviews_are_combined_with_Peer_Reviews_for_Final_Grade&amp;diff=136652</id>
		<title>How Self Reviews are combined with Peer Reviews for Final Grade</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=How_Self_Reviews_are_combined_with_Peer_Reviews_for_Final_Grade&amp;diff=136652"/>
		<updated>2020-11-05T21:01:04Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: Created page with &amp;quot;Edit me later&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Edit me later&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=Grading&amp;diff=136651</id>
		<title>Grading</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=Grading&amp;diff=136651"/>
		<updated>2020-11-05T21:00:51Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[How Review Grades are Calculated]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[How Self Reviews are combined with Peer Reviews for Final Grade]]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=Grading&amp;diff=136650</id>
		<title>Grading</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=Grading&amp;diff=136650"/>
		<updated>2020-11-05T21:00:33Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[How Review Grades are Calculated]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[How Self Reviews are combined with Peer Reviews for Final Grade]]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=136286</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=136286"/>
		<updated>2020-10-27T20:16:33Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Implementation Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
==== Design Plan ====&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer review scores for each review question and an average of those peer-reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer review score. How we plan to derive a composite score is explained in detail in the following section [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that the authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous place on the score view.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from 1) the peer reviews and 2) how closely self reviews match the peer reviews uses a type of additive scoring rule, which computes a weighted average between the team score (peer reviews) and the student rating (self review). More specifically, it uses a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews) and adds its weighted version to the weighted peer review score. This is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0%-100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows: &lt;br /&gt;
   function(avg_peer_rev_score, self_rev_score, w)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where: &lt;br /&gt;
avg_peer_review_score is the grade already produced by Expertiza's existing mechanism for assigning a grade from peer review scores.&lt;br /&gt;
&lt;br /&gt;
and where:&lt;br /&gt;
w - weight - (0 &amp;lt;= w &amp;lt;= 1) is the inverse proportion of how much of the final grade is determined by the closeness of the self review to the average of the peer reviews (w is the proportion of the grade to be determined by the original grade determination: the peer review scores).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* The average peer review score is 4/5, the self review score is 5/5.&lt;br /&gt;
* The instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 ('''80%'''), is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = '''79%'''.&lt;br /&gt;
If the instructor chose w to equal 0.85 (instead of 0.95), the grade is '''77%''' (instead of 79%) because deviation from peer reviews is a larger weighted value of the final grade.&lt;br /&gt;
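The basic formula and this example can be checked with a short Ruby sketch (the method name is assumed for illustration; this is not the Expertiza implementation):&lt;br /&gt;

```ruby
# Basic (no-leniency) version of the formula: a deviation of the self review
# from the peer reviews always subtracts from the grade.
def basic_composite(avg_peer, self_score, w)
  deviation = (avg_peer - self_score).abs / avg_peer
  w * avg_peer + (1 - w) * (avg_peer * (1 - deviation))
end

# Worked example: avg peer score 4/5, self score 5/5.
puts (basic_composite(0.8, 1.0, 0.95) * 100).round   # 79 with w = 0.95
puts (basic_composite(0.8, 1.0, 0.85) * 100).round   # 77 with w = 0.85
```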
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula. It is basic in that it only allows deviations of the self review scores from the peer review scores to result in a decrease in the final grade ''and'' a deviation will ''always'' decrease the final grade. We propose another parameter for the formula, l (leniency), as another way (in addition to w, the weight) for the instructor to modularly determine the final grade for an assignment. The parameter l determines a threshold: the final grade adjusts for a self review's deviation from the peer reviews only when the deviation reaches this threshold (measured as a percentage deviation from the average peer review). If the difference does not meet the threshold, no penalty is subtracted from the peer review score. In addition, if the difference does not meet the threshold (the self review score is sufficiently close to the peer review scores), the instructor can choose to add points to the final grade based on the magnitude of the difference. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply needs to pick l (leniency) as a percentage (similar to the functionality of w). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To recap: w should be chosen based on the instructor's desired percentage (w) ''of the final grade'' to be determined from peer reviews and, conversely, the instructor's desired percentage (1-w) ''of the final grade'' to be determined by the extent to which self reviews deviate from peer reviews. In addition, l (leniency) should be chosen based on the instructor's desired percentage ''of the deviation of self review from peer review'' that results in no grade deduction when the deviation is sufficiently small (or even a grade increase, if the instructor wants to raise the score of individuals with a small deviation).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for the case where an instructor wishes not to subtract from the final grade when the deviation is sufficiently small. Notice that the leniency condition, the instructor's desired percentage ''of the deviation of self review from peer review'', is naturally part of the grading formula in SELF:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = avg_peer_review_score&lt;br /&gt;
   else&lt;br /&gt;
      run formula&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition, instead of assigning a final grade equal to the avg_peer_review_score when the leniency condition is met, the grade can be adjusted (increased) if the instructor wishes to do so, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula for determining the final grade would thus add the small deviation to the final grade rather than subtracting it (in SELF, 1 - ..., is changed to 1 + ...):&lt;br /&gt;
   function(avg_peer_rev_score, self_rev_score, w)     &lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this case, the pseudo-code is: &lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
   else&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''-''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer review score is 4/5, self review score is 5/5, w = 0.95), the self review score (5/5) differs by 25% of the peer review score (4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = 1/4 = 25%. Based on l (leniency), the instructor can decide:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) if a 25% deviation is sufficiently large to warrant penalizing the final grade by (1-w)*(SELF) (so that the final grade is '''79%''', instead of 80%).&lt;br /&gt;
&lt;br /&gt;
2) if a 25% deviation is sufficiently small to warrant keeping the final grade as grade = avg_peer_review_score, with no penalty for the deviation (so that the grade is '''80%''')&lt;br /&gt;
&lt;br /&gt;
3) if a 25% deviation is sufficiently small to warrant increasing the final grade by (1-w)*(SELF), where the SELF formula contains a 1 + ..., instead of a 1 - ... (so that the grade is '''81%''')&lt;br /&gt;
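Option 3 (with option 1 as the 'else' branch) corresponds to the pseudo-code above and can be sketched in Ruby as follows (an illustrative sketch; the method name and the default l = 0.25 are assumptions, not Expertiza's actual code):&lt;br /&gt;

```ruby
# Lenient version of the formula (illustrative, not Expertiza's code):
# small deviations add to the grade, large deviations subtract from it.
def composite_grade(avg_peer, self_score, w, leniency = 0.25)
  deviation = (avg_peer - self_score).abs / avg_peer
  if deviation > leniency
    # Large deviation: apply the penalty branch.
    w * avg_peer + (1 - w) * (avg_peer * (1 - deviation))
  else
    # Small deviation: reward the accurate self review instead.
    w * avg_peer + (1 - w) * (avg_peer * (1 + deviation))
  end
end

puts (composite_grade(0.8, 1.0, 0.95) * 100).round   # 81, as in option 3
```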
&lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We will remove most of the code from the previous implementation since the formula/method used is unsatisfactory. '''With the new grading formula, the grades_controller can assign a final grade by following these steps:'''&lt;br /&gt;
&lt;br /&gt;
1) We can obtain the peer review ratings and the score/grade derived from them by calling scores(), which then calls compute_assignment_score(), both from the assignment_participant.rb model. In order to compute the assignment score, compute_assignment_score() calls another method named get_assessments_for(), which is located in the review_questionnaire.rb model.&lt;br /&gt;
&lt;br /&gt;
2) The get_assessments_for() method will call the reviews() method, also located in assignment_participant.rb.&lt;br /&gt;
&lt;br /&gt;
3) Finally, the reviews() method will get the scores by simply calling the get_assessments_for() method located in the response_map.rb model. &lt;br /&gt;
&lt;br /&gt;
4) Once the scores have been retrieved by using the various model methods, the controller can use the scores to calculate a final grade by using the formula. This grade is then passed to the view.&lt;br /&gt;
&lt;br /&gt;
Below is a flow diagram showing how grades_controller.rb implements the grading formula (as mentioned in the first step) and presents the grade in the view (top of the diagram). Note: the self-review scores are obtained by passing the true parameter in all the method calls (as shown in the diagram), whereas the peer-review scores are retrieved similarly but omit this parameter. &lt;br /&gt;
&lt;br /&gt;
[[File:E1926_code_flow.png]]&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
Being able to self review before seeing peer reviews allows the author of the assignment to form an unbiased opinion of the quality of the work they are submitting. When the judgement of their work is not influenced by others who have given feedback, they are able to get a clearer view of the strengths and weaknesses of their assignment. Self reviews before peer reviews can also be more beneficial to the user, as they can show whether the user has the correct or wrong approach to their solution compared to their peers. &lt;br /&gt;
&lt;br /&gt;
In the current implementation of self review, the user is able to see the peer reviews before they have made their own self review. This occurs because when the user goes to see their scores, the page does not check that a self review has been submitted. In order to fix this issue, we will add a boolean parameter to self review and pass it to the viewing pages where it is called. When a user is at the student tasks view, the &amp;quot;Your scores&amp;quot; link will be disabled if the user has not filled out their self review. If the user has filled out their self review, then he/she will be redirected to the [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Display_Self-Review_Scores_w.2F_Peer-Reviews Display Self-Review Scores with Peer-Reviews] page.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
Our project will utilize various testing techniques. These methods of testing involve manual testing (black box testing) and RSpec testing (white box testing). &lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
The steps outlined for manual testing will become clearer upon implementation, but the proposed plan is the following:&lt;br /&gt;
&lt;br /&gt;
==== Prerequisite Steps ====&lt;br /&gt;
&lt;br /&gt;
:1. Log in to the development instance of Expertiza as an ''instructor''&lt;br /&gt;
:2. Create an assignment that allows for self-reviews&lt;br /&gt;
::2.1. Make sure the submission deadline is after the current date and time&lt;br /&gt;
::2.2. Likewise, the review deadline should be later than the submission deadline&lt;br /&gt;
:3. Add two students to the newly created assignment (e.g. ''student1'' and ''student2'')&lt;br /&gt;
:4. Sign in as ''student1'' &lt;br /&gt;
::4.1. Submit any file or link to the new assignment&lt;br /&gt;
:5. Repeat step 4 as ''student2''&lt;br /&gt;
:6. Log back in as the ''instructor''&lt;br /&gt;
::6.1. Alter the assignment's submission date to be in the past, enabling peer reviews&lt;br /&gt;
:7. Log back in as ''student2''&lt;br /&gt;
::7.1. Perform a peer-review on the submission from ''student1''&lt;br /&gt;
&lt;br /&gt;
==== Must Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
This test will assure that the logged in user cannot view their peer-reviews for a given assignment unless they have performed a self-review.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Attempt to view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot; within the assignment. This button should be disabled&lt;br /&gt;
:3. Perform a self-review&lt;br /&gt;
:4. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. This button should now be enabled&lt;br /&gt;
&lt;br /&gt;
==== Viewing Self Review Score Juxtaposed with Peer Review Scores ====&lt;br /&gt;
&lt;br /&gt;
This test confirms that the student's self-review scores are displayed with peer-review scores. It additionally confirms that self-review scores are considered in the review grading with the calculation of a composite score.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. If this button is disabled, user must self-review&lt;br /&gt;
::2.1. Assure that there is a column for self-review scores&lt;br /&gt;
::2.2. Confirm there is a composite score calculation underneath the average peer review score.&lt;br /&gt;
:3. Go back to the assignment view. Click &amp;quot;Alternate View&amp;quot;&lt;br /&gt;
::3.1. Confirm that there is a column in the grades table displaying composite score&lt;br /&gt;
::3.2. Check if there is a doughnut chart displaying the composite score&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
We plan to implement new RSpec tests to verify our implementations of the composite score calculation and the requirement to self-review first. Once written, we will be able to go more in-depth on the details of our testing. &lt;br /&gt;
&lt;br /&gt;
==== Test Composite Score Derivation ====&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
==== Test Requirement to Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;br /&gt;
&lt;br /&gt;
[https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades E1984 wiki]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=136285</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=136285"/>
		<updated>2020-10-27T20:15:13Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Implementation Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
==== Design Plan ====&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer review scores for each review question and an average of those peer-reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer review score. How we plan to derive a composite score is explained in detail in the following section [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that the authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous place on the score view.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from (1) the peer reviews and (2) how closely self reviews match the peer reviews uses a type of additive scoring rule, which computes a weighted average between team score (peer reviews) and student rating (self review). More specifically, it uses a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews), and adds its weighted version to the weighted peer review score. This is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0%-100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows: &lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where: &lt;br /&gt;
avg_peer_review_score is simply the mechanism already existing in Expertiza for assigning a grade from peer review scores.&lt;br /&gt;
&lt;br /&gt;
and where:&lt;br /&gt;
w - weight - (0 &amp;lt;= w &amp;lt;= 1) is the inverse proportion of how much of the final grade is determined by the closeness of the self review to the average of the peer reviews (w is the proportion of the grade to be determined by the original grade determination: the peer review scores).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* The average peer review score is 4/5, the self review score is 5/5.&lt;br /&gt;
* The instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 ('''80%''') is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = '''79%'''.&lt;br /&gt;
If the instructor chose w to equal 0.85 (instead of 0.95), the grade is '''77%''' (instead of 79%) because deviation from peer reviews is a larger weighted value of the final grade.&lt;br /&gt;
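&lt;br /&gt;
As a rough sketch in Ruby (the language of the Expertiza codebase), the basic formula could look like the following. All scores are expressed as fractions of the maximum; the standalone function and its name are illustrative only, not the actual Expertiza implementation:&lt;br /&gt;

```ruby
# Composite grade from the average peer-review score and the self-review score.
# Scores are fractions in 0.0..1.0; w is the weight given to the peer reviews.
def composite_grade(avg_peer_review_score, self_review_score, w)
  # Relative deviation of the self review from the peer-review average.
  deviation = (avg_peer_review_score - self_review_score).abs / avg_peer_review_score
  # SELF shrinks the peer-review score by that deviation.
  self_component = avg_peer_review_score * (1 - deviation)
  w * avg_peer_review_score + (1 - w) * self_component
end
```

With the example above, composite_grade(0.8, 1.0, 0.95) evaluates to 0.79 (79%) and composite_grade(0.8, 1.0, 0.85) to 0.77 (77%).&lt;br /&gt;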
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula. It is basic in that deviations of the self review scores from the peer review scores can only decrease the final grade, ''and'' a deviation will ''always'' decrease the final grade. We propose another parameter for the formula, l - leniency - as another way (in addition to w - weight) for the instructor to modularly determine the final grade for an assignment. The parameter l sets a threshold: the final grade accounts/adjusts for a self review's deviation from the peer reviews only when the deviation reaches this threshold (measured as percentage deviation from the average peer review). If the deviation does not meet the threshold, no penalty is subtracted from the peer-review score. In addition, if the deviation does not meet the threshold (the self review score is sufficiently close to the peer review scores), the instructor can choose to add points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply picks l - leniency - as a percentage (similar to the functionality of w). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To recap: w should be chosen based on the instructor's desired percentage (w) ''of the final grade'' to be determined from peer reviews and, conversely, the instructor's desired percentage (1-w) ''of the final grade'' to be determined by the extent to which self reviews deviate from peer reviews. In addition, l - leniency, should be chosen based on the instructor's desired percentage ''of the deviation of self review from peer review'' that could result in no grade deduction from the deviation if the deviation is sufficiently small (or even a grade increase if the instructor wants to increase the score of individuals with a small deviation).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for the case where an instructor wishes not to subtract from the final grade when the deviation is sufficiently small. Notice that the leniency condition, the instructor's desired percentage ''of the deviation of self review from peer review'', is naturally part of the grading formula in SELF:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = avg_peer_review_score&lt;br /&gt;
   else&lt;br /&gt;
      run formula&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition, instead of assigning a final grade equal to the avg_peer_review_score if the leniency condition is met, the grade can be adjusted (increased) if the instructor wishes to do so, since the self review is sufficiently close (determined by l) to the peer reviews. The formula for determining the final grade would thus add the small extent of deviation to the final grade rather than subtracting it (in SELF, 1 - ..., is changed to 1 + ...):&lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this case, the pseudo-code is: &lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
   else&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(avg_peer_review_score * (1 '''-''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer review score is 4/5, self review score is 5/5, w = 0.95), the self review score (5/5) differs by 25% of the peer review score (4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = 1/4 = 25%. Based on l - leniency, the instructor can decide:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) if a 25% deviation is sufficiently large to warrant penalizing the final grade by (1-w)*(SELF) (so that the final grade is '''79%''', instead of 80%).&lt;br /&gt;
&lt;br /&gt;
2) if a 25% deviation is sufficiently small to warrant keeping the final grade as grade = avg_peer_review_score, with no penalty for the deviation (so that the grade is '''80%''')&lt;br /&gt;
&lt;br /&gt;
3) if a 25% deviation is sufficiently small to warrant increasing the final grade by (1-w)*(SELF), where the SELF formula contains a 1 + ..., instead of a 1 - ... (so that the grade is '''81%''')&lt;br /&gt;
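&lt;br /&gt;
The three options above can be sketched in Ruby as one function. The policy argument (:penalize, :ignore, :reward) is our own naming for the instructor's three choices, not part of Expertiza:&lt;br /&gt;

```ruby
# Composite grade with a leniency threshold l (a fraction, like w).
# Deviations at or below l are either forgiven (:ignore) or rewarded
# (:reward); deviations above l, or any deviation under :penalize,
# reduce the grade as in the basic formula.
def lenient_composite_grade(avg_peer_review_score, self_review_score, w, l, policy)
  deviation = (avg_peer_review_score - self_review_score).abs / avg_peer_review_score
  if policy != :penalize && deviation <= l
    case policy
    when :ignore  # option 2: keep the plain peer-review grade
      avg_peer_review_score
    when :reward  # option 3: SELF uses 1 + deviation instead of 1 - deviation
      w * avg_peer_review_score + (1 - w) * (avg_peer_review_score * (1 + deviation))
    end
  else            # option 1: penalize as in the basic formula
    w * avg_peer_review_score + (1 - w) * (avg_peer_review_score * (1 - deviation))
  end
end
```

With the running example (average peer review 4/5, self review 5/5, w = 0.95, l = 0.3), the three policies yield 79%, 80%, and 81% respectively; with l = 0.2 the 25% deviation exceeds the threshold, so even :ignore and :reward fall back to the penalized 79%.&lt;br /&gt;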
&lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We will remove most of the code from the previous implementation since the formula/method used is unsatisfactory. '''With the new grading formula, the grades_controller can assign a final grade by following these steps:'''&lt;br /&gt;
&lt;br /&gt;
1) We can obtain the peer review ratings and the score/grade derived from them by calling scores(), which then calls compute_assignment_score(), both from the assignment_participant.rb model. In order to compute the assignment score, compute_assignment_score() calls another method named get_assessments_for(), which is located in the review_questionnaire.rb model.&lt;br /&gt;
&lt;br /&gt;
2) The get_assessments_for() method will call the reviews() method, also located in assignment_participant.rb.&lt;br /&gt;
&lt;br /&gt;
3) Finally, the reviews() method will get the scores by simply calling the get_assessments_for() method located in the response_map.rb model. &lt;br /&gt;
&lt;br /&gt;
4) Once the scores have been retrieved by using the various model methods, the controller can use the scores to calculate a final grade by using the formula. This grade is then passed to the view.&lt;br /&gt;
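&lt;br /&gt;
The four steps can be sketched with a stand-in model class. The class below is not the real assignment_participant.rb; it only illustrates how the averaged peer scores and the self-review score flow into the grading formula:&lt;br /&gt;

```ruby
# Stand-in for the model call chain described in steps 1-3: in this sketch,
# scores() simply averages the peer assessments the chain would gather.
class ParticipantSketch
  def initialize(peer_scores, self_score)
    @peer_scores = peer_scores  # criterion scores from the peer reviews
    @self_score  = self_score   # score from the author's self-review
  end

  # Steps 1-3 collapsed: returns the average peer-review score.
  def scores
    @peer_scores.sum.to_f / @peer_scores.size
  end

  # Step 4: the controller combines the averages using the grading formula.
  def final_grade(w)
    avg = scores
    deviation = (avg - @self_score).abs / avg
    w * avg + (1 - w) * (avg * (1 - deviation))
  end
end
```

For instance, ParticipantSketch.new([0.8, 0.8, 0.8], 1.0).final_grade(0.95) reproduces the 79% example from the previous section.&lt;br /&gt;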
&lt;br /&gt;
Below is a flow diagram showing how grades_controller.rb implements the grading formula (as mentioned in the first step) and presents the grade in the view (top of the diagram). Note: The self-review scores are obtained by using the true parameter in all the method calls (as shown in the diagram), whereas the peer-review scores are retrieved similarly but omitting this parameter. &lt;br /&gt;
&lt;br /&gt;
[[File:E1926_code_flow.png]]&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
Requiring a self review before seeing peer reviews allows the author of the assignment to form an unbiased opinion of the quality of the work they are submitting. When the judgement of their work is not influenced by others who have given feedback, they get a clearer view of the strengths and weaknesses of their assignment. Writing the self review before reading peer reviews can also be more beneficial to the user, since it can show whether the user's approach to the solution was right or wrong compared with their peers'. &lt;br /&gt;
&lt;br /&gt;
In the current implementation of self review, the user is able to see the peer reviews before they have made their own self review. This occurs because the page does not check that a self review has been submitted when the user goes to see their scores. In order to fix this issue, we will add a boolean parameter to self review and pass it to viewing pages where it is called. When a user is at the student tasks view, the &amp;quot;Your scores&amp;quot; link will be disabled if the user has not filled out their self review. If the user has filled out their self review, then he/she will be redirected to the [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Display_Self-Review_Scores_w.2F_Peer-Reviews Display Self-Review Scores with Peer-Reviews] page.&lt;br /&gt;
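&lt;br /&gt;
A minimal sketch of the gating check, assuming a hypothetical self_review_response attribute on the participant (the real attribute and view names in Expertiza may differ):&lt;br /&gt;

```ruby
require 'ostruct'

# Hypothetical helper for the student task view: the "Your scores" link
# should only be enabled once a self-review has been submitted.
def scores_link_enabled?(participant)
  !participant.self_review_response.nil?
end

student = OpenStruct.new(self_review_response: nil)
scores_link_enabled?(student)             # false: link stays disabled
student.self_review_response = :submitted
scores_link_enabled?(student)             # true: link is enabled
```

The view would call this helper when rendering the student tasks page and disable the link (or redirect to the self-review form) when it returns false.&lt;br /&gt;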
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
Our project will utilize various testing techniques. These methods of testing involve manual testing (black box testing) and RSpec testing (white box testing). &lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
The steps outlined for manual testing will become clearer upon implementation, but the proposed plan is the following:&lt;br /&gt;
&lt;br /&gt;
==== Prerequisite Steps ====&lt;br /&gt;
&lt;br /&gt;
:1. Log in to the development instance of Expertiza as an ''instructor''&lt;br /&gt;
:2. Create an assignment that allows for self-reviews&lt;br /&gt;
::2.1. Make sure the submission deadline is after the current date and time&lt;br /&gt;
::2.2. Likewise, the review deadline should then be greater than the submission deadline&lt;br /&gt;
:3. Add two students to the newly created assignment (theoretically ''student1'' and ''student2'')&lt;br /&gt;
:4. Sign in as ''student1'' &lt;br /&gt;
::4.1. Submit any file or link to the new assignment&lt;br /&gt;
:5. Repeat step 4 as ''student2''&lt;br /&gt;
:6. Log back in as the ''instructor''&lt;br /&gt;
::6.1. Alter the assignment's submission date to be in the past, enabling peer reviews&lt;br /&gt;
:7. Log back in as ''student2''&lt;br /&gt;
::7.1. Perform a peer-review on the submission from ''student1''&lt;br /&gt;
&lt;br /&gt;
==== Must Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
This test will assure that the logged in user cannot view their peer-reviews for a given assignment unless they have performed a self-review.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Attempt to view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot; within the assignment. This button should be disabled&lt;br /&gt;
:3. Perform a self-review&lt;br /&gt;
:4. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. This button should now be enabled&lt;br /&gt;
&lt;br /&gt;
==== Viewing Self Review Score Juxtaposed with Peer Review Scores ====&lt;br /&gt;
&lt;br /&gt;
This test confirms that the student's self-review scores are displayed with peer-review scores. It additionally confirms that self-review scores are considered in the review grading with the calculation of a composite score.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. If this button is disabled, user must self-review&lt;br /&gt;
::2.1. Assure that there is a column for self-review scores&lt;br /&gt;
::2.2. Confirm there is a composite score calculation underneath the average peer review score.&lt;br /&gt;
:3. Go back to the assignment view. Click &amp;quot;Alternate View&amp;quot;&lt;br /&gt;
::3.1. Confirm that there is a column in the grades table displaying composite score&lt;br /&gt;
::3.2. Check if there is a doughnut chart displaying the composite score&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
We plan to implement new RSpec tests to verify our implementations of the composite score calculation and the requirement to self-review first. Once written, we will be able to go more in-depth on the details of our testing. &lt;br /&gt;
&lt;br /&gt;
==== Test Composite Score Derivation ====&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
==== Test Requirement to Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;br /&gt;
&lt;br /&gt;
[https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades E1984 wiki]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=136284</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=136284"/>
		<updated>2020-10-27T20:14:04Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Derive Composite Score */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
==== Design Plan ====&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer review scores for each review question and an average of those peer-reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer review score. How we plan to derive a composite score is explained in detail in the following section [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that the authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous place on the score view.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from (1) the peer reviews and (2) how closely self reviews match the peer reviews uses a type of additive scoring rule, which computes a weighted average between team score (peer reviews) and student rating (self review). More specifically, it uses a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews), and adds its weighted version to the weighted peer review score. This is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0%-100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows: &lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where: &lt;br /&gt;
avg_peer_review_score is simply the mechanism already existing in Expertiza for assigning a grade from peer review scores.&lt;br /&gt;
&lt;br /&gt;
and where:&lt;br /&gt;
w - weight - (0 &amp;lt;= w &amp;lt;= 1) is the inverse proportion of how much of the final grade is determined by the closeness of the self review to the average of the peer reviews (w is the proportion of the grade to be determined by the original grade determination: the peer review scores).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* The average peer review score is 4/5, the self review score is 5/5.&lt;br /&gt;
* The instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 ('''80%''') is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = '''79%'''.&lt;br /&gt;
If the instructor chose w to equal 0.85 (instead of 0.95), the grade is '''77%''' (instead of 79%) because deviation from peer reviews is a larger weighted value of the final grade.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula. It is basic in that deviations of the self review scores from the peer review scores can only decrease the final grade, ''and'' a deviation will ''always'' decrease the final grade. We propose another parameter for the formula, l - leniency - as another way (in addition to w - weight) for the instructor to modularly determine the final grade for an assignment. The parameter l sets a threshold: the final grade accounts/adjusts for a self review's deviation from the peer reviews only when the deviation reaches this threshold (measured as percentage deviation from the average peer review). If the deviation does not meet the threshold, no penalty is subtracted from the peer-review score. In addition, if the deviation does not meet the threshold (the self review score is sufficiently close to the peer review scores), the instructor can choose to add points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply picks l - leniency - as a percentage (similar to the functionality of w). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To recap: w should be chosen based on the instructor's desired percentage (w) ''of the final grade'' to be determined from peer reviews and, conversely, the instructor's desired percentage (1-w) ''of the final grade'' to be determined by the extent to which self reviews deviate from peer reviews. In addition, l - leniency, should be chosen based on the instructor's desired percentage ''of the deviation of self review from peer review'' that could result in no grade deduction from the deviation if the deviation is sufficiently small (or even a grade increase if the instructor wants to increase the score of individuals with a small deviation).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for the case where an instructor wishes not to subtract from the final grade when the deviation is sufficiently small. Notice that the leniency condition, the instructor's desired percentage ''of the deviation of self review from peer review'', is naturally part of the grading formula in SELF:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = avg_peer_review_score&lt;br /&gt;
   else&lt;br /&gt;
      run formula&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition, instead of assigning a final grade equal to the avg_peer_review_score if the leniency condition is met, the grade can be adjusted (increased) if the instructor wishes to do so, since the self review is sufficiently close (determined by l) to the peer reviews. The formula for determining the final grade would thus add the small extent of deviation to the final grade rather than subtracting it (in SELF, 1 - ..., is changed to 1 + ...):&lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this case, the pseudo-code is: &lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
   else&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(avg_peer_review_score * (1 '''-''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer review score is 4/5, self review score is 5/5, w = 0.95), the self review score (5/5) differs by 25% of the peer review score (4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = 1/4 = 25%. Based on l - leniency, the instructor can decide:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) if a 25% deviation is sufficiently large to warrant penalizing the final grade by (1-w)*(SELF) (so that the final grade is '''79%''', instead of 80%).&lt;br /&gt;
&lt;br /&gt;
2) if a 25% deviation is sufficiently small to warrant keeping the final grade as grade = avg_peer_review_score, with no penalty for the deviation (so that the grade is '''80%''')&lt;br /&gt;
&lt;br /&gt;
3) if a 25% deviation is sufficiently small to warrant increasing the final grade by (1-w)*(SELF), where the SELF formula contains a 1 + ..., instead of a 1 - ... (so that the grade is '''81%''')&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We will remove most of the code from the previous implementation since the formula/method used is unsatisfactory. With the new grading formula, the grades_controller can assign a final grade by following these steps:&lt;br /&gt;
&lt;br /&gt;
1) We can obtain the peer review ratings and the score/grade derived from them by calling scores(), which then calls compute_assignment_score(), both from the assignment_participant.rb model. In order to compute the assignment score, compute_assignment_score() calls another method named get_assessments_for(), which is located in the review_questionnaire.rb model.&lt;br /&gt;
&lt;br /&gt;
2) The get_assessments_for() method will call the reviews() method, also located in assignment_participant.rb.&lt;br /&gt;
&lt;br /&gt;
3) Finally, the reviews() method will get the scores by simply calling the get_assessments_for() method located in the response_map.rb model. &lt;br /&gt;
&lt;br /&gt;
4) Once the scores have been retrieved by using the various model methods, the controller can use the scores to calculate a final grade by using the formula. This grade is then passed to the view.&lt;br /&gt;
&lt;br /&gt;
Below is a flow diagram showing how grades_controller.rb implements the grading formula (as mentioned in the first step) and presents the grade in the view (top of the diagram). Note: The self-review scores are obtained by using the true parameter in all the method calls (as shown in the diagram), whereas the peer-review scores are retrieved similarly but omitting this parameter. &lt;br /&gt;
&lt;br /&gt;
[[File:E1926_code_flow.png]]&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
Requiring a self review before seeing peer reviews allows the author of the assignment to form an unbiased opinion of the quality of the work they are submitting. When the judgement of their work is not influenced by others who have given feedback, they get a clearer view of the strengths and weaknesses of their assignment. Writing the self review before reading peer reviews can also be more beneficial to the user, since it can show whether the user's approach to the solution was right or wrong compared with their peers'. &lt;br /&gt;
&lt;br /&gt;
In the current implementation of self-review, the user is able to see peer reviews before completing their own self-review. This happens because the scores page does not check whether a self-review has been submitted. To fix this, we will add a boolean parameter to self-review and pass it to the views where it is used. When a user is at the student tasks view, the &amp;quot;Your scores&amp;quot; link will be disabled if the user has not filled out their self-review. Once the self-review is complete, he/she will be redirected to the [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Display_Self-Review_Scores_w.2F_Peer-Reviews Display Self-Review Scores with Peer-Reviews] page.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
Our project will utilize various testing techniques. These methods of testing involve manual testing (black box testing) and RSpec testing (white box testing). &lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
The steps outlined for manual testing will become clearer upon implementation, but the proposed plan is the following:&lt;br /&gt;
&lt;br /&gt;
==== Prerequisite Steps ====&lt;br /&gt;
&lt;br /&gt;
:1. Log in to the development instance of Expertiza as an ''instructor''&lt;br /&gt;
:2. Create an assignment that allows for self-reviews&lt;br /&gt;
::2.1. Make sure the submission deadline is after the current date and time&lt;br /&gt;
::2.2. Likewise, the review deadline should then be greater than the submission deadline&lt;br /&gt;
:3. Add two students to the newly created assignment (call them ''student1'' and ''student2'')&lt;br /&gt;
:4. Sign in as ''student1''&lt;br /&gt;
::4.1. Submit any file or link to the new assignment&lt;br /&gt;
:5. Repeat step 4 as ''student2''&lt;br /&gt;
:6. Log back in as the ''instructor''&lt;br /&gt;
::6.1. Alter the assignment's submission date to be in the past, enabling peer reviews&lt;br /&gt;
:7. Log back in as ''student2''&lt;br /&gt;
::7.1. Perform a peer-review on the submission from ''student1''&lt;br /&gt;
&lt;br /&gt;
==== Must Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
This test will assure that the logged in user cannot view their peer-reviews for a given assignment unless they have performed a self-review.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Attempt to view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot; within the assignment. This button should be disabled&lt;br /&gt;
:3. Perform a self-review&lt;br /&gt;
:4. View peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. This button should now be enabled&lt;br /&gt;
&lt;br /&gt;
==== Viewing Self Review Score Juxtaposed with Peer Review Scores ====&lt;br /&gt;
&lt;br /&gt;
This test confirms that the student's self-review scores are displayed with peer-review scores. It additionally confirms that self-review scores are considered in review grading through the calculation of a composite score.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. View peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. If this button is disabled, the user must first complete a self-review&lt;br /&gt;
::2.1. Assure that there is a column for self-review scores&lt;br /&gt;
::2.2. Confirm there is a composite score calculation underneath the average peer review score.&lt;br /&gt;
:3. Go back to the assignment view. Click &amp;quot;Alternate View&amp;quot;&lt;br /&gt;
::3.1. Confirm that there is a column in the grades table displaying composite score&lt;br /&gt;
::3.2. Check if there is a doughnut chart displaying the composite score&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
We plan to implement new RSpec tests to verify our implementations of the composite score calculation and the requirement to self-review first. Once written, we will be able to go more in-depth on the details of our testing. &lt;br /&gt;
&lt;br /&gt;
==== Test Composite Score Derivation ====&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
==== Test Requirement to Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;br /&gt;
&lt;br /&gt;
[https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades E1984 wiki]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=136283</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=136283"/>
		<updated>2020-10-27T20:13:23Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Implementation Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
==== Design Plan ====&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer review scores for each review question and an average of those peer-reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer review score. How we plan to derive a composite score is explained in detail in the following section [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous place on the score view.&lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from 1) the peer reviews and 2) how closely the self-review matches the peer reviews is a type of additive scoring rule, which computes a weighted average between the team score (peer reviews) and the student rating (self-review). More specifically, it is a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self-review) by a function of the team score (peer reviews) and adds this weighted term to the weighted peer-review score. This approach is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0%-100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows: &lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where: &lt;br /&gt;
avg_peer_review_score is the grade produced by the mechanism already existing in Expertiza for assigning a grade from peer-review scores.&lt;br /&gt;
&lt;br /&gt;
and where:&lt;br /&gt;
w - weight - (0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the original grading mechanism (the peer-review scores); conversely, (1-w) is the proportion of the final grade determined by the closeness of the self-review to the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* The average peer review score is 4/5, the self review score is 5/5.&lt;br /&gt;
* The instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 ('''80%''') is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = '''79%'''.&lt;br /&gt;
If the instructor chose w to equal 0.85 (instead of 0.95), the grade is '''77%''' (instead of 79%) because deviation from peer reviews is a larger weighted value of the final grade.&lt;br /&gt;
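The weighted formula and the worked example above can be sketched in Ruby (a hypothetical standalone helper for illustration, not an existing Expertiza method; scores are given as fractions between 0 and 1):&lt;br /&gt;

```ruby
# Hypothetical sketch of the basic composite-grade formula (not Expertiza code).
# avg_peer and self_score are fractions in 0..1; w weights the peer-review grade.
def composite_grade(avg_peer, self_score, w)
  deviation = (avg_peer - self_score).abs / avg_peer
  self_component = avg_peer * (1 - deviation)   # the SELF term from the pseudo-code
  w * avg_peer + (1 - w) * self_component
end

# Worked example from the text: average peer review 4/5, self review 5/5, w = 0.95
composite_grade(0.8, 1.0, 0.95).round(4)   # => 0.79
# With w = 0.85 the deviation carries more weight:
composite_grade(0.8, 1.0, 0.85).round(4)   # => 0.77
```
&lt;br /&gt;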
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula: any deviation of the self-review scores from the peer-review scores can only decrease the final grade, and a deviation will ''always'' decrease it. We propose another parameter, l - leniency - as an additional way (besides w - weight) for the instructor to modularly determine the final grade for an assignment. The leniency parameter sets a threshold (measured as percentage deviation from the average peer review) such that the final grade is adjusted for a self-review's deviation from the peer reviews only when the deviation reaches this threshold. If the deviation falls below the threshold, no penalty is subtracted from the peer-review score. In addition, when the deviation falls below the threshold (the self-review score is sufficiently close to the peer-review scores), the instructor can choose to add points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply picks l - leniency - as a percentage (similar to the functionality of w).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To recap: w should be chosen based on the instructor's desired percentage (w) ''of the final grade'' to be determined from peer reviews and, conversely, the instructor's desired percentage (1-w) ''of the final grade'' to be determined by the extent to which self reviews deviate from peer reviews. In addition, l - leniency, should be chosen based on the instructor's desired percentage ''of the deviation of self review from peer review'' that could result in no grade deduction from the deviation if the deviation is sufficiently small (or even a grade increase if the instructor wants to increase the score of individuals with a small deviation).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for the case where an instructor wishes not to subtract from the final grade when the deviation is sufficiently small. Notice that the leniency condition, the instructor's desired percentage ''of the deviation of self review from peer review'', is naturally part of the grading formula in SELF:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = avg_peer_review_score&lt;br /&gt;
   else&lt;br /&gt;
      run formula&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition, instead of assigning a final grade equal to the avg_peer_review_score when the leniency condition is met, the grade can be adjusted (increased) if the instructor wishes, since the self-review is sufficiently close (as determined by l) to the peer reviews. The formula for determining the final grade would then add the small deviation to the final grade rather than subtracting it (in SELF, 1 - ..., is changed to 1 + ...):&lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)     &lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this case, the pseudo-code is: &lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
   else&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''-''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer review score is 4/5, self review score is 5/5, w = 0.95), the self review score (5/5) differs by 25% of the peer review score (4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = 1/4 = 25%. Based on l - leniency, the instructor can decide:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) if a 25% deviation is sufficiently large to warrant penalizing the final grade by (1-w)*(SELF) (so that the final grade is '''79%''', instead of 80%).&lt;br /&gt;
&lt;br /&gt;
2) if a 25% deviation is sufficiently small to warrant keeping the final grade as grade = avg_peer_review_score, with no penalty for the deviation (so that the grade is '''80%''')&lt;br /&gt;
&lt;br /&gt;
3) if a 25% deviation is sufficiently small to warrant increasing the final grade by (1-w)*(SELF), where the SELF formula contains a 1 + ..., instead of a 1 - ... (so that the grade is '''81%''')&lt;br /&gt;
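The leniency variant above (specifically the third option, where a sufficiently small deviation increases the grade and a larger one decreases it) can be sketched the same way; again, this is a hypothetical helper for illustration, not existing Expertiza code:&lt;br /&gt;

```ruby
# Hypothetical sketch of the leniency variant (not Expertiza code).
# Deviations at or below the threshold l add to the grade; larger ones subtract.
def composite_grade_with_leniency(avg_peer, self_score, w, l)
  deviation = (avg_peer - self_score).abs / avg_peer
  sign = deviation > l ? -1 : 1    # penalize large deviations, reward small ones
  self_component = avg_peer * (1 + sign * deviation)
  w * avg_peer + (1 - w) * self_component
end

# The example from the text: a 25% deviation, w = 0.95
composite_grade_with_leniency(0.8, 1.0, 0.95, 0.30).round(4)  # => 0.81 (bonus)
composite_grade_with_leniency(0.8, 1.0, 0.95, 0.20).round(4)  # => 0.79 (penalty)
```
&lt;br /&gt;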
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We will remove most of the code from the previous implementation since the formula/method used is unsatisfactory. With the new grading formula, the grades_controller can assign a final grade by following these steps:&lt;br /&gt;
&lt;br /&gt;
1) We can obtain the peer review ratings and the score/grade derived from them by calling scores(), which then calls compute_assignment_score(), both from the assignment_participant.rb model. In order to compute the assignment score, compute_assignment_score() calls another method named get_assessments_for(), which is located in the review_questionnaire.rb model.&lt;br /&gt;
&lt;br /&gt;
2) The get_assessments_for() method will call the reviews() method, also located in assignment_participant.rb.&lt;br /&gt;
&lt;br /&gt;
3) Finally, the reviews() method will get the scores by simply calling the get_assessments_for() method located in the response_map.rb model. &lt;br /&gt;
&lt;br /&gt;
4) Once the scores have been retrieved by using the various model methods, the controller can use them to calculate a final grade by applying the formula. This grade is then passed to the view.&lt;br /&gt;
&lt;br /&gt;
Below is a flow diagram showing how grades_controller.rb implements the grading formula (as mentioned in the first step) and presents the grade in the view (top of the diagram). Note: the self-review scores are obtained by passing the true parameter in all the method calls (as shown in the diagram), whereas the peer-review scores are retrieved similarly but with this parameter omitted.&lt;br /&gt;
&lt;br /&gt;
[[File:E1926_code_flow.png]]&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
Requiring a self-review before viewing peer reviews allows the author of the assignment to form an unbiased opinion on the quality of the work they are submitting. When their judgement of the work is not influenced by others' feedback, they get a clearer view of the strengths and weaknesses of their assignment. Completing a self-review before seeing peer reviews can also benefit the user by showing whether their approach to the solution matches that of their peers.&lt;br /&gt;
&lt;br /&gt;
In the current implementation of self-review, the user is able to see peer reviews before completing their own self-review. This happens because the scores page does not check whether a self-review has been submitted. To fix this, we will add a boolean parameter to self-review and pass it to the views where it is used. When a user is at the student tasks view, the &amp;quot;Your scores&amp;quot; link will be disabled if the user has not filled out their self-review. Once the self-review is complete, he/she will be redirected to the [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Display_Self-Review_Scores_w.2F_Peer-Reviews Display Self-Review Scores with Peer-Reviews] page.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
Our project will utilize various testing techniques. These methods of testing involve manual testing (black box testing) and RSpec testing (white box testing). &lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
The steps outlined for manual testing will become clearer upon implementation, but the proposed plan is the following:&lt;br /&gt;
&lt;br /&gt;
==== Prerequisite Steps ====&lt;br /&gt;
&lt;br /&gt;
:1. Log in to the development instance of Expertiza as an ''instructor''&lt;br /&gt;
:2. Create an assignment that allows for self-reviews&lt;br /&gt;
::2.1. Make sure the submission deadline is after the current date and time&lt;br /&gt;
::2.2. Likewise, the review deadline should then be greater than the submission deadline&lt;br /&gt;
:3. Add two students to the newly created assignment (call them ''student1'' and ''student2'')&lt;br /&gt;
:4. Sign in as ''student1''&lt;br /&gt;
::4.1. Submit any file or link to the new assignment&lt;br /&gt;
:5. Repeat step 4 as ''student2''&lt;br /&gt;
:6. Log back in as the ''instructor''&lt;br /&gt;
::6.1. Alter the assignment's submission date to be in the past, enabling peer reviews&lt;br /&gt;
:7. Log back in as ''student2''&lt;br /&gt;
::7.1. Perform a peer-review on the submission from ''student1''&lt;br /&gt;
&lt;br /&gt;
==== Must Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
This test will assure that the logged in user cannot view their peer-reviews for a given assignment unless they have performed a self-review.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Attempt to view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot; within the assignment. This button should be disabled&lt;br /&gt;
:3. Perform a self-review&lt;br /&gt;
:4. View peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. This button should now be enabled&lt;br /&gt;
&lt;br /&gt;
==== Viewing Self Review Score Juxtaposed with Peer Review Scores ====&lt;br /&gt;
&lt;br /&gt;
This test confirms that the student's self-review scores are displayed with peer-review scores. It additionally confirms that self-review scores are considered in review grading through the calculation of a composite score.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. View peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. If this button is disabled, the user must first complete a self-review&lt;br /&gt;
::2.1. Assure that there is a column for self-review scores&lt;br /&gt;
::2.2. Confirm there is a composite score calculation underneath the average peer review score.&lt;br /&gt;
:3. Go back to the assignment view. Click &amp;quot;Alternate View&amp;quot;&lt;br /&gt;
::3.1. Confirm that there is a column in the grades table displaying composite score&lt;br /&gt;
::3.2. Check if there is a doughnut chart displaying the composite score&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
We plan to implement new RSpec tests to verify our implementations of the composite score calculation and the requirement to self-review first. Once written, we will be able to go more in-depth on the details of our testing. &lt;br /&gt;
&lt;br /&gt;
==== Test Composite Score Derivation ====&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
==== Test Requirement to Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;br /&gt;
&lt;br /&gt;
[https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades E1984 wiki]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=136282</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=136282"/>
		<updated>2020-10-27T20:11:25Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Implementation Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
==== Design Plan ====&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer review scores for each review question and an average of those peer-reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer review score. How we plan to derive a composite score is explained in detail in the following section [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous place on the score view.&lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from 1) the peer reviews and 2) how closely the self-review matches the peer reviews is a type of additive scoring rule, which computes a weighted average between the team score (peer reviews) and the student rating (self-review). More specifically, it is a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self-review) by a function of the team score (peer reviews) and adds this weighted term to the weighted peer-review score. This approach is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0%-100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows: &lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where: &lt;br /&gt;
avg_peer_review_score is the grade produced by the mechanism already existing in Expertiza for assigning a grade from peer-review scores.&lt;br /&gt;
&lt;br /&gt;
and where:&lt;br /&gt;
w - weight - (0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the original grading mechanism (the peer-review scores); conversely, (1-w) is the proportion of the final grade determined by the closeness of the self-review to the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* The average peer review score is 4/5, the self review score is 5/5.&lt;br /&gt;
* The instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 ('''80%''') is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = '''79%'''.&lt;br /&gt;
If the instructor instead chose w to equal 0.85, the grade would be '''77%''' (instead of 79%), because the deviation from the peer reviews carries a larger weight in the final grade.&lt;br /&gt;
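As a sanity check on the arithmetic above, the formula can be sketched as a standalone Ruby function (an illustrative sketch only, with names of our choosing; this is not the Expertiza implementation):

```ruby
# Illustrative sketch of the composite-grade formula.
# avg_peer and self_score are fractions between 0 and 1;
# w is the weight given to the peer-review average.
def composite_grade(avg_peer, self_score, w)
  deviation = (avg_peer - self_score).abs / avg_peer
  self_component = avg_peer * (1 - deviation)   # SELF in the text
  w * avg_peer + (1 - w) * self_component
end

# Example from the text: peers average 4/5, self review 5/5.
puts composite_grade(0.8, 1.0, 0.95).round(2)  # w = 0.95 gives 0.79 (79%)
puts composite_grade(0.8, 1.0, 0.85).round(2)  # w = 0.85 gives 0.77 (77%)
```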
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula: any deviation of the self-review score from the peer-review scores ''always'' results in a decrease of the final grade, and a deviation can ''only'' decrease the grade. We propose another parameter, l - leniency - as a second way (in addition to w - weight) for the instructor to tune how the final grade for an assignment is determined. Leniency sets a threshold (measured as a percentage deviation from the average peer review): the final grade is adjusted for the self review's deviation from the peer reviews only when the deviation reaches this threshold. If the deviation stays below the threshold, no penalty is subtracted from the peer-review grade. Alternatively, when the deviation stays below the threshold (i.e., the self-review score is sufficiently close to the peer-review scores), the instructor can choose to add points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply picks l - leniency - as a percentage (similar to the functionality of w). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To recap: w should be chosen based on the instructor's desired percentage (w) ''of the final grade'' to be determined from peer reviews and, conversely, the instructor's desired percentage (1-w) ''of the final grade'' to be determined by the extent to which self reviews deviate from peer reviews. In addition, l - leniency, should be chosen based on the instructor's desired percentage ''of the deviation of self review from peer review'' that could result in no grade deduction from the deviation if the deviation is sufficiently small (or even a grade increase if the instructor wants to increase the score of individuals with a small deviation).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for an instructor who wishes not to subtract from the final grade when the deviation is sufficiently small. Notice that the leniency condition compares l against the deviation of the self review from the peer review, the same quantity that appears in SELF:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l&lt;br /&gt;
      grade = avg_peer_review_score&lt;br /&gt;
   else&lt;br /&gt;
      run the formula as before&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition, instead of assigning a final grade equal to avg_peer_review_score when the leniency condition is met, the grade can be increased if the instructor wishes, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula would then add the small deviation to the final grade rather than subtracting it (in SELF, 1 - ... is changed to 1 + ...):&lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this case, the pseudo-code is: &lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
   else&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(avg_peer_review_score * (1 '''-''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer review score is 4/5, self review score is 5/5, w = 0.95), the self review score (5/5) differs by 25% of the peer review score (4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = 1/4 = 25%. Based on l - leniency, the instructor can decide:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) if a 25% deviation is sufficiently large to warrant penalizing the final grade by (1-w)*(SELF) (so that the final grade is '''79%''', instead of 80%).&lt;br /&gt;
&lt;br /&gt;
2) if a 25% deviation is sufficiently small to warrant keeping the final grade as grade = avg_peer_review_score, with no penalty for the deviation (so that the grade is '''80%''')&lt;br /&gt;
&lt;br /&gt;
3) if a 25% deviation is sufficiently small to warrant increasing the final grade by (1-w)*(SELF), where the SELF formula contains a 1 + ..., instead of a 1 - ... (so that the grade is '''81%''')&lt;br /&gt;
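The three options above can be captured in one Ruby sketch with a leniency threshold and a bonus flag (both names are ours, chosen for illustration; this is not the Expertiza implementation):

```ruby
# Sketch of the leniency-aware formula. l is the leniency threshold
# (a fraction, like w); the bonus flag selects option 3 above.
def composite_with_leniency(avg_peer, self_score, w, l, bonus: false)
  deviation = (avg_peer - self_score).abs / avg_peer
  if deviation > l
    # Deviation exceeds the threshold: apply the penalty form of SELF.
    w * avg_peer + (1 - w) * (avg_peer * (1 - deviation))
  elsif bonus
    # Small deviation and the instructor grants a bonus (1 + ... in SELF).
    w * avg_peer + (1 - w) * (avg_peer * (1 + deviation))
  else
    # Small deviation, no penalty: keep the peer-review grade.
    avg_peer
  end
end

# The 25% deviation example (peers 4/5, self 5/5, w = 0.95):
puts composite_with_leniency(0.8, 1.0, 0.95, 0.20).round(2)               # option 1: 0.79
puts composite_with_leniency(0.8, 1.0, 0.95, 0.30).round(2)               # option 2: 0.8
puts composite_with_leniency(0.8, 1.0, 0.95, 0.30, bonus: true).round(2)  # option 3: 0.81
```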
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We will remove most of the code from the previous implementation, since the formula/method it used is unsatisfactory. With the new grading formula, the grades_controller can assign a final grade by following these steps:&lt;br /&gt;
&lt;br /&gt;
1. We can obtain the peer-review ratings and the score/grade derived from them by calling scores(), which then calls compute_assignment_score(), both in the assignment_participant.rb model. In order to compute the assignment score, compute_assignment_score() calls another method, get_assessments_for(), which is located in the review_questionnaire.rb model.&lt;br /&gt;
&lt;br /&gt;
2. The get_assessments_for() method will call the reviews() method, also located in assignment_participant.rb.&lt;br /&gt;
&lt;br /&gt;
3. Finally, the reviews() method will get the scores by simply calling the get_assessments_for() method located in the response_map.rb model. &lt;br /&gt;
&lt;br /&gt;
4. Once the scores have been retrieved by using the various model methods, the controller can use them to calculate a final grade with the formula. This grade is then passed to the view.&lt;br /&gt;
&lt;br /&gt;
Below is a flow diagram showing how grades_controller.rb implements the grading formula (as mentioned in the first step) and presents the grade in the view (top of the diagram). Note: the self-review scores are obtained by passing the true parameter in all the method calls (as shown in the diagram), whereas the peer-review scores are retrieved the same way but without this parameter. &lt;br /&gt;
&lt;br /&gt;
[[File:E1926_code_flow.png]]&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
Requiring a self review before seeing peer reviews allows the author of the assignment to form an unbiased opinion of the quality of the work they are submitting. When their judgement is not influenced by others who have given feedback, they get a clearer view of the strengths and weaknesses of their assignment. Self reviews performed before reading peer reviews can also be more beneficial to the user, as they can reveal whether the user's approach to the solution matches or differs from that of their peers. &lt;br /&gt;
&lt;br /&gt;
In the current implementation of self review, the user is able to see the peer reviews before they have completed their own self review. This happens because, when the user goes to see their scores, the page does not check that a self review has been submitted. To fix this, we will add a boolean parameter to self review and pass it to the view pages where it is needed. When a user is at the student tasks view, the &amp;quot;Your scores&amp;quot; link will be disabled if the user has not filled out their self review. If the user has filled out their self review, then he/she will be redirected to the [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Display_Self-Review_Scores_w.2F_Peer-Reviews Display Self-Review Scores with Peer-Reviews] page.&lt;br /&gt;
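A minimal sketch of the gating check (the helper names and the participant shape are hypothetical; in Expertiza the check would query the participant's self-review response in the database):

```ruby
# Hypothetical helper: true once the participant has submitted a
# self-review for the assignment.
def self_review_complete?(participant)
  participant.fetch(:self_review_submitted, false)
end

# The "Your scores" link stays disabled until the self review is in;
# only then is the scores page reachable.
def scores_link_enabled?(participant)
  self_review_complete?(participant)
end

puts scores_link_enabled?(self_review_submitted: false)  # false
puts scores_link_enabled?(self_review_submitted: true)   # true
```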
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
Our project will utilize various testing techniques. These methods of testing involve manual testing (black box testing) and RSpec testing (white box testing). &lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
The steps outlined for manual testing will become clearer upon implementation, but the proposed plan is the following:&lt;br /&gt;
&lt;br /&gt;
==== Prerequisite Steps ====&lt;br /&gt;
&lt;br /&gt;
:1. Log in to the development instance of Expertiza as an ''instructor''&lt;br /&gt;
:2. Create an assignment that allows for self-reviews&lt;br /&gt;
::2.1. Make sure the submission deadline is after the current date and time&lt;br /&gt;
::2.2. Likewise, the review deadline should then be greater than the submission deadline&lt;br /&gt;
:3. Add two students to the newly created assignment (theoretically ''student1'' and ''student2'')&lt;br /&gt;
:4. Sign in as ''student1'' &lt;br /&gt;
::4.1. Submit any file or link to the new assignment&lt;br /&gt;
:5. Repeat step 4 as ''student2''&lt;br /&gt;
:6. Log back in as the ''instructor''&lt;br /&gt;
::6.1. Alter the assignment's submission date to be in the past, enabling peer reviews&lt;br /&gt;
:7. Log back in as ''student2''&lt;br /&gt;
::7.1. Perform a peer-review on the submission from ''student1''&lt;br /&gt;
&lt;br /&gt;
==== Must Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
This test will assure that the logged in user cannot view their peer-reviews for a given assignment unless they have performed a self-review.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Attempt to view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot; within the assignment. This button should be disabled&lt;br /&gt;
:3. Perform a self-review&lt;br /&gt;
:4. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. This button should now be enabled&lt;br /&gt;
&lt;br /&gt;
==== Viewing Self Review Score Juxtaposed with Peer Review Scores ====&lt;br /&gt;
&lt;br /&gt;
This test confirms that the student's self-review scores are displayed with peer-review scores. It additionally confirms that self-review scores are factored into the review grading through the calculation of a composite score.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. If this button is disabled, the user must first perform a self-review&lt;br /&gt;
::2.1. Assure that there is a column for self-review scores&lt;br /&gt;
::2.2. Confirm there is a composite score calculation underneath the average peer review score.&lt;br /&gt;
:3. Go back to the assignment view. Click &amp;quot;Alternate View&amp;quot;&lt;br /&gt;
::3.1. Confirm that there is a column in the grades table displaying composite score&lt;br /&gt;
::3.2. Check if there is a doughnut chart displaying the composite score&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
We plan to implement new RSpec tests to verify our implementations of the composite score calculation and the requirement to self-review first. Once written, we will be able to go more in-depth on the details of our testing. &lt;br /&gt;
&lt;br /&gt;
==== Test Composite Score Derivation ====&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
==== Test Requirement to Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;br /&gt;
&lt;br /&gt;
[https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades E1984 wiki]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=136281</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=136281"/>
		<updated>2020-10-27T20:09:42Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Implementation Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
==== Design Plan ====&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer review scores for each review question and an average of those peer-reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer review score. How we plan to derive a composite score is explained in detail in the following section [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that the authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous place on the score view.&lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from 1) the peer reviews and 2) how closely the self review matches the peer reviews is a type of additive scoring rule: it computes a weighted average of the team score (peer reviews) and the student rating (self review). More specifically, it is a mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews) and adds this weighted term to the weighted peer-review score. This approach is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0% to 100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows: &lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
avg_peer_review_score is the grade produced by the mechanism already existing in Expertiza for deriving a grade from peer-review scores, and&lt;br /&gt;
&lt;br /&gt;
w - weight - (0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the original grade determination (the peer-review scores); conversely, (1-w) is the proportion of the final grade determined by how close the self review is to the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* The average peer review score is 4/5, the self review score is 5/5.&lt;br /&gt;
* The instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 ('''80%''') is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = '''79%'''.&lt;br /&gt;
If the instructor instead chose w to equal 0.85, the grade would be '''77%''' (instead of 79%), because the deviation from the peer reviews carries a larger weight in the final grade.&lt;br /&gt;
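As a sanity check on the arithmetic above, the formula can be sketched as a standalone Ruby function (an illustrative sketch only, with names of our choosing; this is not the Expertiza implementation):

```ruby
# Illustrative sketch of the composite-grade formula.
# avg_peer and self_score are fractions between 0 and 1;
# w is the weight given to the peer-review average.
def composite_grade(avg_peer, self_score, w)
  deviation = (avg_peer - self_score).abs / avg_peer
  self_component = avg_peer * (1 - deviation)   # SELF in the text
  w * avg_peer + (1 - w) * self_component
end

# Example from the text: peers average 4/5, self review 5/5.
puts composite_grade(0.8, 1.0, 0.95).round(2)  # w = 0.95 gives 0.79 (79%)
puts composite_grade(0.8, 1.0, 0.85).round(2)  # w = 0.85 gives 0.77 (77%)
```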
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula: any deviation of the self-review score from the peer-review scores ''always'' results in a decrease of the final grade, and a deviation can ''only'' decrease the grade. We propose another parameter, l - leniency - as a second way (in addition to w - weight) for the instructor to tune how the final grade for an assignment is determined. Leniency sets a threshold (measured as a percentage deviation from the average peer review): the final grade is adjusted for the self review's deviation from the peer reviews only when the deviation reaches this threshold. If the deviation stays below the threshold, no penalty is subtracted from the peer-review grade. Alternatively, when the deviation stays below the threshold (i.e., the self-review score is sufficiently close to the peer-review scores), the instructor can choose to add points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply picks l - leniency - as a percentage (similar to the functionality of w). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To recap: w should be chosen based on the instructor's desired percentage (w) ''of the final grade'' to be determined from peer reviews and, conversely, the instructor's desired percentage (1-w) ''of the final grade'' to be determined by the extent to which self reviews deviate from peer reviews. In addition, l - leniency, should be chosen based on the instructor's desired percentage ''of the deviation of self review from peer review'' that could result in no grade deduction from the deviation if the deviation is sufficiently small (or even a grade increase if the instructor wants to increase the score of individuals with a small deviation).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for an instructor who wishes not to subtract from the final grade when the deviation is sufficiently small. Notice that the leniency condition compares l against the deviation of the self review from the peer review, the same quantity that appears in SELF:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l&lt;br /&gt;
      grade = avg_peer_review_score&lt;br /&gt;
   else&lt;br /&gt;
      run the formula as before&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition, instead of assigning a final grade equal to avg_peer_review_score when the leniency condition is met, the grade can be increased if the instructor wishes, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula would then add the small deviation to the final grade rather than subtracting it (in SELF, 1 - ... is changed to 1 + ...):&lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this case, the pseudo-code is: &lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
   else&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(avg_peer_review_score * (1 '''-''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer review score is 4/5, self review score is 5/5, w = 0.95), the self review score (5/5) differs by 25% of the peer review score (4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = 1/4 = 25%. Based on l - leniency, the instructor can decide:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) if a 25% deviation is sufficiently large to warrant penalizing the final grade by (1-w)*(SELF) (so that the final grade is '''79%''', instead of 80%).&lt;br /&gt;
&lt;br /&gt;
2) if a 25% deviation is sufficiently small to warrant keeping the final grade as grade = avg_peer_review_score, with no penalty for the deviation (so that the grade is '''80%''')&lt;br /&gt;
&lt;br /&gt;
3) if a 25% deviation is sufficiently small to warrant increasing the final grade by (1-w)*(SELF), where the SELF formula contains a 1 + ..., instead of a 1 - ... (so that the grade is '''81%''')&lt;br /&gt;
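The three options above can be captured in one Ruby sketch with a leniency threshold and a bonus flag (both names are ours, chosen for illustration; this is not the Expertiza implementation):

```ruby
# Sketch of the leniency-aware formula. l is the leniency threshold
# (a fraction, like w); the bonus flag selects option 3 above.
def composite_with_leniency(avg_peer, self_score, w, l, bonus: false)
  deviation = (avg_peer - self_score).abs / avg_peer
  if deviation > l
    # Deviation exceeds the threshold: apply the penalty form of SELF.
    w * avg_peer + (1 - w) * (avg_peer * (1 - deviation))
  elsif bonus
    # Small deviation and the instructor grants a bonus (1 + ... in SELF).
    w * avg_peer + (1 - w) * (avg_peer * (1 + deviation))
  else
    # Small deviation, no penalty: keep the peer-review grade.
    avg_peer
  end
end

# The 25% deviation example (peers 4/5, self 5/5, w = 0.95):
puts composite_with_leniency(0.8, 1.0, 0.95, 0.20).round(2)               # option 1: 0.79
puts composite_with_leniency(0.8, 1.0, 0.95, 0.30).round(2)               # option 2: 0.8
puts composite_with_leniency(0.8, 1.0, 0.95, 0.30, bonus: true).round(2)  # option 3: 0.81
```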
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We will remove most of the code from the previous implementation, since the formula/method it used is unsatisfactory. With the new grading formula, the grades_controller can assign a final grade by following these steps:&lt;br /&gt;
&lt;br /&gt;
1. We can obtain the peer-review ratings and the score/grade derived from them by calling scores(), which then calls compute_assignment_score(), both in the assignment_participant.rb model. In order to compute the assignment score, compute_assignment_score() calls another method, get_assessments_for(), which is located in the review_questionnaire.rb model.&lt;br /&gt;
2. The get_assessments_for() method will call the reviews() method, also located in assignment_participant.rb.&lt;br /&gt;
3. Finally, the reviews() method will get the scores by simply calling the get_assessments_for() method located in the response_map.rb model. &lt;br /&gt;
4. Once the scores have been retrieved by using the various model methods, the controller can use them to calculate a final grade with the formula. This grade is then passed to the view.&lt;br /&gt;
&lt;br /&gt;
Below is a flow diagram showing how grades_controller.rb implements the grading formula (as mentioned in the first step) and presents the grade in the view (top of the diagram). Note: the self-review scores are obtained by passing the true parameter in all the method calls (as shown in the diagram), whereas the peer-review scores are retrieved the same way but without this parameter. &lt;br /&gt;
&lt;br /&gt;
[[File:E1926_code_flow.png]]&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
Requiring a self review before seeing peer reviews allows the author of the assignment to form an unbiased opinion of the quality of the work they are submitting. When their judgement is not influenced by others who have given feedback, they get a clearer view of the strengths and weaknesses of their assignment. Self reviews performed before reading peer reviews can also be more beneficial to the user, as they can reveal whether the user's approach to the solution matches or differs from that of their peers. &lt;br /&gt;
&lt;br /&gt;
In the current implementation of self review, the user is able to see the peer reviews before they have completed their own self review. This happens because, when the user goes to see their scores, the page does not check that a self review has been submitted. To fix this, we will add a boolean parameter to self review and pass it to the view pages where it is needed. When a user is at the student tasks view, the &amp;quot;Your scores&amp;quot; link will be disabled if the user has not filled out their self review. If the user has filled out their self review, then he/she will be redirected to the [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Display_Self-Review_Scores_w.2F_Peer-Reviews Display Self-Review Scores with Peer-Reviews] page.&lt;br /&gt;
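A minimal sketch of the gating check (the helper names and the participant shape are hypothetical; in Expertiza the check would query the participant's self-review response in the database):

```ruby
# Hypothetical helper: true once the participant has submitted a
# self-review for the assignment.
def self_review_complete?(participant)
  participant.fetch(:self_review_submitted, false)
end

# The "Your scores" link stays disabled until the self review is in;
# only then is the scores page reachable.
def scores_link_enabled?(participant)
  self_review_complete?(participant)
end

puts scores_link_enabled?(self_review_submitted: false)  # false
puts scores_link_enabled?(self_review_submitted: true)   # true
```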
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
Our project will utilize various testing techniques. These methods of testing involve manual testing (black box testing) and RSpec testing (white box testing). &lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
The steps outlined for manual testing will become clearer upon implementation, but the proposed plan is the following:&lt;br /&gt;
&lt;br /&gt;
==== Prerequisite Steps ====&lt;br /&gt;
&lt;br /&gt;
:1. Log in to the development instance of Expertiza as an ''instructor''&lt;br /&gt;
:2. Create an assignment that allows for self-reviews&lt;br /&gt;
::2.1. Make sure the submission deadline is after the current date and time&lt;br /&gt;
::2.2. Likewise, the review deadline should then be greater than the submission deadline&lt;br /&gt;
:3. Add two students to the newly created assignment (theoretically ''student1'' and ''student2'')&lt;br /&gt;
:4. Sign in as ''student1'' &lt;br /&gt;
::4.1. Submit any file or link to the new assignment&lt;br /&gt;
:5. Repeat step 4 as ''student2''&lt;br /&gt;
:6. Log back in as the ''instructor''&lt;br /&gt;
::6.1. Alter the assignment's submission date to be in the past, enabling peer reviews&lt;br /&gt;
:7. Log back in as ''student2''&lt;br /&gt;
::7.1. Perform a peer-review on the submission from ''student1''&lt;br /&gt;
&lt;br /&gt;
==== Must Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
This test will assure that the logged in user cannot view their peer-reviews for a given assignment unless they have performed a self-review.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Attempt to view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot; within the assignment. This button should be disabled&lt;br /&gt;
:3. Perform a self-review&lt;br /&gt;
:4. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. This button should now be enabled&lt;br /&gt;
&lt;br /&gt;
==== Viewing Self Review Score Juxtaposed with Peer Review Scores ====&lt;br /&gt;
&lt;br /&gt;
This test confirms that the student's self-review scores are displayed with peer-review scores. It additionally confirms that self-review scores are factored into the review grading through the calculation of a composite score.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. If this button is disabled, the user must first perform a self-review&lt;br /&gt;
::2.1. Assure that there is a column for self-review scores&lt;br /&gt;
::2.2. Confirm there is a composite score calculation underneath the average peer review score.&lt;br /&gt;
:3. Go back to the assignment view. Click &amp;quot;Alternate View&amp;quot;&lt;br /&gt;
::3.1. Confirm that there is a column in the grades table displaying composite score&lt;br /&gt;
::3.2. Check that there is a doughnut chart displaying the composite score&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
We plan to implement new RSpec tests to verify our implementations of the composite score calculation and the requirement to self-review first. Once the tests are written, we will be able to go into more depth on the details of our testing. &lt;br /&gt;
&lt;br /&gt;
==== Test Composite Score Derivation ====&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
==== Test Requirement to Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;br /&gt;
&lt;br /&gt;
[https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades E1984 wiki]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=136278</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=136278"/>
		<updated>2020-10-27T19:50:00Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Implementation Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
==== Design Plan ====&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer review scores for each review question and an average of those peer-reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer review score. How we plan to derive a composite score is explained in detail in the following section [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that the authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous page on the score view.&lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from (1) the peer reviews and (2) how closely the self review matches the peer reviews uses a type of additive scoring rule, which computes a weighted average between the team score (peer reviews) and the student rating (self review). More specifically, it uses a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews) and adds its weighted version to the weighted peer review score. This is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0%-100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows: &lt;br /&gt;
   function(avg_peer_rev_score, self_rev_score, w)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where: &lt;br /&gt;
avg_peer_review_score is simply the mechanism already existing in Expertiza for assigning a grade from peer review scores.&lt;br /&gt;
&lt;br /&gt;
and where:&lt;br /&gt;
w - weight - (0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the original grade determination, the peer review scores; conversely, (1-w) is the proportion of the final grade determined by the closeness of the self review to the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* The average peer review score is 4/5, the self review score is 5/5.&lt;br /&gt;
* The instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 ('''80%''') is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = '''79%'''.&lt;br /&gt;
If the instructor chose w to equal 0.85 (instead of 0.95), the grade is '''77%''' (instead of 79%) because deviation from peer reviews is a larger weighted value of the final grade.&lt;br /&gt;
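The worked example above can be sketched in Ruby. Method and variable names here are illustrative, not Expertiza's actual API; scores are written as fractions of the maximum, so 4/5 becomes 0.8 and 5/5 becomes 1.0.&lt;br /&gt;

```ruby
# Sketch of the basic composite-score formula described above.
# Names (composite_score, avg_peer, self_score) are illustrative assumptions.
def composite_score(avg_peer, self_score, w)
  deviation = (avg_peer - self_score).abs / avg_peer
  self_component = avg_peer * (1 - deviation)   # the SELF term
  w * avg_peer + (1 - w) * self_component
end

# Worked example from the text: average peer score 4/5, self score 5/5
puts composite_score(0.8, 1.0, 0.95).round(2)  # => 0.79
puts composite_score(0.8, 1.0, 0.85).round(2)  # => 0.77
```

Both calls reproduce the percentages computed by hand above.&lt;br /&gt;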
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula. It is basic in that deviations of the self review scores from the peer review scores can only decrease the final grade, ''and'' a deviation will ''always'' decrease the final grade. We propose another parameter, l - leniency - as another way (in addition to w - weight) for the instructor to modularly determine the final grade for an assignment. The leniency parameter sets a threshold (measured as percentage deviation from the average peer review): the final grade is adjusted for a self review's deviation from the peer reviews only when the deviation reaches this threshold. If the deviation does not meet the threshold, no penalty is subtracted from the peer review score. In addition, if the deviation does not meet the threshold (the self review score is sufficiently close to the peer review scores), the instructor can choose to add points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply picks l - leniency - as a percentage (similar to the functionality of w). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To recap: w should be chosen based on the instructor's desired percentage (w) ''of the final grade'' to be determined from peer reviews and, conversely, the instructor's desired percentage (1-w) ''of the final grade'' to be determined by the extent to which self reviews deviate from peer reviews. In addition, l - leniency, should be chosen based on the instructor's desired percentage ''of the deviation of self review from peer review'' that could result in no grade deduction from the deviation if the deviation is sufficiently small (or even a grade increase if the instructor wants to increase the score of individuals with a small deviation).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for the case where an instructor wishes not to subtract from the final grade when the deviation is sufficiently small. Notice that the leniency condition, the instructor's desired percentage ''of the deviation of self review from peer review'', is naturally part of the grading formula in SELF:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = avg_peer_review_score&lt;br /&gt;
   else&lt;br /&gt;
      run formula&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition, instead of assigning a final grade equal to the avg_peer_review_score when the leniency condition is met, the grade can be adjusted (increased) if the instructor wishes, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula for determining the final grade would then add the small deviation to the final grade rather than subtracting it (in SELF, 1 - ..., is changed to 1 + ...):&lt;br /&gt;
   function(avg_peer_rev_score, self_rev_score, w)     &lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this case, the pseudo-code is: &lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
   else&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''-''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer review score is 4/5, self review score is 5/5, w = 0.95), the self review score (5/5) differs by 25% of the peer review score (4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = 1/4 = 25%. Based on l - leniency, the instructor can decide:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) if a 25% deviation is sufficiently large to warrant penalizing the final grade by (1-w)*(SELF) (so that the final grade is '''79%''', instead of 80%).&lt;br /&gt;
&lt;br /&gt;
2) if a 25% deviation is sufficiently small to warrant keeping the final grade as grade = avg_peer_review_score, with no penalty for the deviation (so that the grade is '''80%''')&lt;br /&gt;
&lt;br /&gt;
3) if a 25% deviation is sufficiently small to warrant increasing the final grade by (1-w)*(SELF), where the SELF formula contains a 1 + ..., instead of a 1 - ... (so that the grade is '''81%''')&lt;br /&gt;
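The three cases above can be sketched as a single Ruby function. All names, and the optional bonus flag for the &quot;1 + ...&quot; variant, are illustrative assumptions rather than the actual implementation.&lt;br /&gt;

```ruby
# Sketch of the leniency-aware formula. leniency is the threshold deviation;
# bonus: true selects the "1 + ..." variant that rewards a close self-review.
def composite_score_with_leniency(avg_peer, self_score, w, leniency, bonus: false)
  deviation = (avg_peer - self_score).abs / avg_peer
  if deviation > leniency
    # case 1: deviation exceeds the threshold, so it is penalized
    w * avg_peer + (1 - w) * (avg_peer * (1 - deviation))
  elsif bonus
    # case 3: within the threshold, and the instructor opts to reward closeness
    w * avg_peer + (1 - w) * (avg_peer * (1 + deviation))
  else
    # case 2: within the threshold, no adjustment to the peer-review grade
    avg_peer
  end
end

# The running example has a 25% deviation (avg 4/5, self 5/5, w = 0.95);
# the instructor's choice of leniency decides which case applies.
puts composite_score_with_leniency(0.8, 1.0, 0.95, 0.10).round(2)               # => 0.79
puts composite_score_with_leniency(0.8, 1.0, 0.95, 0.30).round(2)               # => 0.8
puts composite_score_with_leniency(0.8, 1.0, 0.95, 0.30, bonus: true).round(2)  # => 0.81
```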
&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We can obtain the peer review ratings and the score/grade derived from them by calling scores() and compute_assignment_score(), respectively, from assignment_participant.rb.&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
Completing a self review before seeing peer reviews allows the author of the assignment to form an unbiased opinion of the quality of the work they are submitting. When their judgment of the work is not influenced by others who have given feedback, they get a clearer view of the strengths and weaknesses of their assignment. Self reviewing before seeing peer reviews can also benefit the user by showing whether their approach to the solution is correct compared to their peers'. &lt;br /&gt;
&lt;br /&gt;
In the current implementation of self review, the user is able to see the peer reviews before they have submitted their own self review. This occurs because, when the user goes to see their scores, the page does not check that a self review has been submitted. To fix this issue, we will add a boolean parameter to self review and pass it to the viewing pages where it is used. When a user is at the student tasks view, the &amp;quot;Your scores&amp;quot; link will be disabled if the user has not filled out their self review. If the user has filled out their self review, then he/she will be redirected to the [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Display_Self-Review_Scores_w.2F_Peer-Reviews Display Self-Review Scores with Peer-Reviews] page.&lt;br /&gt;
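A minimal sketch of the gating logic, assuming a hypothetical self_review_submitted boolean passed to the view; the real implementation would derive it from Expertiza's response records.&lt;br /&gt;

```ruby
# Hypothetical sketch of the "self-review first" gate for the student task view.
# StudentTaskView and self_review_submitted are illustrative stand-ins for the
# boolean parameter described above, not Expertiza's actual classes.
StudentTaskView = Struct.new(:self_review_submitted) do
  # "Your scores" stays disabled until the student submits a self-review
  def scores_link_enabled?
    self_review_submitted
  end
end

puts StudentTaskView.new(false).scores_link_enabled?  # => false
puts StudentTaskView.new(true).scores_link_enabled?   # => true
```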
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
Our project will utilize various testing techniques. These methods of testing involve manual testing (black box testing) and RSpec testing (white box testing). &lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
The steps outlined for manual testing will become clearer upon implementation, but the proposed plan is the following:&lt;br /&gt;
&lt;br /&gt;
==== Prerequisite Steps ====&lt;br /&gt;
&lt;br /&gt;
:1. Log in to the development instance of Expertiza as an ''instructor''&lt;br /&gt;
:2. Create an assignment that allows for self-reviews&lt;br /&gt;
::2.1. Make sure the submission deadline is after the current date and time&lt;br /&gt;
::2.2. Likewise, the review deadline should then be greater than the submission deadline&lt;br /&gt;
:3. Add two students to the newly created assignment (say, ''student1'' and ''student2'')&lt;br /&gt;
:4. Sign in as ''student1'' &lt;br /&gt;
::4.1. Submit any file or link to the new assignment&lt;br /&gt;
:5. Repeat step 4 as ''student2''&lt;br /&gt;
:6. Log back in as the ''instructor''&lt;br /&gt;
::6.1. Alter the assignment's submission date to be in the past, enabling peer reviews&lt;br /&gt;
:7. Log back in as ''student2''&lt;br /&gt;
::7.1. Perform a peer-review on the submission from ''student1''&lt;br /&gt;
&lt;br /&gt;
==== Must Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
This test will assure that the logged in user cannot view their peer-reviews for a given assignment unless they have performed a self-review.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Attempt to view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot; within the assignment. This button should be disabled&lt;br /&gt;
:3. Perform a self-review&lt;br /&gt;
:4. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. This button should now be enabled&lt;br /&gt;
&lt;br /&gt;
==== Viewing Self Review Score Juxtaposed with Peer Review Scores ====&lt;br /&gt;
&lt;br /&gt;
This test confirms that the student's self-review scores are displayed alongside peer-review scores. It additionally confirms that self-review scores are factored into review grading through the calculation of a composite score.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. If this button is disabled, the user must first complete a self-review&lt;br /&gt;
::2.1. Assure that there is a column for self-review scores&lt;br /&gt;
::2.2. Confirm there is a composite score calculation underneath the average peer review score.&lt;br /&gt;
:3. Go back to the assignment view. Click &amp;quot;Alternate View&amp;quot;&lt;br /&gt;
::3.1. Confirm that there is a column in the grades table displaying composite score&lt;br /&gt;
::3.2. Check that there is a doughnut chart displaying the composite score&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
We plan to implement new RSpec tests to verify our implementations of the composite score calculation and the requirement to self-review first. Once the tests are written, we will be able to go into more depth on the details of our testing. &lt;br /&gt;
&lt;br /&gt;
==== Test Composite Score Derivation ====&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
==== Test Requirement to Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;br /&gt;
&lt;br /&gt;
[https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades E1984 wiki]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=136276</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=136276"/>
		<updated>2020-10-27T19:49:25Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Implementation Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
==== Design Plan ====&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer review scores for each review question and an average of those peer-reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer review score. How we plan to derive a composite score is explained in detail in the following section [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that the authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous page on the score view.&lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from (1) the peer reviews and (2) how closely the self review matches the peer reviews uses a type of additive scoring rule, which computes a weighted average between the team score (peer reviews) and the student rating (self review). More specifically, it uses a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews) and adds its weighted version to the weighted peer review score. This is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0%-100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows: &lt;br /&gt;
   function(avg_peer_rev_score, self_rev_score, w)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where: &lt;br /&gt;
avg_peer_review_score is simply the mechanism already existing in Expertiza for assigning a grade from peer review scores.&lt;br /&gt;
&lt;br /&gt;
and where:&lt;br /&gt;
w - weight - (0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the original grade determination, the peer review scores; conversely, (1-w) is the proportion of the final grade determined by the closeness of the self review to the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* The average peer review score is 4/5, the self review score is 5/5.&lt;br /&gt;
* The instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 ('''80%''') is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = '''79%'''.&lt;br /&gt;
If the instructor chose w to equal 0.85 (instead of 0.95), the grade is '''77%''' (instead of 79%) because deviation from peer reviews is a larger weighted value of the final grade.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula. It is basic in that deviations of the self review scores from the peer review scores can only decrease the final grade, ''and'' a deviation will ''always'' decrease the final grade. We propose another parameter, l - leniency - as another way (in addition to w - weight) for the instructor to modularly determine the final grade for an assignment. The leniency parameter sets a threshold (measured as percentage deviation from the average peer review): the final grade is adjusted for a self review's deviation from the peer reviews only when the deviation reaches this threshold. If the deviation does not meet the threshold, no penalty is subtracted from the peer review score. In addition, if the deviation does not meet the threshold (the self review score is sufficiently close to the peer review scores), the instructor can choose to add points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply picks l - leniency - as a percentage (similar to the functionality of w). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To recap: w should be chosen based on the instructor's desired percentage (w) ''of the final grade'' to be determined from peer reviews and, conversely, the instructor's desired percentage (1-w) ''of the final grade'' to be determined by the extent to which self reviews deviate from peer reviews. In addition, l - leniency, should be chosen based on the instructor's desired percentage ''of the deviation of self review from peer review'' that could result in no grade deduction from the deviation if the deviation is sufficiently small (or even a grade increase if the instructor wants to increase the score of individuals with a small deviation).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for the case where an instructor wishes not to subtract from the final grade when the deviation is sufficiently small. Notice that the leniency condition, the instructor's desired percentage ''of the deviation of self review from peer review'', is naturally part of the grading formula in SELF:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = avg_peer_review_score&lt;br /&gt;
   else&lt;br /&gt;
      run formula&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition, instead of assigning a final grade equal to the avg_peer_review_score when the leniency condition is met, the grade can be adjusted (increased) if the instructor wishes, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula for determining the final grade would then add the small deviation to the final grade rather than subtracting it (in SELF, 1 - ..., is changed to 1 + ...):&lt;br /&gt;
   function(avg_peer_rev_score, self_rev_score, w)     &lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this case, the pseudo-code is: &lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
   else&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''-''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer review score is 4/5, self review score is 5/5, w = 0.95), the self review score (5/5) differs by 25% of the peer review score (4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = 1/4 = 25%. Based on l - leniency, the instructor can decide:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) if a 25% deviation is sufficiently large to warrant penalizing the final grade via the (1-w)*(SELF) term (so that the final grade is '''79%''' instead of 80%).&lt;br /&gt;
&lt;br /&gt;
2) if a 25% deviation is sufficiently small to warrant keeping the final grade as grade = avg_peer_review_score, with no penalty for the deviation (so that the grade is '''80%''')&lt;br /&gt;
&lt;br /&gt;
3) if a 25% deviation is sufficiently small to warrant increasing the final grade via the (1-w)*(SELF) term, where the SELF formula contains 1 + ... instead of 1 - ... (so that the grade is '''81%''')&lt;br /&gt;
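The three options above can be sketched in Ruby as follows (a sketch only; lenient_composite_grade and reward_small_deviation are illustrative names, not existing Expertiza code):&lt;br /&gt;

```ruby
# Sketch of the leniency-adjusted composite-grade formula.
# lenient_composite_grade and reward_small_deviation are illustrative names,
# not existing Expertiza code. avg_peer and self_score are fractions in 0..1,
# w is the peer-review weight, l is the leniency threshold.
def lenient_composite_grade(avg_peer, self_score, w, l, reward_small_deviation)
  deviation = (avg_peer - self_score).abs / avg_peer
  if deviation > l
    # deviation exceeds the leniency threshold: apply the standard deduction
    w * avg_peer + (1 - w) * avg_peer * (1 - deviation)
  elsif reward_small_deviation
    # sufficiently close: add the small deviation back instead (1 + ...)
    w * avg_peer + (1 - w) * avg_peer * (1 + deviation)
  else
    # sufficiently close: no penalty, keep the peer-review grade
    avg_peer
  end
end

# Peers give 4/5, self review is 5/5, w = 0.95; the deviation is 25 percent:
lenient_composite_grade(0.8, 1.0, 0.95, 0.2, false)  # option 1: penalized, about 0.79
lenient_composite_grade(0.8, 1.0, 0.95, 0.3, false)  # option 2: within leniency, stays 0.80
lenient_composite_grade(0.8, 1.0, 0.95, 0.3, true)   # option 3: within leniency, about 0.81
```

Choosing l below the actual deviation reproduces the basic penalizing formula, so the leniency parameter only changes the result for sufficiently close self reviews.&lt;br /&gt;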
&lt;br /&gt;
&lt;br /&gt;
To incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We can obtain the peer review ratings and the score/grade derived from them by calling scores() and compute_assignment_score(), respectively, from assignment_participant.rb.&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
Completing a self review before seeing peer reviews allows the author of the assignment to form an unbiased opinion of the quality of the work they are submitting. When their judgement of the work is not influenced by others' feedback, they get a clearer view of the strengths and weaknesses of their assignment. A self review performed before reading peer reviews can also show the user whether their approach to the solution matches that of their peers. &lt;br /&gt;
&lt;br /&gt;
In the current implementation of self review, the user is able to see the peer reviews before they have submitted their own self review. This happens because the scores page does not check whether a self review has been submitted. To fix this issue, we will add a boolean parameter to self review and pass it to the views where it is needed. On the student tasks view, the &amp;quot;Your scores&amp;quot; link will be disabled if the user has not filled out their self review. If the user has filled out their self review, then they will be redirected to the [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Display_Self-Review_Scores_w.2F_Peer-Reviews Display Self-Review Scores with Peer-Reviews] page.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
Our project will use two testing techniques: manual testing (black-box testing) and RSpec testing (white-box testing). &lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
The steps outlined for manual testing will become clearer upon implementation, but the proposed plan is the following:&lt;br /&gt;
&lt;br /&gt;
==== Prerequisite Steps ====&lt;br /&gt;
&lt;br /&gt;
:1. Log in to the development instance of Expertiza as an ''instructor''&lt;br /&gt;
:2. Create an assignment that allows for self-reviews&lt;br /&gt;
::2.1. Make sure the submission deadline is after the current date and time&lt;br /&gt;
::2.2. Likewise, the review deadline should then be greater than the submission deadline&lt;br /&gt;
:3. Add two students to the newly created assignment (theoretically ''student1'' and ''student2'')&lt;br /&gt;
:4. Sign in as ''student1'' &lt;br /&gt;
::4.1. Submit any file or link to the new assignment&lt;br /&gt;
:5. Repeat step 4 as ''student2''&lt;br /&gt;
:6. Log back in as the ''instructor''&lt;br /&gt;
::6.1. Alter the assignment's submission date to be in the past, enabling peer reviews&lt;br /&gt;
:7. Log back in as ''student2''&lt;br /&gt;
::7.1. Perform a peer-review on the submission from ''student1''&lt;br /&gt;
&lt;br /&gt;
==== Must Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
This test will ensure that the logged-in user cannot view their peer-reviews for a given assignment unless they have performed a self-review.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Attempt to view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot; within the assignment. This button should be disabled&lt;br /&gt;
:3. Perform a self-review&lt;br /&gt;
:4. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. This button should now be enabled&lt;br /&gt;
&lt;br /&gt;
==== Viewing Self Review Score Juxtaposed with Peer Review Scores ====&lt;br /&gt;
&lt;br /&gt;
This test confirms that the student's self-review scores are displayed with peer-review scores. It additionally confirms that self-review scores are factored into review grading through the calculation of a composite score.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. If this button is disabled, the user must first perform a self-review&lt;br /&gt;
::2.1. Ensure that there is a column for self-review scores&lt;br /&gt;
::2.2. Confirm there is a composite score calculation underneath the average peer review score.&lt;br /&gt;
:3. Go back to the assignment view. Click &amp;quot;Alternate View&amp;quot;&lt;br /&gt;
::3.1. Confirm that there is a column in the grades table displaying composite score&lt;br /&gt;
::3.2. Check if there is a doughnut chart displaying the composite score&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
We plan to implement new RSpec tests to verify our implementations of the composite score calculation and the requirement to self-review first. Once written, we will be able to go more in-depth on the details of our testing. &lt;br /&gt;
&lt;br /&gt;
==== Test Composite Score Derivation ====&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
==== Test Requirement to Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;br /&gt;
&lt;br /&gt;
[https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades E1984 wiki]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=136274</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=136274"/>
		<updated>2020-10-27T19:47:09Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Ensure that we overcome the issues outlined in the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
==== Design Plan ====&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer-review scores for each review question and an average of those peer reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer-review score. How we plan to derive a composite score is explained in detail in the following section, [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that the authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous place on the score view.&lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
[reference, DOI: 10.21125]&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from (1) the peer reviews and (2) how closely self reviews match the peer reviews uses a type of additive scoring rule, which computes a weighted average between the team score (peer reviews) and the student rating (self review). More specifically, it uses a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews) and adds the weighted result to the weighted peer-review score. This is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0% to 100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows: &lt;br /&gt;
   function(avg_peer_rev_score, self_rev_score, w)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where: &lt;br /&gt;
avg_peer_review_score is simply the mechanism already existing in Expertiza for assigning a grade from peer review scores.&lt;br /&gt;
&lt;br /&gt;
and where:&lt;br /&gt;
w (weight, 0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the original grade determination, the peer-review scores; conversely, (1-w) is the proportion of the final grade determined by the closeness of the self review to the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* The average peer review score is 4/5, the self review score is 5/5.&lt;br /&gt;
* The instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 ('''80%''') is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = '''79%'''.&lt;br /&gt;
If the instructor instead chose w to equal 0.85, the grade would be '''77%''' (instead of 79%), because the deviation from the peer reviews carries a larger weight in the final grade.&lt;br /&gt;
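As a minimal illustration of the formula and example above, the following Ruby sketch (composite_grade is a hypothetical helper name, not existing Expertiza code) computes the composite grade from the average peer-review score, the self-review score, and the weight w:&lt;br /&gt;

```ruby
# Minimal sketch of the basic weighted composite-grade formula.
# composite_grade is a hypothetical helper name, not existing Expertiza code.
# avg_peer and self_score are fractions between 0 and 1; w is the peer weight.
def composite_grade(avg_peer, self_score, w)
  deviation = (avg_peer - self_score).abs / avg_peer
  self_component = avg_peer * (1 - deviation)   # the SELF term
  w * avg_peer + (1 - w) * self_component
end

# Worked example from the text: peers give 4/5, self review is 5/5.
composite_grade(0.8, 1.0, 0.95)   # about 0.79
composite_grade(0.8, 1.0, 0.85)   # about 0.77
```

When the deviation is zero, the SELF term reduces to avg_peer, so a self review that exactly matches the peer average leaves the grade unchanged.&lt;br /&gt;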
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula. It is basic in that a deviation of the self-review score from the peer-review scores can only decrease the final grade, ''and'' a deviation will ''always'' decrease the final grade. We propose another parameter, l (leniency), as an additional way (besides w, the weight) for the instructor to modularly determine the final grade for an assignment. The leniency l sets a threshold: the final grade is adjusted for the self review's deviation from the peer reviews only when the deviation reaches this threshold (measured as a percentage deviation from the average peer review). If the deviation does not meet the threshold, no penalty is subtracted from the peer-review grade. Alternatively, when the deviation does not meet the threshold (the self-review score is sufficiently close to the peer-review scores), the instructor can choose to add points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply picks l as a percentage (similar to w). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To recap: w should be chosen as the instructor's desired percentage ''of the final grade'' to be determined from peer reviews; conversely, (1-w) is the percentage ''of the final grade'' determined by the extent to which self reviews deviate from peer reviews. In addition, l (leniency) should be chosen as the instructor's desired percentage ''of the deviation of self review from peer review'' below which the deviation results in no grade deduction (or even a grade increase, if the instructor wants to reward individuals with a small deviation).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for the case where an instructor wishes not to subtract from the final grade when the deviation is sufficiently small. Notice that the leniency condition, the instructor's desired percentage ''of the deviation of self review from peer review'', is naturally part of the grading formula in SELF:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = avg_peer_review_score&lt;br /&gt;
   else&lt;br /&gt;
      run formula&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition, instead of assigning a final grade equal to avg_peer_review_score when the leniency condition is met, the instructor can choose to adjust (increase) the grade, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula for the final grade would then add the small deviation to the grade rather than subtracting it (in SELF, 1 - ... is changed to 1 + ...):&lt;br /&gt;
   function(avg_peer_rev_score, self_rev_score, w)     &lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this case, the pseudo-code is: &lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
   else&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''-''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer review score is 4/5, self review score is 5/5, w = 0.95), the self review score (5/5) differs by 25% of the peer review score (4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = 1/4 = 25%. Based on l - leniency, the instructor can decide:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) if a 25% deviation is sufficiently large to warrant penalizing the final grade via the (1-w)*(SELF) term (so that the final grade is '''79%''' instead of 80%).&lt;br /&gt;
&lt;br /&gt;
2) if a 25% deviation is sufficiently small to warrant keeping the final grade as grade = avg_peer_review_score, with no penalty for the deviation (so that the grade is '''80%''')&lt;br /&gt;
&lt;br /&gt;
3) if a 25% deviation is sufficiently small to warrant increasing the final grade via the (1-w)*(SELF) term, where the SELF formula contains 1 + ... instead of 1 - ... (so that the grade is '''81%''')&lt;br /&gt;
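The three options above can be sketched in Ruby as follows (a sketch only; lenient_composite_grade and reward_small_deviation are illustrative names, not existing Expertiza code):&lt;br /&gt;

```ruby
# Sketch of the leniency-adjusted composite-grade formula.
# lenient_composite_grade and reward_small_deviation are illustrative names,
# not existing Expertiza code. avg_peer and self_score are fractions in 0..1,
# w is the peer-review weight, l is the leniency threshold.
def lenient_composite_grade(avg_peer, self_score, w, l, reward_small_deviation)
  deviation = (avg_peer - self_score).abs / avg_peer
  if deviation > l
    # deviation exceeds the leniency threshold: apply the standard deduction
    w * avg_peer + (1 - w) * avg_peer * (1 - deviation)
  elsif reward_small_deviation
    # sufficiently close: add the small deviation back instead (1 + ...)
    w * avg_peer + (1 - w) * avg_peer * (1 + deviation)
  else
    # sufficiently close: no penalty, keep the peer-review grade
    avg_peer
  end
end

# Peers give 4/5, self review is 5/5, w = 0.95; the deviation is 25 percent:
lenient_composite_grade(0.8, 1.0, 0.95, 0.2, false)  # option 1: penalized, about 0.79
lenient_composite_grade(0.8, 1.0, 0.95, 0.3, false)  # option 2: within leniency, stays 0.80
lenient_composite_grade(0.8, 1.0, 0.95, 0.3, true)   # option 3: within leniency, about 0.81
```

Choosing l below the actual deviation reproduces the basic penalizing formula, so the leniency parameter only changes the result for sufficiently close self reviews.&lt;br /&gt;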
&lt;br /&gt;
&lt;br /&gt;
To incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We can obtain the peer review ratings and the score/grade derived from them by calling scores() and compute_assignment_score(), respectively, from assignment_participant.rb.&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
Completing a self review before seeing peer reviews allows the author of the assignment to form an unbiased opinion of the quality of the work they are submitting. When their judgement of the work is not influenced by others' feedback, they get a clearer view of the strengths and weaknesses of their assignment. A self review performed before reading peer reviews can also show the user whether their approach to the solution matches that of their peers. &lt;br /&gt;
&lt;br /&gt;
In the current implementation of self review, the user is able to see the peer reviews before they have submitted their own self review. This happens because the scores page does not check whether a self review has been submitted. To fix this issue, we will add a boolean parameter to self review and pass it to the views where it is needed. On the student tasks view, the &amp;quot;Your scores&amp;quot; link will be disabled if the user has not filled out their self review. If the user has filled out their self review, then they will be redirected to the [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Display_Self-Review_Scores_w.2F_Peer-Reviews Display Self-Review Scores with Peer-Reviews] page.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
Our project will use two testing techniques: manual testing (black-box testing) and RSpec testing (white-box testing). &lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
The steps outlined for manual testing will become clearer upon implementation, but the proposed plan is the following:&lt;br /&gt;
&lt;br /&gt;
==== Prerequisite Steps ====&lt;br /&gt;
&lt;br /&gt;
:1. Log in to the development instance of Expertiza as an ''instructor''&lt;br /&gt;
:2. Create an assignment that allows for self-reviews&lt;br /&gt;
::2.1. Make sure the submission deadline is after the current date and time&lt;br /&gt;
::2.2. Likewise, the review deadline should then be greater than the submission deadline&lt;br /&gt;
:3. Add two students to the newly created assignment (theoretically ''student1'' and ''student2'')&lt;br /&gt;
:4. Sign in as ''student1'' &lt;br /&gt;
::4.1. Submit any file or link to the new assignment&lt;br /&gt;
:5. Repeat step 4 as ''student2''&lt;br /&gt;
:6. Log back in as the ''instructor''&lt;br /&gt;
::6.1. Alter the assignment's submission date to be in the past, enabling peer reviews&lt;br /&gt;
:7. Log back in as ''student2''&lt;br /&gt;
::7.1. Perform a peer-review on the submission from ''student1''&lt;br /&gt;
&lt;br /&gt;
==== Must Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
This test will ensure that the logged-in user cannot view their peer-reviews for a given assignment unless they have performed a self-review.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Attempt to view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot; within the assignment. This button should be disabled&lt;br /&gt;
:3. Perform a self-review&lt;br /&gt;
:4. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. This button should now be enabled&lt;br /&gt;
&lt;br /&gt;
==== Viewing Self Review Score Juxtaposed with Peer Review Scores ====&lt;br /&gt;
&lt;br /&gt;
This test confirms that the student's self-review scores are displayed with peer-review scores. It additionally confirms that self-review scores are factored into review grading through the calculation of a composite score.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. If this button is disabled, the user must first perform a self-review&lt;br /&gt;
::2.1. Ensure that there is a column for self-review scores&lt;br /&gt;
::2.2. Confirm there is a composite score calculation underneath the average peer review score.&lt;br /&gt;
:3. Go back to the assignment view. Click &amp;quot;Alternate View&amp;quot;&lt;br /&gt;
::3.1. Confirm that there is a column in the grades table displaying composite score&lt;br /&gt;
::3.2. Check if there is a doughnut chart displaying the composite score&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
We plan to implement new RSpec tests to verify our implementations of the composite score calculation and the requirement to self-review first. Once written, we will be able to go more in-depth on the details of our testing. &lt;br /&gt;
&lt;br /&gt;
==== Test Composite Score Derivation ====&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
==== Test Requirement to Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;br /&gt;
&lt;br /&gt;
[https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades E1984 wiki]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=136272</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=136272"/>
		<updated>2020-10-27T19:46:28Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Ensure that we overcome the issues outlined in the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
==== Design Plan ====&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer-review scores for each review question and an average of those peer reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer-review score. How we plan to derive a composite score is explained in detail in the following section, [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that the authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous place on the score view.&lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
[reference, DOI: 10.21125]&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from (1) the peer reviews and (2) how closely self reviews match the peer reviews uses a type of additive scoring rule, which computes a weighted average between the team score (peer reviews) and the student rating (self review). More specifically, it uses a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews) and adds the weighted result to the weighted peer-review score. This is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0% to 100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows:&lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where:&lt;br /&gt;
avg_peer_review_score comes from the mechanism already existing in Expertiza for assigning a grade from peer review scores,&lt;br /&gt;
&lt;br /&gt;
and where:&lt;br /&gt;
w (weight, 0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the original grade determination (the peer review scores); conversely, (1-w) is the proportion determined by how close the self review is to the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* The average peer review score is 4/5, the self review score is 5/5.&lt;br /&gt;
* The instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 ('''80%''') is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = '''79%'''.&lt;br /&gt;
If the instructor instead chose w to equal 0.85, the grade would be '''77%''' (instead of 79%), because the deviation from the peer reviews carries a larger weight (1-w = 0.15) in the final grade.&lt;br /&gt;
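&lt;br /&gt;
The basic formula above can be sketched as a small Ruby method (Expertiza is a Rails app). This is a sketch only; the method and argument names are illustrative, not Expertiza's actual API:&lt;br /&gt;

```ruby
# Sketch of the basic mixed additive-multiplicative composite grade.
# Names are illustrative; this is not Expertiza's actual API.
# avg_peer:   average peer-review score as a fraction (e.g. 0.8 for 4/5)
# self_score: self-review score as a fraction (e.g. 1.0 for 5/5)
# w:          proportion of the final grade taken from the peer reviews (0..1)
def composite_grade(avg_peer, self_score, w)
  raise ArgumentError, 'w must be in 0..1' unless w.between?(0, 1)
  deviation = (avg_peer - self_score).abs / avg_peer
  self_component = avg_peer * (1 - deviation)   # the SELF term
  w * avg_peer + (1 - w) * self_component
end

# Worked example from the text: peers average 4/5, self review 5/5, w = 0.95
puts composite_grade(0.8, 1.0, 0.95).round(4)   # prints 0.79
puts composite_grade(0.8, 1.0, 0.85).round(4)   # prints 0.77
```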
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula: any deviation of the self review scores from the peer review scores results in a decrease in the final grade, and a deviation ''always'' results in a decrease. We propose another parameter, l (leniency), as a second way (in addition to w) for the instructor to tune how the final grade for an assignment is determined. Leniency sets a threshold, measured as a percentage deviation from the average peer review, below which the deviation is forgiven: if the deviation does not reach the threshold, no penalty is subtracted from the peer review grade. Alternatively, when the deviation stays under the threshold (the self review is sufficiently close to the peer reviews), the instructor can choose to add points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply picks l as a percentage, similar to the way w is chosen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To recap: w should be chosen based on the instructor's desired percentage (w) ''of the final grade'' to be determined from peer reviews and, conversely, the percentage (1-w) ''of the final grade'' to be determined by the extent to which the self review deviates from the peer reviews. In addition, l (leniency) should be chosen as the percentage ''of deviation of the self review from the peer reviews'' below which the deviation incurs no grade deduction (or even earns a grade increase, if the instructor wants to reward individuals with a small deviation).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for the case where the instructor wishes not to subtract from the final grade when the deviation is sufficiently small. Notice that the leniency condition, the instructor's desired percentage ''of the deviation of the self review from the peer reviews'', is naturally part of the grading formula in SELF:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = avg_peer_review_score&lt;br /&gt;
   else&lt;br /&gt;
      run formula&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition, instead of assigning a final grade equal to avg_peer_review_score when the leniency condition is met, the grade can be adjusted upward if the instructor wishes, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula for the final grade would then add the small deviation rather than subtracting it (in SELF, 1 - ... is changed to 1 + ...):&lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this case, the pseudo-code is: &lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l (leniency)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
   else&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(avg_peer_review_score * (1 '''-''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer review score is 4/5, self review score is 5/5, w = 0.95), the self review score (5/5) differs by 25% of the peer review score (4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = 1/4 = 25%. Based on l - leniency, the instructor can decide:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) if a 25% deviation is sufficiently large to warrant penalizing the final grade by (1-w)*(SELF) (so that the final grade is '''79%''', instead of 80%).&lt;br /&gt;
&lt;br /&gt;
2) if a 25% deviation is sufficiently small to warrant keeping the final grade as grade = avg_peer_review_score, with no penalty for the deviation (so that the grade is '''80%''')&lt;br /&gt;
&lt;br /&gt;
3) if a 25% deviation is sufficiently small to warrant increasing the final grade by (1-w)*(SELF), where the SELF formula contains a 1 + ..., instead of a 1 - ... (so that the grade is '''81%''')&lt;br /&gt;
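&lt;br /&gt;
The three options above can be combined into one hedged Ruby sketch. The names are again illustrative, and the bonus flag is our own device for switching between options 2 and 3, not an existing Expertiza setting:&lt;br /&gt;

```ruby
# Sketch of the leniency extension. Illustrative names only.
# leniency: threshold on the relative deviation (e.g. 0.3 for 30%)
# bonus: if true, a within-threshold deviation adds points (option 3);
#        if false, a within-threshold deviation is simply not penalized (option 2)
def composite_grade_with_leniency(avg_peer, self_score, w, leniency, bonus: false)
  deviation = (avg_peer - self_score).abs / avg_peer
  if deviation.between?(0, leniency)    # self review is close enough to the peers
    return avg_peer unless bonus        # option 2: no penalty, keep peer grade
    w * avg_peer + (1 - w) * avg_peer * (1 + deviation)   # option 3: small bonus
  else
    w * avg_peer + (1 - w) * avg_peer * (1 - deviation)   # option 1: penalty
  end
end

# With an average peer score of 4/5, a self score of 5/5 (25% deviation), w = 0.95:
puts composite_grade_with_leniency(0.8, 1.0, 0.95, 0.10).round(4)               # prints 0.79
puts composite_grade_with_leniency(0.8, 1.0, 0.95, 0.30).round(4)               # prints 0.8
puts composite_grade_with_leniency(0.8, 1.0, 0.95, 0.30, bonus: true).round(4)  # prints 0.81
```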
&lt;br /&gt;
&lt;br /&gt;
To incorporate the composite score into grading, we will change the logic in grades_controller.rb, where the grading formula is applied. We can obtain the peer review ratings and the score/grade derived from them by calling scores() and compute_assignment_score(), respectively, from assignment_participant.rb.&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
Requiring a self review before seeing peer reviews allows the author of the assignment to form an unbiased opinion of the quality of the work they are submitting. When their judgement is not influenced by feedback from others, they get a clearer view of the strengths and weaknesses of their assignment. A self review written before seeing peer reviews can also show the user whether their approach to the solution was on track compared to their peers'.&lt;br /&gt;
&lt;br /&gt;
In the current implementation of self review, the user is able to see peer reviews before they have submitted their own self review. This occurs because the scores page does not check that a self review has been submitted. To fix this, we will add a boolean parameter to self review and pass it to the views where it is needed. On the student tasks view, the &amp;quot;Your scores&amp;quot; link will be disabled if the user has not filled out their self review. Once the self review is submitted, the link will take the user to the [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Display_Self-Review_Scores_w.2F_Peer-Reviews Display Self-Review Scores with Peer-Reviews] page.&lt;br /&gt;
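&lt;br /&gt;
The gating check can be sketched as a small Ruby predicate. The hash-based data shapes here (:reviewer_id, :submitted) are hypothetical stand-ins for Expertiza's actual models:&lt;br /&gt;

```ruby
# Hypothetical sketch: has this participant submitted a self review?
# Plain hashes stand in for Expertiza's real ActiveRecord objects.
def self_review_submitted?(participant_id, self_review_responses)
  self_review_responses.any? do |response|
    response[:reviewer_id] == participant_id and response[:submitted]
  end
end

responses = [
  { reviewer_id: 1, submitted: true },   # student1 finished a self review
  { reviewer_id: 2, submitted: false }   # student2 only saved a draft
]

# The "Your scores" link would be enabled only when this returns true.
puts self_review_submitted?(1, responses)   # prints true
puts self_review_submitted?(2, responses)   # prints false
```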
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
Our project will utilize various testing techniques. These methods of testing involve manual testing (black box testing) and RSpec testing (white box testing). &lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
The steps outlined for manual testing will become clearer upon implementation, but the proposed plan is the following:&lt;br /&gt;
&lt;br /&gt;
==== Prerequisite Steps ====&lt;br /&gt;
&lt;br /&gt;
:1. Log in to the development instance of Expertiza as an ''instructor''&lt;br /&gt;
:2. Create an assignment that allows for self-reviews&lt;br /&gt;
::2.1. Make sure the submission deadline is after the current date and time&lt;br /&gt;
::2.2. Likewise, the review deadline should be later than the submission deadline&lt;br /&gt;
:3. Add two students to the newly created assignment (call them ''student1'' and ''student2'')&lt;br /&gt;
:4. Sign in as ''student1''&lt;br /&gt;
::4.1. Submit any file or link to the new assignment&lt;br /&gt;
:5. Repeat step 4 as ''student2''&lt;br /&gt;
:6. Log back in as the ''instructor''&lt;br /&gt;
::6.1. Alter the assignment's submission date to be in the past, enabling peer reviews&lt;br /&gt;
:7. Log back in as ''student2''&lt;br /&gt;
::7.1. Perform a peer-review on the submission from ''student1''&lt;br /&gt;
&lt;br /&gt;
==== Must Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
This test will ensure that the logged-in user cannot view their peer-reviews for a given assignment unless they have performed a self-review.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Attempt to view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot; within the assignment. This button should be disabled&lt;br /&gt;
:3. Perform a self-review&lt;br /&gt;
:4. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. This button should now be enabled&lt;br /&gt;
&lt;br /&gt;
==== Viewing Self Review Score Juxtaposed with Peer Review Scores ====&lt;br /&gt;
&lt;br /&gt;
This test confirms that the student's self-review scores are displayed with the peer-review scores. It additionally confirms that self-review scores are factored into review grading via the calculation of a composite score.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. If this button is disabled, user must self-review&lt;br /&gt;
::2.1. Assure that there is a column for self-review scores&lt;br /&gt;
::2.2. Confirm there is a composite score calculation underneath the average peer review score.&lt;br /&gt;
:3. Go back to the assignment view. Click &amp;quot;Alternate View&amp;quot;&lt;br /&gt;
::3.1. Confirm that there is a column in the grades table displaying composite score&lt;br /&gt;
::3.2. Check if there is a doughnut chart displaying the composite score&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
We plan to implement new RSpec tests to verify our implementations of the composite score calculation and the requirement to self-review first. Once written, we will be able to go more in-depth on the details of our testing. &lt;br /&gt;
&lt;br /&gt;
==== Test Composite Score Derivation ====&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
==== Test Requirement to Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;br /&gt;
&lt;br /&gt;
[https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades Previous implementation's wiki]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135924</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135924"/>
		<updated>2020-10-21T20:37:09Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Implementation Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
==== Design Plan ====&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer review scores for each review question and an average of those peer-reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer review score. How we plan to derive a composite score is explained in detail in the following section [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the author. The result of the formula should be displayed in a conspicuous place on the score view.&lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
[reference, DOI: 10.21125]&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from 1) the peer reviews and 2) how closely the self reviews match the peer reviews is a type of additive scoring rule, which computes a weighted average between the team score (peer reviews) and the student rating (self review). More specifically, it is a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews) and adds its weighted version to the weighted peer review score. This is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0%-100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows:&lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where:&lt;br /&gt;
avg_peer_review_score comes from the mechanism already existing in Expertiza for assigning a grade from peer review scores,&lt;br /&gt;
&lt;br /&gt;
and where:&lt;br /&gt;
w (weight, 0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the original grade determination (the peer review scores); conversely, (1-w) is the proportion determined by how close the self review is to the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* The average peer review score is 4/5, the self review score is 5/5.&lt;br /&gt;
* The instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 ('''80%''') is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = '''79%'''.&lt;br /&gt;
If the instructor instead chose w to equal 0.85, the grade would be '''77%''' (instead of 79%), because the deviation from the peer reviews carries a larger weight (1-w = 0.15) in the final grade.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula: any deviation of the self review scores from the peer review scores results in a decrease in the final grade, and a deviation ''always'' results in a decrease. We propose another parameter, l (leniency), as a second way (in addition to w) for the instructor to tune how the final grade for an assignment is determined. Leniency sets a threshold, measured as a percentage deviation from the average peer review, below which the deviation is forgiven: if the deviation does not reach the threshold, no penalty is subtracted from the peer review grade. Alternatively, when the deviation stays under the threshold (the self review is sufficiently close to the peer reviews), the instructor can choose to add points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply picks l as a percentage, similar to the way w is chosen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To recap: w should be chosen based on the instructor's desired percentage (w) ''of the final grade'' to be determined from peer reviews and, conversely, the percentage (1-w) ''of the final grade'' to be determined by the extent to which the self review deviates from the peer reviews. In addition, l (leniency) should be chosen as the percentage ''of deviation of the self review from the peer reviews'' below which the deviation incurs no grade deduction (or even earns a grade increase, if the instructor wants to reward individuals with a small deviation).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for the case where the instructor wishes not to subtract from the final grade when the deviation is sufficiently small. Notice that the leniency condition, the instructor's desired percentage ''of the deviation of the self review from the peer reviews'', is naturally part of the grading formula in SELF:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = avg_peer_review_score&lt;br /&gt;
   else&lt;br /&gt;
      run formula&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition, instead of assigning a final grade equal to avg_peer_review_score when the leniency condition is met, the grade can be adjusted upward if the instructor wishes, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula for the final grade would then add the small deviation rather than subtracting it (in SELF, 1 - ... is changed to 1 + ...):&lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this case, the pseudo-code is: &lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l (leniency)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
   else&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(avg_peer_review_score * (1 '''-''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer review score is 4/5, self review score is 5/5, w = 0.95), the self review score (5/5) differs by 25% of the peer review score (4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = 1/4 = 25%. Based on l - leniency, the instructor can decide:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) if a 25% deviation is sufficiently large to warrant penalizing the final grade by (1-w)*(SELF) (so that the final grade is '''79%''', instead of 80%).&lt;br /&gt;
&lt;br /&gt;
2) if a 25% deviation is sufficiently small to warrant keeping the final grade as grade = avg_peer_review_score, with no penalty for the deviation (so that the grade is '''80%''')&lt;br /&gt;
&lt;br /&gt;
3) if a 25% deviation is sufficiently small to warrant increasing the final grade by (1-w)*(SELF), where the SELF formula contains a 1 + ..., instead of a 1 - ... (so that the grade is '''81%''')&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To incorporate the composite score into grading, we will change the logic in grades_controller.rb, where the grading formula is applied. We can obtain the peer review ratings and the score/grade derived from them by calling scores() and compute_assignment_score(), respectively, from assignment_participant.rb.&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
Requiring a self review before seeing peer reviews allows the author of the assignment to form an unbiased opinion of the quality of the work they are submitting. When their judgement is not influenced by feedback from others, they get a clearer view of the strengths and weaknesses of their assignment. A self review written before seeing peer reviews can also show the user whether their approach to the solution was on track compared to their peers'.&lt;br /&gt;
&lt;br /&gt;
In the current implementation of self review, the user is able to see peer reviews before they have submitted their own self review. This occurs because the scores page does not check that a self review has been submitted. To fix this, we will add a boolean parameter to self review and pass it to the views where it is needed. On the student tasks view, the &amp;quot;Your scores&amp;quot; link will be disabled if the user has not filled out their self review. Once the self review is submitted, the link will take the user to the [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Display_Self-Review_Scores_w.2F_Peer-Reviews Display Self-Review Scores with Peer-Reviews] page.&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
Our project will utilize various testing techniques. These methods of testing involve manual testing (black box testing) and RSpec testing (white box testing). &lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
The steps outlined for manual testing will become clearer upon implementation, but the proposed plan is the following:&lt;br /&gt;
&lt;br /&gt;
==== Prerequisite Steps ====&lt;br /&gt;
&lt;br /&gt;
:1. Log in to the development instance of Expertiza as an ''instructor''&lt;br /&gt;
:2. Create an assignment that allows for self-reviews&lt;br /&gt;
::2.1. Make sure the submission deadline is after the current date and time&lt;br /&gt;
::2.2. Likewise, the review deadline should be later than the submission deadline&lt;br /&gt;
:3. Add two students to the newly created assignment (call them ''student1'' and ''student2'')&lt;br /&gt;
:4. Sign in as ''student1''&lt;br /&gt;
::4.1. Submit any file or link to the new assignment&lt;br /&gt;
:5. Repeat step 4 as ''student2''&lt;br /&gt;
:6. Log back in as the ''instructor''&lt;br /&gt;
::6.1. Alter the assignment's submission date to be in the past, enabling peer reviews&lt;br /&gt;
:7. Log back in as ''student2''&lt;br /&gt;
::7.1. Perform a peer-review on the submission from ''student1''&lt;br /&gt;
&lt;br /&gt;
==== Must Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
This test will ensure that the logged-in user cannot view their peer-reviews for a given assignment unless they have performed a self-review.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Attempt to view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot; within the assignment. This button should be disabled&lt;br /&gt;
:3. Perform a self-review&lt;br /&gt;
:4. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. This button should now be enabled&lt;br /&gt;
&lt;br /&gt;
==== Viewing Self Review Score Juxtaposed with Peer Review Scores ====&lt;br /&gt;
&lt;br /&gt;
This test confirms that the student's self-review scores are displayed with the peer-review scores. It additionally confirms that self-review scores are factored into review grading via the calculation of a composite score.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. If this button is disabled, user must self-review&lt;br /&gt;
::2.1. Assure that there is a column for self-review scores&lt;br /&gt;
::2.2. Confirm there is a composite score calculation underneath the average peer review score.&lt;br /&gt;
:3. Go back to the assignment view. Click &amp;quot;Alternate View&amp;quot;&lt;br /&gt;
::3.1. Confirm that there is a column in the grades table displaying composite score&lt;br /&gt;
::3.2. Check if there is a doughnut chart displaying the composite score&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
We plan to implement new RSpec tests to verify our implementations of the composite score calculation and the requirement to self-review first. Once written, we will be able to go more in-depth on the details of our testing. &lt;br /&gt;
&lt;br /&gt;
==== Test Composite Score Derivation ====&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
==== Test Requirement to Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135923</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135923"/>
		<updated>2020-10-21T20:31:40Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Implementation Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
==== Design Plan ====&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer review scores for each review question and an average of those peer-reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer review score. How we plan to derive a composite score is explained in detail in the follow section [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that the authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous page on the score view.&lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
[reference, DOI: 10.21125]&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from 1) the peer reviews and 2) how closely the self reviews match the peer reviews uses a type of additive scoring rule, which computes a weighted average between the team score (peer reviews) and the student rating (self review). More specifically, it uses a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews) and adds its weighted version to the weighted peer review score. This is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0%-100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows: &lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where: &lt;br /&gt;
avg_peer_review_score is the grade that Expertiza already derives from the peer review scores.&lt;br /&gt;
&lt;br /&gt;
and where:&lt;br /&gt;
w (weight, 0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the peer review scores; conversely, (1-w) is the proportion of the final grade determined by how closely the self review matches the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* The average peer review score is 4/5, the self review score is 5/5.&lt;br /&gt;
* The instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 ('''80%'''), is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = '''79%'''.&lt;br /&gt;
If the instructor instead chose w to equal 0.85, the grade would be '''77%''' (instead of 79%) because the deviation from the peer reviews carries a larger weight in the final grade.&lt;br /&gt;
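As a minimal sketch (plain Ruby; the method name and the 0-to-1 score scale are our assumptions, not existing Expertiza code), the basic rule could be implemented as:&lt;br /&gt;

```ruby
# Sketch of the basic mixed additive-multiplicative rule described above.
# Illustrative names only -- not existing Expertiza code.
# Scores are fractions in 0.0..1.0 (e.g. 4/5 = 0.8).
def composite_score(avg_peer_review_score, self_review_score, w)
  # Relative deviation of the self review from the peer average.
  deviation = (avg_peer_review_score - self_review_score).abs / avg_peer_review_score
  # SELF term: the peer average scaled down by the deviation.
  self_term = avg_peer_review_score * (1 - deviation)
  w * avg_peer_review_score + (1 - w) * self_term
end

# Worked example from the text: peer average 4/5, self review 5/5, w = 0.95.
puts composite_score(0.8, 1.0, 0.95).round(4) # 0.79
```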
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula. It is basic in that deviations of the self review scores from the peer review scores can only decrease the final grade ''and'' a deviation will ''always'' decrease the final grade. We propose another parameter, l (leniency), as another way (in addition to w) for the instructor to modularly determine the final grade for an assignment. The leniency parameter sets a threshold (measured as a percentage deviation from the average peer review) such that the final grade is adjusted for a self review's deviation from the peer reviews only when the deviation reaches this threshold. If the deviation does not meet the threshold, no penalty is subtracted from the peer review grade. Alternatively, if the deviation does not meet the threshold (i.e., the self review score is sufficiently close to the peer review scores), the instructor can choose to add points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply needs to pick l as a percentage (similar to w). &lt;br /&gt;
&lt;br /&gt;
To recap: w should be chosen as the instructor's desired percentage ''of the final grade'' to be determined by the peer reviews; conversely, (1-w) is the percentage ''of the final grade'' to be determined by the extent to which the self review deviates from the peer reviews. In addition, l (leniency) should be chosen as the percentage ''deviation of the self review from the peer reviews'' below which no grade deduction is applied (or even a grade increase, if the instructor wants to reward individuals with a small deviation).&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for the case where an instructor does not wish to subtract from the final grade when the deviation is sufficiently small. Notice that the leniency condition compares against the same relative deviation of the self review from the peer reviews that already appears in the grading formula's SELF term:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = avg_peer_review_score&lt;br /&gt;
   else&lt;br /&gt;
      run formula&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition, instead of assigning a final grade equal to the avg_peer_review_score when the leniency condition is met, the grade can be adjusted (increased) if the instructor wishes, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula for determining the final grade would then add the small deviation to the final grade rather than subtracting it (in SELF, 1 - ... is changed to 1 + ...):&lt;br /&gt;
   function(avg_peer_rev_score, self_rev_score, w)     &lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this case, the pseudo-code is: &lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
   else&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''-''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer review score is 4/5, self review score is 5/5, w = 0.95), the self review score (5/5) differs by 25% of the peer review score (4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = 1/4 = 25%. Based on l - leniency, the instructor can decide:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) if a 25% deviation is sufficiently large to warrant penalizing the final grade by (1-w)*(SELF) (so that the final grade is '''79%''', instead of 80%).&lt;br /&gt;
&lt;br /&gt;
2) if a 25% deviation is sufficiently small to warrant keeping the final grade as grade = avg_peer_review_score, with no penalty for the deviation (so that the grade is '''80%''')&lt;br /&gt;
&lt;br /&gt;
3) if a 25% deviation is sufficiently small to warrant increasing the final grade by (1-w)*(SELF), where the SELF formula contains a 1 + ..., instead of a 1 - ... (so that the grade is '''81%''')&lt;br /&gt;
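The leniency variant can be sketched the same way (plain Ruby; the names, the 0-to-1 score scale, and the bonus flag are our assumptions, not existing Expertiza code):&lt;br /&gt;

```ruby
# Sketch of the leniency variant described above. Illustrative names only.
# Scores are fractions in 0.0..1.0; l (leniency) is a fractional threshold.
def composite_score_with_leniency(avg_peer, self_score, w, l, bonus: false)
  deviation = (avg_peer - self_score).abs / avg_peer
  if deviation <= l
    # Sufficiently close: keep the peer grade as-is, or reward the
    # closeness with the 1 + ... form of SELF if the instructor opts in.
    return avg_peer unless bonus
    w * avg_peer + (1 - w) * (avg_peer * (1 + deviation))
  else
    # Too far off: apply the usual 1 - ... penalty form of SELF.
    w * avg_peer + (1 - w) * (avg_peer * (1 - deviation))
  end
end

# Worked example (peer 4/5, self 5/5, w = 0.95, deviation = 25%):
puts composite_score_with_leniency(0.8, 1.0, 0.95, 0.10).round(4)              # 0.79 (penalty)
puts composite_score_with_leniency(0.8, 1.0, 0.95, 0.30).round(4)              # 0.8  (no penalty)
puts composite_score_with_leniency(0.8, 1.0, 0.95, 0.30, bonus: true).round(4) # 0.81 (reward)
```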
&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We can obtain the peer review ratings and the score/grade derived from them  by calling scores() and compute_assignment_score(), respectively, from assignment_participant.rb.&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
By being able to self review before peer review, it allows the author of the assignment to have an unbiased opinion on the quality of work they are submitting. When the judgement of their work is not influenced by others who have given feedback, they are able to get a clearer view of the strengths and weaknesses of their assignment. Self reviews before peer reviews can also be more beneficial to the user as it can show if the user has the correct or wrong approach to their solution compared to their peers. &lt;br /&gt;
&lt;br /&gt;
In the current implementation of self review, the user is able to see the peer reviews before they have made their own self review. This occurs because the scores page does not check that a self review has been submitted. To fix this, we will add a boolean parameter to self review and pass it to the viewing pages where it is used. When a user is at the student tasks view, the &amp;quot;Your scores&amp;quot; link will be disabled if the user has not completed their self review. If the user has completed their self review, then he/she will be redirected to the [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Display_Self-Review_Scores_w.2F_Peer-Reviews Display Self-Review Scores with Peer-Reviews] page.&lt;br /&gt;
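The gating check itself can be sketched in plain Ruby (the Participant stand-in and flag name are our assumptions; the real check would live on the Expertiza model backing the student tasks view):&lt;br /&gt;

```ruby
# Stand-in for the participant record; in Expertiza this flag would be
# set when the self review is submitted. Illustrative names only.
Participant = Struct.new(:self_review_submitted) do
  # The "Your scores" link is enabled only after the self review is in.
  def can_view_scores?
    self_review_submitted == true
  end
end

puts Participant.new(false).can_view_scores? # false
puts Participant.new(true).can_view_scores?  # true
```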
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
Our project will utilize various testing techniques. These methods of testing involve manual testing (black box testing) and RSpec testing (white box testing). &lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
The steps outlined for manual testing will become clearer upon implementation, but the proposed plan is the following:&lt;br /&gt;
&lt;br /&gt;
==== Prerequisite Steps ====&lt;br /&gt;
&lt;br /&gt;
:1. Log in to the development instance of Expertiza as an ''instructor''&lt;br /&gt;
:2. Create an assignment that allows for self-reviews&lt;br /&gt;
::2.1. Make sure the submission deadline is after the current date and time&lt;br /&gt;
::2.2. Likewise, the review deadline should then be greater than the submission deadline&lt;br /&gt;
:3. Add two students to the newly created assignment (theoretically ''student1'' and ''student2'')&lt;br /&gt;
:4. Sign in as ''student1'' &lt;br /&gt;
::4.1. Submit any file or link to the new assignment&lt;br /&gt;
:5. Repeat step 4 as ''student2''&lt;br /&gt;
:6. Log back in as the ''instructor''&lt;br /&gt;
::6.1. Alter the assignment's submission date to be in the past, enabling peer reviews&lt;br /&gt;
:7. Log back in as ''student2''&lt;br /&gt;
::7.1. Perform a peer-review on the submission from ''student1''&lt;br /&gt;
&lt;br /&gt;
==== Must Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
This test will assure that the logged in user cannot view their peer-reviews for a given assignment unless they have performed a self-review.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Attempt to view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot; within the assignment. This button should be disabled&lt;br /&gt;
:3. Perform a self-review&lt;br /&gt;
:4. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. This button should now be enabled&lt;br /&gt;
&lt;br /&gt;
==== Viewing Self Review Score Juxtaposed with Peer Review Scores ====&lt;br /&gt;
&lt;br /&gt;
This test confirms that the student's self-review scores are displayed with peer-review scores. It additionally confirms that self-review scores are considered in the review grading through the calculation of a composite score.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. If this button is disabled, user must self-review&lt;br /&gt;
::2.1. Assure that there is a column for self-review scores&lt;br /&gt;
::2.2. Confirm there is a composite score calculation underneath the average peer review score.&lt;br /&gt;
:3. Go back to the assignment view. Click &amp;quot;Alternate View&amp;quot;&lt;br /&gt;
::3.1. Confirm that there is a column in the grades table displaying composite score&lt;br /&gt;
::3.2. Check if there is a doughnut chart displaying the composite score&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
We plan to implement new RSpec tests to verify our implementations of the composite score calculation and the requirement to self-review first. Once written, we will be able to go more in-depth on the details of our testing. &lt;br /&gt;
&lt;br /&gt;
==== Test Composite Score Derivation ====&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
==== Test Requirement to Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135900</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135900"/>
		<updated>2020-10-21T14:30:10Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Implementation Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
==== Design Plan ====&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer review scores for each review question and an average of those peer-reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer review score. How we plan to derive a composite score is explained in detail in the follow section [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that the authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous page on the score view.&lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
[reference, DOI: 10.21125]&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from 1) the peer reviews and 2) how closely the self reviews match the peer reviews uses a type of additive scoring rule, which computes a weighted average between the team score (peer reviews) and the student rating (self review). More specifically, it uses a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews) and adds its weighted version to the weighted peer review score. This is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0%-100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows: &lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where: &lt;br /&gt;
avg_peer_review_score is the grade that Expertiza already derives from the peer review scores.&lt;br /&gt;
&lt;br /&gt;
and where:&lt;br /&gt;
w (weight, 0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the peer review scores; conversely, (1-w) is the proportion of the final grade determined by how closely the self review matches the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* The average peer review score is 4/5, the self review score is 5/5.&lt;br /&gt;
* The instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 ('''80%'''), is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = '''79%'''.&lt;br /&gt;
If the instructor instead chose w to equal 0.85, the grade would be '''77%''' (instead of 79%) because the deviation from the peer reviews carries a larger weight in the final grade.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula. It is basic in that deviations of the self review scores from the peer review scores can only decrease the final grade ''and'' a deviation will ''always'' decrease the final grade. We propose another parameter, l (leniency), as another way (in addition to w) for the instructor to modularly determine the final grade for an assignment. The leniency parameter sets a threshold (measured as a percentage deviation from the average peer review) such that the final grade is adjusted for a self review's deviation from the peer reviews only when the deviation reaches this threshold. If the deviation does not meet the threshold, no penalty is subtracted from the peer review grade. Alternatively, if the deviation does not meet the threshold (i.e., the self review score is sufficiently close to the peer review scores), the instructor can choose to add points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply needs to pick l as a percentage (similar to w). To recap: w should be chosen as the instructor's desired percentage ''of the final grade'' to be determined by the peer reviews; conversely, (1-w) is the percentage ''of the final grade'' to be determined by the extent to which the self review deviates from the peer reviews. In addition, l (leniency) should be chosen as the percentage ''deviation of the self review from the peer reviews'' below which no grade deduction is applied (or even a grade increase, if the instructor wants to reward individuals with a small deviation).&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for the case where an instructor does not wish to subtract from the final grade when the deviation is sufficiently small. Notice that the leniency condition compares against the same relative deviation of the self review from the peer reviews that already appears in the grading formula's SELF term:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = avg_peer_review_score&lt;br /&gt;
   else&lt;br /&gt;
      run formula&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition, instead of assigning a final grade equal to the avg_peer_review_score when the leniency condition is met, the grade can be adjusted (increased) if the instructor wishes, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula for determining the final grade would then add the small deviation to the final grade rather than subtracting it (in SELF, 1 - ... is changed to 1 + ...):&lt;br /&gt;
   function(avg_peer_rev_score, self_rev_score, w)     &lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this case, the pseudo-code is: &lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
   else&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''-''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer review score is 4/5, self review score is 5/5, w = 0.95), the self review score (5/5) differs by 25% of the peer review score (4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = 1/4 = 25%. Based on l - leniency, the instructor can decide:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) if a 25% deviation is sufficiently large to warrant penalizing the final grade by (1-w)*(SELF) (so that the final grade is '''79%''', instead of 80%).&lt;br /&gt;
&lt;br /&gt;
2) if a 25% deviation is sufficiently small to warrant keeping the final grade as grade = avg_peer_review_score, with no penalty for the deviation (so that the grade is '''80%''')&lt;br /&gt;
&lt;br /&gt;
3) if a 25% deviation is sufficiently small to warrant increasing the final grade by (1-w)*(SELF), where the SELF formula contains a 1 + ..., instead of a 1 - ... (so that the grade is '''81%''')&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We can obtain the peer review ratings and the score/grade derived from them  by calling scores() and compute_assignment_score(), respectively, from assignment_participant.rb.&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
By being able to self review before peer review, it allows the author of the assignment to have an unbiased opinion on the quality of work they are submitting. When the judgement of their work is not influenced by others who have given feedback, they are able to get a clearer view of the strengths and weaknesses of their assignment. Self reviews before peer reviews can also be more beneficial to the user as it can show if the user has the correct or wrong approach to their solution compared to their peers. &lt;br /&gt;
&lt;br /&gt;
In the current implementation of self review, the user is able to see the peer reviews before they have made their own self review. This occurs because the scores page does not check that a self review has been submitted. To fix this, we will add a boolean parameter to self review and pass it to the viewing pages where it is used. When a user is at the student tasks view, the &amp;quot;Your scores&amp;quot; link will be disabled if the user has not completed their self review. If the user has completed their self review, then he/she will be redirected to the [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Display_Self-Review_Scores_w.2F_Peer-Reviews Display Self-Review Scores with Peer-Reviews] page.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
Our project will utilize two testing techniques: manual testing (black-box testing) and RSpec testing (white-box testing). &lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
The steps outlined for manual testing will become clearer upon implementation, but the proposed plan is the following:&lt;br /&gt;
&lt;br /&gt;
==== Prerequisite Steps ====&lt;br /&gt;
&lt;br /&gt;
:1. Log in to the development instance of Expertiza as an ''instructor''&lt;br /&gt;
:2. Create an assignment that allows for self-reviews&lt;br /&gt;
::2.1. Make sure the submission deadline is after the current date and time&lt;br /&gt;
::2.2. Likewise, the review deadline should then be greater than the submission deadline&lt;br /&gt;
:3. Add two students to the newly created assignment (theoretically ''student1'' and ''student2'')&lt;br /&gt;
:4. Sign in as ''student1''&lt;br /&gt;
::4.1. Submit any file or link to the new assignment&lt;br /&gt;
:5. Repeat step 4 as ''student2''&lt;br /&gt;
:6. Log back in as the ''instructor''&lt;br /&gt;
::6.1. Alter the assignment's submission date to be in the past, enabling peer reviews&lt;br /&gt;
:7. Log back in as ''student2''&lt;br /&gt;
::7.1. Perform a peer-review on the submission from ''student1''&lt;br /&gt;
&lt;br /&gt;
==== Must Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
This test will assure that the logged in user cannot view their peer-reviews for a given assignment unless they have performed a self-review.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Attempt to view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot; within the assignment. This button should be disabled&lt;br /&gt;
:3. Perform a self-review&lt;br /&gt;
:4. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. This button should now be enabled&lt;br /&gt;
&lt;br /&gt;
==== Viewing Self Review Score Juxtaposed with Peer Review Scores ====&lt;br /&gt;
&lt;br /&gt;
This test confirms that the student's self-review scores are displayed with peer-review scores. It additionally confirms that self-review scores are considered in the review grading through the calculation of a composite score.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. If this button is disabled, user must self-review&lt;br /&gt;
::2.1. Assure that there is a column for self-review scores&lt;br /&gt;
::2.2. Confirm there is a composite score calculation underneath the average peer review score.&lt;br /&gt;
:3. Go back to the assignment view. Click &amp;quot;Alternate View&amp;quot;&lt;br /&gt;
::3.1. Confirm that there is a column in the grades table displaying composite score&lt;br /&gt;
::3.2. Check if there is a doughnut chart displaying the composite score&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135897</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135897"/>
		<updated>2020-10-21T14:27:11Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Implementation Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
==== Design Plan ====&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer review scores for each review question and an average of those peer-reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer review score. How we plan to derive a composite score is explained in detail in the following section [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that authors earn more points as their self-reviews come closer to the scores given by the peer reviewers. The function should therefore take the scores given by peers for a particular rubric criterion and the score the author gave themselves. The result of the formula should be displayed in a conspicuous place on the score view.&lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
&lt;br /&gt;
[reference, DOI: 10.21125]&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from (1) the peer reviews and (2) how closely the self review matches the peer reviews uses a type of additive scoring rule, which computes a weighted average between the team score (peer reviews) and the student rating (self review). More specifically, it uses a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews) and adds this weighted term to the weighted peer-review score. This is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0% to 100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows: &lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where: &lt;br /&gt;
avg_peer_review_score is simply the mechanism already existing in Expertiza for assigning a grade from peer review scores.&lt;br /&gt;
&lt;br /&gt;
and where:&lt;br /&gt;
w - weight - (0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the original grade determination (the peer-review scores); conversely, (1-w) is the proportion of the final grade determined by how close the self review is to the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* The average peer review score is 4/5, the self review score is 5/5.&lt;br /&gt;
* The instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 ('''80%''') is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = '''79%'''.&lt;br /&gt;
If the instructor chose w to equal 0.85 (instead of 0.95), the grade is '''77%''' (instead of 79%) because deviation from peer reviews is a larger weighted value of the final grade.&lt;br /&gt;
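The arithmetic above can be checked with a short script (a sketch only; the function name is illustrative and scores are expressed as fractions of the maximum):&lt;br /&gt;

```ruby
# Basic weighted formula from above, with scores on a 0-1 scale.
def final_grade(avg_peer_review_score, self_review_score, w)
  deviation = (avg_peer_review_score - self_review_score).abs / avg_peer_review_score
  w * avg_peer_review_score +
    (1 - w) * (avg_peer_review_score * (1 - deviation))
end

final_grade(0.8, 1.0, 0.95)  # ~0.79, i.e. 79%
final_grade(0.8, 1.0, 0.85)  # ~0.77, i.e. 77%
```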
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula: a deviation of the self-review scores from the peer-review scores can only decrease the final grade, and any deviation ''always'' decreases it. We propose another parameter, l (leniency), as an additional way (besides w, the weight) for the instructor to modularly determine the final grade for an assignment.&lt;br /&gt;
&lt;br /&gt;
Leniency sets a threshold, measured as percentage deviation from the average peer review, below which no penalty is subtracted: the final grade accounts for the self review's deviation only when the deviation reaches this threshold. Moreover, when the deviation falls below the threshold (the self-review score is sufficiently close to the peer-review scores), the instructor can choose to add points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply picks l as a percentage, similar to the functionality of w.&lt;br /&gt;
&lt;br /&gt;
To recap: w should be chosen as the instructor's desired percentage ''of the final grade'' to be determined from peer reviews, with (1-w) the percentage determined by the extent to which self reviews deviate from peer reviews. Leniency l should be chosen as the percentage ''of the deviation of self review from peer review'' below which the deviation causes no grade deduction (or even a small grade increase, if the instructor wants to reward a sufficiently close self review).&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for the case where an instructor wishes not to subtract from the final grade if the deviation is sufficiently small. Notice that the leniency condition, the instructor's chosen percentage ''of the deviation of self review from peer review'', compares directly against the deviation term already present in SELF:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l (leniency)&lt;br /&gt;
      grade = avg_peer_review_score&lt;br /&gt;
   else&lt;br /&gt;
      apply the full formula above&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition, instead of assigning a final grade equal to avg_peer_review_score when the leniency condition is met, the grade can be increased if the instructor wishes, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula for the final grade would then add the small deviation to the grade rather than subtracting it (in SELF, 1 - ... is changed to 1 + ...):&lt;br /&gt;
   function(avg_peer_rev_score, self_rev_score, w)     &lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this case, the pseudo-code is: &lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
   else&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''-''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer review score is 4/5, self review score is 5/5, w = 0.95), the self review score (5/5) differs by 25% of the peer review score (4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = 1/4 = 25%. Based on l - leniency, the instructor can decide:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) whether a 25% deviation is sufficiently large to warrant penalizing the final grade through the (1-w)*(SELF) term (so that the final grade is '''79%''' instead of 80%);&lt;br /&gt;
&lt;br /&gt;
2) whether a 25% deviation is sufficiently small to warrant keeping the final grade at avg_peer_review_score, with no penalty for the deviation (so that the grade is '''80%''');&lt;br /&gt;
&lt;br /&gt;
3) whether a 25% deviation is sufficiently small to warrant increasing the final grade through the (1-w)*(SELF) term, where SELF contains 1 + ... instead of 1 - ... (so that the grade is '''81%''').&lt;br /&gt;
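The three options above can be checked with a short Ruby sketch. This is only an illustration of the proposed formula, not existing Expertiza code; the function name and the bonus flag are our own choices, and all scores are expressed as fractions between 0 and 1.&lt;br /&gt;

```ruby
# Illustrative sketch of the proposed composite grade (not Expertiza code).
# avg_peer and self_score are fractions in 0..1, w is the peer-review weight,
# l is the leniency threshold, and bonus_when_lenient selects option 3.
def composite_grade(avg_peer, self_score, w, l, bonus_when_lenient: false)
  deviation = (avg_peer - self_score).abs / avg_peer # relative deviation
  if deviation <= l
    # Option 2: deviation is within the leniency threshold, no penalty.
    return avg_peer unless bonus_when_lenient
    # Option 3: small deviation earns a bonus (SELF uses 1 + deviation).
    self_part = avg_peer * (1 + deviation)
  else
    # Option 1: deviation exceeds the threshold (SELF uses 1 - deviation).
    self_part = avg_peer * (1 - deviation)
  end
  w * avg_peer + (1 - w) * self_part
end

# Worked example from the text: avg peer review 4/5, self review 5/5, w = 0.95.
puts composite_grade(0.8, 1.0, 0.95, 0.0)                            # ~0.79
puts composite_grade(0.8, 1.0, 0.95, 0.3)                            # 0.8
puts composite_grade(0.8, 1.0, 0.95, 0.3, bonus_when_lenient: true)  # ~0.81
```

With l = 0 the sketch reduces to the basic formula; raising l widens the band of deviations that go unpenalized.&lt;br /&gt;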
&lt;br /&gt;
&lt;br /&gt;
To incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We can obtain the peer review ratings, and the score/grade derived from them, by calling scores() and compute_assignment_score(), respectively, in assignment_participant.rb.&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
==== Implementation Plan ====&lt;br /&gt;
Requiring a self review before viewing peer reviews allows the author of the assignment to form an unbiased opinion of the quality of the work they are submitting. When the judgement of their work is not influenced by the feedback of others, they get a clearer view of the strengths and weaknesses of their assignment. Completing a self review before seeing peer reviews can also be more beneficial to the user, since the comparison shows whether the user's approach to the solution matches that of their peers.&lt;br /&gt;
&lt;br /&gt;
In the current implementation of self review, the user is able to see the peer reviews before they have completed their own self review. This occurs because the scores page does not check whether a self review has been submitted. To fix this issue, we will add a boolean parameter to self review and pass it to the views where it is needed. On the student tasks view, the &amp;quot;Your scores&amp;quot; link will be disabled if the user has not filled out their self review. If the user has filled out their self review, then he/she will be redirected to the [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Display_Self-Review_Scores_w.2F_Peer-Reviews Display Self-Review Scores with Peer-Reviews] page.&lt;br /&gt;
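The gating logic described above can be sketched as follows. This is only a sketch under assumed names (StudentTaskPresenter and self_review_submitted are hypothetical, not actual Expertiza identifiers); the real change would live in the student task view and its controller.&lt;br /&gt;

```ruby
# Hypothetical sketch of the boolean gate for the "Your scores" link.
# StudentTaskPresenter and self_review_submitted are illustrative names,
# not actual Expertiza identifiers.
class StudentTaskPresenter
  def initialize(self_review_submitted)
    @self_review_submitted = self_review_submitted
  end

  # The view enables the "Your scores" link only when this returns true.
  def scores_link_enabled?
    @self_review_submitted
  end
end

puts StudentTaskPresenter.new(false).scores_link_enabled? # false
puts StudentTaskPresenter.new(true).scores_link_enabled?  # true
```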
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
Our project will utilize two testing techniques: manual testing (black-box testing) and RSpec testing (white-box testing).&lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
The steps outlined for manual testing will become clearer upon implementation, but the proposed plan is the following:&lt;br /&gt;
&lt;br /&gt;
==== Prerequisite Steps ====&lt;br /&gt;
&lt;br /&gt;
:1. Log in to the development instance of Expertiza as an ''instructor''&lt;br /&gt;
:2. Create an assignment that allows for self-reviews&lt;br /&gt;
::2.1. Make sure the submission deadline is after the current date and time&lt;br /&gt;
::2.2. Likewise, the review deadline should then be greater than the submission deadline&lt;br /&gt;
:3. Add two students to the newly created assignment (theoretically ''student1'' and ''student2'')&lt;br /&gt;
:4. Sign in as ''student1''&lt;br /&gt;
::4.1. Submit any file or link to the new assignment&lt;br /&gt;
:5. Repeat step 4 as ''student2''&lt;br /&gt;
:6. Log back in as the ''instructor''&lt;br /&gt;
::6.1. Alter the assignment's submission date to be in the past, enabling peer reviews&lt;br /&gt;
:7. Log back in as ''student2''&lt;br /&gt;
::7.1. Perform a peer-review on the submission from ''student1''&lt;br /&gt;
&lt;br /&gt;
==== Must Review Self before Viewing Peer Reviews ====&lt;br /&gt;
&lt;br /&gt;
This test ensures that the logged-in user cannot view their peer reviews for a given assignment unless they have performed a self-review.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Attempt to view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot; within the assignment. This button should be disabled&lt;br /&gt;
:3. Perform a self-review&lt;br /&gt;
:4. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. This button should now be enabled&lt;br /&gt;
&lt;br /&gt;
==== Viewing Self Review Score Juxtaposed with Peer Review Scores ====&lt;br /&gt;
&lt;br /&gt;
This test confirms that the student's self-review scores are displayed with peer-review scores. It additionally confirms that self-review scores are considered in review grading through the calculation of a composite score.&lt;br /&gt;
&lt;br /&gt;
:1. Log in as ''student1''&lt;br /&gt;
:2. Go view peer-review scores by clicking on &amp;quot;Your Scores&amp;quot;. If this button is disabled, user must self-review&lt;br /&gt;
::2.1. Assure that there is a column for self-review scores&lt;br /&gt;
::2.2. Confirm there is a composite score calculation underneath the average peer review score.&lt;br /&gt;
:3. Go back to the assignment view. Click &amp;quot;Alternate View&amp;quot;&lt;br /&gt;
::3.1. Confirm that there is a column in the grades table displaying composite score&lt;br /&gt;
::3.2. Check that there is a doughnut chart displaying the composite score&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135792</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135792"/>
		<updated>2020-10-20T16:32:13Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Derive Composite Score */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
'''Design Plan'''&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer review scores for each review question and an average of those peer-reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer review score. How we plan to derive a composite score is explained in detail in the following section [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that the authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous place on the score view.&lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
[reference, DOI: 10.21125]&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from (1) the peer reviews and (2) how closely the self reviews match the peer reviews uses a type of additive scoring rule, which computes a weighted average between the team score (peer reviews) and the student rating (self review). More specifically, it uses a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews) and adds its weighted version to the weighted peer review score. This is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0% to 100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows:&lt;br /&gt;
   function(avg_peer_rev_score, self_rev_score, w)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where: &lt;br /&gt;
avg_peer_review_score is the grade produced by the mechanism already existing in Expertiza for assigning a grade from peer review scores.&lt;br /&gt;
&lt;br /&gt;
and where:&lt;br /&gt;
w - weight - (0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the original grade determination, the peer review scores; conversely, (1-w) is the proportion of the final grade determined by the closeness of the self review to the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* The average peer review score is 4/5, the self review score is 5/5.&lt;br /&gt;
* The instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 ('''80%'''), is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = '''79%'''.&lt;br /&gt;
If the instructor instead chose w to equal 0.85, the grade would be '''77%''' (instead of 79%), because the deviation from peer reviews carries a larger weight in the final grade.&lt;br /&gt;
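The example can be verified with a short Ruby sketch of the basic formula. The function name is our own illustrative choice, not existing Expertiza code; scores are fractions in 0..1.&lt;br /&gt;

```ruby
# Minimal sketch of the basic (penalty-only) formula above; not Expertiza code.
def basic_composite_grade(avg_peer, self_score, w)
  self_part = avg_peer * (1 - (avg_peer - self_score).abs / avg_peer) # SELF
  w * avg_peer + (1 - w) * self_part
end

# avg peer review 4/5, self review 5/5:
puts basic_composite_grade(0.8, 1.0, 0.95) # ~0.79
puts basic_composite_grade(0.8, 1.0, 0.85) # ~0.77
```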
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula: deviations of the self review scores from the peer review scores can only decrease the final grade, ''and'' any deviation ''always'' decreases it. We propose another parameter, l - leniency - as an additional way (besides w - weight) for the instructor to modularly determine the final grade for an assignment. Leniency sets a threshold, measured as a percentage deviation from the average peer review, and the final grade adjusts for the self review's deviation from the peer reviews only when the deviation reaches this threshold. If the deviation does not meet the threshold, no penalty is subtracted from the peer review score. Alternatively, if the deviation does not meet the threshold (i.e., the self review score is sufficiently close to the peer review scores), the instructor can choose to add points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply picks l - leniency - as a percentage (similar to the functionality of w). To recap: w should be chosen as the instructor's desired percentage (w) ''of the final grade'' to be determined from peer reviews; conversely, (1-w) is the percentage ''of the final grade'' determined by the extent to which self reviews deviate from peer reviews. Likewise, l - leniency - should be chosen as the percentage ''of the deviation of self review from peer review'' below which no grade deduction is applied (or below which the grade is even increased, if the instructor wants to reward individuals with a small deviation).&lt;br /&gt;
&lt;br /&gt;
The following pseudo-code covers the case in which an instructor does not wish to subtract from the final grade when the deviation is sufficiently small. Notice that the leniency condition, the instructor's desired percentage ''of the deviation of self review from peer review'', is naturally part of the grading formula in SELF:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = avg_peer_review_score&lt;br /&gt;
   else&lt;br /&gt;
      run formula&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition, instead of assigning a final grade equal to avg_peer_review_score when the leniency condition is met, the grade can be adjusted upward if the instructor wishes, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula for the final grade would then add the small deviation to the final grade rather than subtracting it (in SELF, 1 - ... is changed to 1 + ...):&lt;br /&gt;
   function(avg_peer_rev_score, self_rev_score, w)     &lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this case, the pseudo-code is: &lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
   else&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''-''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer review score is 4/5, self review score is 5/5, w = 0.95), the self review score (5/5) differs by 25% of the peer review score (4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = 1/4 = 25%. Based on l - leniency, the instructor can decide:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) whether a 25% deviation is sufficiently large to warrant penalizing the final grade through the (1-w)*(SELF) term (so that the final grade is '''79%''' instead of 80%);&lt;br /&gt;
&lt;br /&gt;
2) whether a 25% deviation is sufficiently small to warrant keeping the final grade at avg_peer_review_score, with no penalty for the deviation (so that the grade is '''80%''');&lt;br /&gt;
&lt;br /&gt;
3) whether a 25% deviation is sufficiently small to warrant increasing the final grade through the (1-w)*(SELF) term, where SELF contains 1 + ... instead of 1 - ... (so that the grade is '''81%''').&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We can obtain the peer review ratings, and the score/grade derived from them, by calling scores() and compute_assignment_score(), respectively, in assignment_participant.rb.&lt;br /&gt;
** Ask for Dr G's input about Expertiza's file structure/design and if the previous group got it correct. (Their diagram is hard to follow without a thorough background.) **&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135791</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135791"/>
		<updated>2020-10-20T16:31:47Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Derive Composite Score */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
'''Design Plan'''&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer review scores for each review question and an average of those peer-reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer review score. How we plan to derive a composite score is explained in detail in the following section [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that the authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous place on the score view.&lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
[reference, DOI: 10.21125]&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from (1) the peer reviews and (2) how closely the self reviews match the peer reviews uses a type of additive scoring rule, which computes a weighted average between the team score (peer reviews) and the student rating (self review). More specifically, it uses a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews) and adds its weighted version to the weighted peer review score. This is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0% to 100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows:&lt;br /&gt;
   function(avg_peer_rev_score, self_rev_score, w)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where: &lt;br /&gt;
avg_peer_review_score is the grade produced by the mechanism already existing in Expertiza for assigning a grade from peer review scores.&lt;br /&gt;
&lt;br /&gt;
and where:&lt;br /&gt;
w - weight - (0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the original grade determination, the peer review scores; conversely, (1-w) is the proportion of the final grade determined by the closeness of the self review to the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* The average peer review score is 4/5, the self review score is 5/5.&lt;br /&gt;
* The instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 ('''80%'''), is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = '''79%'''.&lt;br /&gt;
If the instructor instead chose w to equal 0.85, the grade would be '''77%''' (instead of 79%), because the deviation from peer reviews carries a larger weight in the final grade.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula: deviations of the self review scores from the peer review scores can only decrease the final grade, ''and'' any deviation ''always'' decreases it. We propose another parameter, l - leniency - as an additional way (besides w - weight) for the instructor to modularly determine the final grade for an assignment. Leniency sets a threshold, measured as a percentage deviation from the average peer review, and the final grade adjusts for the self review's deviation from the peer reviews only when the deviation reaches this threshold. If the deviation does not meet the threshold, no penalty is subtracted from the peer review score. Alternatively, if the deviation does not meet the threshold (i.e., the self review score is sufficiently close to the peer review scores), the instructor can choose to add points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply picks l - leniency - as a percentage (similar to the functionality of w). To recap: w should be chosen as the instructor's desired percentage (w) ''of the final grade'' to be determined from peer reviews; conversely, (1-w) is the percentage ''of the final grade'' determined by the extent to which self reviews deviate from peer reviews. Likewise, l - leniency - should be chosen as the percentage ''of the deviation of self review from peer review'' below which no grade deduction is applied (or below which the grade is even increased, if the instructor wants to reward individuals with a small deviation).&lt;br /&gt;
&lt;br /&gt;
The following pseudo-code covers the case in which an instructor does not wish to subtract from the final grade when the deviation is sufficiently small. Notice that the leniency condition, the instructor's desired percentage ''of the deviation of self review from peer review'', is naturally part of the grading formula in SELF:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = avg_peer_review_score&lt;br /&gt;
   else&lt;br /&gt;
      run formula&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition, instead of assigning a final grade equal to avg_peer_review_score when the leniency condition is met, the grade can be adjusted upward if the instructor wishes, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula for the final grade would then add the small deviation to the final grade rather than subtracting it (in SELF, 1 - ... is changed to 1 + ...):&lt;br /&gt;
   function(avg_peer_rev_score, self_rev_score, w)     &lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this case, the pseudo-code is: &lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
   else&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''-''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer review score is 4/5, self review score is 5/5, w = 0.95), the self review score (5/5) differs by 25% of the peer review score (4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = 1/4 = 25%. Based on l - leniency, the instructor can decide:&lt;br /&gt;
1) if a 25% deviation is sufficiently large to warrant penalizing the final grade by (1-w)*(SELF) (so that the final grade is '''79%''', instead of 80%).&lt;br /&gt;
2) if a 25% deviation is sufficiently small to warrant keeping the final grade as grade = avg_peer_review_score, with no penalty for the deviation (so that the grade is '''80%''')&lt;br /&gt;
3) if a 25% deviation is sufficiently small to warrant increasing the final grade by (1-w)*(SELF), where the SELF formula contains a 1 + ..., instead of a 1 - ... (so that the grade is '''81%''')&lt;br /&gt;
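&lt;br /&gt;
The three instructor options enumerated above can be sketched in Ruby (the language of the Expertiza codebase). This is a hypothetical illustration only: the method name and the mode parameter are our own inventions, not existing Expertiza code.&lt;br /&gt;

```ruby
# Hypothetical sketch of the leniency-extended grading formula described above.
# Names and the 'mode' parameter are illustrative, not actual Expertiza code.
# mode selects among the three instructor options: :penalize, :ignore, :reward.
def composite_grade_with_leniency(avg_peer, self_score, w, l, mode)
  deviation = (avg_peer - self_score).abs / avg_peer
  if deviation.between?(0, l)
    # Leniency condition met: the self review is close to the peer reviews.
    return avg_peer if mode == :ignore     # option 2: no adjustment
    sign = (mode == :reward ? 1 : -1)      # option 3 adds; option 1 subtracts
  else
    sign = -1                              # deviation too large: always penalize
  end
  w * avg_peer + (1 - w) * (avg_peer * (1 + sign * deviation))
end

# Example from the text: peers average 4/5, self review 5/5, w = 0.95, l = 0.30.
puts((composite_grade_with_leniency(0.8, 1.0, 0.95, 0.30, :penalize) * 100).round)  # prints 79
puts((composite_grade_with_leniency(0.8, 1.0, 0.95, 0.30, :ignore)   * 100).round)  # prints 80
puts((composite_grade_with_leniency(0.8, 1.0, 0.95, 0.30, :reward)   * 100).round)  # prints 81
```
&lt;br /&gt;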
&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We can obtain the peer review ratings and the score/grade derived from them  by calling scores() and compute_assignment_score(), respectively, from assignment_participant.rb. &lt;br /&gt;
** Ask for Dr G's input about Expertiza's file structure/design and if the previous group got it correct. (Their diagram is hard to follow without a thorough background.) **&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135790</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135790"/>
		<updated>2020-10-20T16:29:23Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Derive Composite Score */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
'''Design Plan'''&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer review scores for each review question and an average of those peer-reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer review score. How we plan to derive a composite score is explained in detail in the following section, [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that the authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous page on the score view.&lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
[reference, DOI: 10.21125]&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from 1) the peer reviews and 2) how closely the self reviews match the peer reviews uses a type of additive scoring rule, which computes a weighted average between the team score (peer reviews) and the student rating (self review). More specifically, it uses a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews) and adds its weighted version to the weighted peer review score. This is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0%-100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows:&lt;br /&gt;
   function(avg_peer_rev_score, self_rev_score, w)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where: &lt;br /&gt;
avg_peer_review_score is simply the mechanism already existing in Expertiza for assigning a grade from peer review scores.&lt;br /&gt;
&lt;br /&gt;
and where:&lt;br /&gt;
w - weight - (0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the original grading mechanism (the peer review scores); the remaining proportion, 1-w, is determined by how closely the self review matches the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* The average peer review score is 4/5, the self review score is 5/5.&lt;br /&gt;
* The instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 ('''80%''') is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = '''79%'''.&lt;br /&gt;
If the instructor instead chose w to equal 0.85, the grade would be '''77%''' (instead of 79%), because the deviation from the peer reviews carries a larger weight (1-w = 0.15) in the final grade.&lt;br /&gt;
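&lt;br /&gt;
The worked example above can be sketched in Ruby (the language of the Expertiza codebase). This is a hypothetical illustration: the method name and calling code are ours, not existing Expertiza code.&lt;br /&gt;

```ruby
# Hypothetical sketch of the basic composite-grade formula described above.
# The method name and variables are illustrative, not actual Expertiza code.
def basic_composite_grade(avg_peer_review_score, self_review_score, w)
  # Percentage deviation of the self review from the average peer review.
  deviation = (avg_peer_review_score - self_review_score).abs / avg_peer_review_score
  # SELF term: the peer average scaled down by the deviation.
  self_component = avg_peer_review_score * (1 - deviation)
  # Weighted mix of the peer average and the SELF term.
  w * avg_peer_review_score + (1 - w) * self_component
end

# Worked example from the text: peers average 4/5, self review 5/5, w = 0.95.
puts((basic_composite_grade(0.8, 1.0, 0.95) * 100).round)  # prints 79
```
&lt;br /&gt;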
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula: a deviation of the self review score from the peer review scores can ''only'' decrease the final grade, and any deviation will ''always'' decrease it. We propose an additional parameter, l (leniency), as another way (besides w, the weight) for the instructor to modularly determine the final grade for an assignment. Leniency sets a threshold, measured as a percentage deviation from the average peer review, and the final grade is adjusted for the self review's deviation from the peer reviews only when the deviation reaches this threshold. If the deviation does not reach the threshold, no penalty is subtracted from the peer review grade. Moreover, when the deviation does not reach the threshold (the self review score is sufficiently close to the peer review scores), the instructor can instead choose to add points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply picks l as a percentage, just as with w. To recap: w should be chosen as the instructor's desired percentage ''of the final grade'' to be determined by peer reviews and, conversely, (1-w) as the percentage ''of the final grade'' to be determined by the extent to which self reviews deviate from peer reviews. Leniency, l, should be chosen as the largest percentage deviation ''of the self review from the peer review'' that results in no grade deduction (or even a grade increase, if the instructor wants to reward individuals whose self reviews deviate only slightly).&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for the case where an instructor wishes not to subtract from the final grade when the deviation is sufficiently small. Notice that the leniency condition, the instructor's desired percentage ''of the deviation of self review from peer review'', appears naturally in the grading formula's SELF term:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = avg_peer_review_score&lt;br /&gt;
   else&lt;br /&gt;
      run formula&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Alternatively, instead of assigning a final grade equal to avg_peer_review_score when the leniency condition is met, the grade can be increased if the instructor wishes, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula for determining the final grade would then add the small deviation to the final grade rather than subtracting it (in SELF, 1 - ... is changed to 1 + ...):&lt;br /&gt;
   function(avg_peer_rev_score, self_rev_score, w)     &lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this case, the pseudo-code is: &lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
   else&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''-''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer review score is 4/5, self review score is 5/5, w = 0.95), the self review score differs by 1/5, which is 25% of the peer review score (4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = (1/5)/(4/5) = 1/4 = 25%. Based on l - leniency - the instructor can decide:&lt;br /&gt;
1) if a 25% deviation is sufficiently large to warrant penalizing the final grade by (1-w)*(SELF) (so that the final grade is '''79%''', instead of 80%).&lt;br /&gt;
2) if a 25% deviation is sufficiently small to warrant keeping the final grade as grade = avg_peer_review_score, with no penalty for the deviation (so that the grade is '''80%''')&lt;br /&gt;
3) if a 25% deviation is sufficiently small to warrant increasing the final grade by (1-w)*(SELF), where the SELF formula contains a 1 + ..., instead of a 1 - ... (so that the grade is '''81%''')&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We can obtain the peer review ratings and the score/grade derived from them  by calling scores() and compute_assignment_score(), respectively, from assignment_participant.rb. &lt;br /&gt;
** Ask for Dr G's input about Expertiza's file structure/design and if the previous group got it correct. (Their diagram is hard to follow without a thorough background.) **&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135789</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135789"/>
		<updated>2020-10-20T16:26:38Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Derive Composite Score */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
'''Design Plan'''&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer review scores for each review question and an average of those peer-reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer review score. How we plan to derive a composite score is explained in detail in the following section, [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that the authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous page on the score view.&lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
[reference, DOI: 10.21125]&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from 1) the peer reviews and 2) how closely the self reviews match the peer reviews uses a type of additive scoring rule, which computes a weighted average between the team score (peer reviews) and the student rating (self review). More specifically, it uses a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews) and adds its weighted version to the weighted peer review score. This is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0%-100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows:&lt;br /&gt;
   function(avg_peer_rev_score, self_rev_score, w)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where: &lt;br /&gt;
avg_peer_review_score is simply the mechanism already existing in Expertiza for assigning a grade from peer review scores.&lt;br /&gt;
&lt;br /&gt;
and where:&lt;br /&gt;
w - weight - (0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the original grading mechanism (the peer review scores); the remaining proportion, 1-w, is determined by how closely the self review matches the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* The average peer review score is 4/5, the self review score is 5/5.&lt;br /&gt;
* The instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 ('''80%''') is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = '''79%'''.&lt;br /&gt;
If the instructor instead chose w to equal 0.85, the grade would be '''77%''' (instead of 79%), because the deviation from the peer reviews carries a larger weight (1-w = 0.15) in the final grade.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula: a deviation of the self review score from the peer review scores can ''only'' decrease the final grade, and any deviation will ''always'' decrease it. We propose an additional parameter, l (leniency), as another way (besides w, the weight) for the instructor to modularly determine the final grade for an assignment. Leniency sets a threshold, measured as a percentage deviation from the average peer review, and the final grade is adjusted for the self review's deviation from the peer reviews only when the deviation reaches this threshold. If the deviation does not reach the threshold, no penalty is subtracted from the peer review grade. Moreover, when the deviation does not reach the threshold (the self review score is sufficiently close to the peer review scores), the instructor can instead choose to add points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply picks l as a percentage, just as with w. To recap: w should be chosen as the instructor's desired percentage ''of the final grade'' to be determined by peer reviews and, conversely, (1-w) as the percentage ''of the final grade'' to be determined by the extent to which self reviews deviate from peer reviews. Leniency, l, should be chosen as the largest percentage deviation ''of the self review from the peer review'' that results in no grade deduction (or even a grade increase, if the instructor wants to reward individuals whose self reviews deviate only slightly).&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for the case where an instructor wishes not to subtract from the final grade when the deviation is sufficiently small:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = avg_peer_review_score&lt;br /&gt;
   else&lt;br /&gt;
      run formula&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Alternatively, instead of assigning a final grade equal to avg_peer_review_score when the leniency condition is met, the grade can be increased if the instructor wishes, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula for determining the final grade would then add the small deviation to the final grade rather than subtracting it (in SELF, 1 - ... is changed to 1 + ...):&lt;br /&gt;
   function(avg_peer_rev_score, self_rev_score, w)     &lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this case, the pseudo-code is: &lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
   else&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 '''-''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer review score is 4/5, self review score is 5/5, w = 0.95), the self review score differs by 1/5, which is 25% of the peer review score (4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = (1/5)/(4/5) = 1/4 = 25%. Based on l - leniency - the instructor can decide:&lt;br /&gt;
1) if a 25% deviation is sufficiently large to warrant penalizing the final grade by (1-w)*(SELF) (so that the final grade is '''79%''', instead of 80%).&lt;br /&gt;
2) if a 25% deviation is sufficiently small to warrant keeping the final grade as grade = avg_peer_review_score, with no penalty for the deviation (so that the grade is '''80%''')&lt;br /&gt;
3) if a 25% deviation is sufficiently small to warrant increasing the final grade by (1-w)*(SELF), where the SELF formula contains a 1 + ..., instead of a 1 - ... (so that the grade is '''81%''')&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We can obtain the peer review ratings and the score/grade derived from them  by calling scores() and compute_assignment_score(), respectively, from assignment_participant.rb. &lt;br /&gt;
** Ask for Dr G's input about Expertiza's file structure/design and if the previous group got it correct. (Their diagram is hard to follow without a thorough background.) **&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135788</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135788"/>
		<updated>2020-10-20T16:25:56Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Derive Composite Score */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
'''Design Plan'''&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer-review scores for each review question and an average of those peer reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer-review score. How we plan to derive a composite score is explained in detail in the following section: [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that the authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous place on the score view.&lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
[reference, DOI: 10.21125]&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from 1) the peer reviews and 2) how closely the self review matches the peer reviews is a type of additive scoring rule: it computes a weighted average of the team score (peer reviews) and the student rating (self review). More specifically, it is a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews) and adds this weighted term to the weighted peer-review score. This approach is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0% to 100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows:&lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where:&lt;br /&gt;
avg_peer_review_score is the grade that the mechanism already existing in Expertiza derives from the peer-review scores.&lt;br /&gt;
&lt;br /&gt;
and where:&lt;br /&gt;
w (weight, 0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the peer-review scores; conversely, (1-w) is the proportion determined by how closely the self review matches the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* The average peer review score is 4/5, the self review score is 5/5.&lt;br /&gt;
* The instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 ('''80%''') is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = '''79%'''&lt;br /&gt;
If the instructor instead chose w to equal 0.85, the grade would be '''77%''' (instead of 79%), because the deviation from the peer reviews carries a larger weight, 1-w = 0.15, in the final grade.&lt;br /&gt;
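The basic rule and the worked example above can be checked with a short Ruby sketch (composite_grade is an illustrative name, not an existing Expertiza method):&lt;br /&gt;

```ruby
# Basic additive-multiplicative rule: weight w goes to the peer-review
# grade, and (1 - w) to a self term that shrinks as the self review
# drifts away from the peer average.
def composite_grade(avg_peer_review_score, self_review_score, w)
  deviation = (avg_peer_review_score - self_review_score).abs / avg_peer_review_score.to_f
  self_term = avg_peer_review_score * (1 - deviation)
  w * avg_peer_review_score + (1 - w) * self_term
end
```

composite_grade(0.8, 1.0, 0.95) rounds to 0.79 (79%) and composite_grade(0.8, 1.0, 0.85) to 0.77 (77%), reproducing the example.&lt;br /&gt;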
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula. It is basic in that a deviation of the self-review score from the peer-review scores can only decrease the final grade, ''and'' any deviation will ''always'' decrease it. We propose another parameter, l (leniency), as a further way (in addition to w) for the instructor to modularly determine the final grade for an assignment. Leniency sets a threshold: the final grade accounts/adjusts for the self review's deviation from the peer reviews only when the deviation reaches this threshold (measured as a percentage deviation from the average peer review). If the deviation does not meet the threshold, no penalty is subtracted from the peer-review grade. In addition, if the deviation does not meet the threshold (the self-review score is sufficiently close to the peer-review scores), the instructor can choose to add points to the final grade based on the magnitude of the difference. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor needs only to pick l as a percentage (similar to the functionality of w). To recap: w should be chosen as the instructor's desired percentage ''of the final grade'' to be determined from peer reviews, with (1-w), conversely, the percentage determined by the extent to which the self review deviates from the peer reviews. And l should be chosen as the largest percentage ''deviation of the self review from the peer review'' that results in no grade deduction (or even a grade increase, if the instructor wants to reward individuals with a small deviation).&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for the case where an instructor does not wish to subtract from the final grade when the deviation is sufficiently small:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l (leniency)&lt;br /&gt;
      grade = avg_peer_review_score&lt;br /&gt;
   else&lt;br /&gt;
      run formula&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition, instead of assigning a final grade equal to avg_peer_review_score when the leniency condition is met, the grade can be adjusted (increased) if the instructor wishes to do so, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula for determining the final grade would then add the small deviation to the final grade rather than subtracting it (in SELF, 1 - ... is changed to 1 + ...):&lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this case, the pseudo-code is: &lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l (leniency)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
   else&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(avg_peer_review_score * (1 '''-''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer-review score 4/5, self-review score 5/5, w = 0.95), the self-review score differs from the peer-review score (4/5) by 25% (1/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = 1/4 = 25%. Based on l (leniency), the instructor can decide:&lt;br /&gt;
1) if a 25% deviation is sufficiently large to warrant penalizing the final grade by (1-w)*(SELF) (so that the final grade is '''79%''', instead of 80%).&lt;br /&gt;
2) if a 25% deviation is sufficiently small to warrant keeping the final grade as grade = avg_peer_review_score, with no penalty for the deviation (so that the grade is '''80%''')&lt;br /&gt;
3) if a 25% deviation is sufficiently small to warrant increasing the final grade by (1-w)*(SELF), where the SELF formula contains a 1 + ..., instead of a 1 - ... (so that the grade is '''81%''')&lt;br /&gt;
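These three options can be sketched in a leniency-aware variant of the basic function (again, all names are illustrative; the reward_closeness flag is an assumption standing in for the instructor's choice between options 2 and 3):&lt;br /&gt;

```ruby
# Leniency-aware composite grade. If the deviation stays within l, either
# keep the peer-review grade untouched (no penalty) or, if the instructor
# opts in, reward the closeness by flipping the sign on the deviation term.
def composite_grade_with_leniency(avg_peer, self_score, w, l, reward_closeness: false)
  deviation = (avg_peer - self_score).abs / avg_peer.to_f
  if deviation <= l
    return avg_peer unless reward_closeness
    w * avg_peer + (1 - w) * (avg_peer * (1 + deviation))  # small bonus
  else
    w * avg_peer + (1 - w) * (avg_peer * (1 - deviation))  # usual penalty
  end
end
```

With the running example (deviation 25%): l = 0.2 triggers the penalty and rounds to 0.79 (option 1); l = 0.3 keeps the grade at 0.80 (option 2); l = 0.3 with reward_closeness: true rounds to 0.81 (option 3).&lt;br /&gt;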
&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We can obtain the peer-review ratings, and the score/grade derived from them, by calling scores() and compute_assignment_score(), respectively, from assignment_participant.rb.&lt;br /&gt;
** Ask for Dr G's input about Expertiza's file structure/design and if the previous group got it correct. (Their diagram is hard to follow without a thorough background.) **&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135787</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135787"/>
		<updated>2020-10-20T16:25:04Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Derive Composite Score */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
'''Design Plan'''&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer-review scores for each review question and an average of those peer reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer-review score. How we plan to derive a composite score is explained in detail in the following section: [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that the authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous place on the score view.&lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
[reference, DOI: 10.21125]&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from 1) the peer reviews and 2) how closely the self review matches the peer reviews is a type of additive scoring rule: it computes a weighted average of the team score (peer reviews) and the student rating (self review). More specifically, it is a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews) and adds this weighted term to the weighted peer-review score. This approach is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0% to 100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows:&lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where:&lt;br /&gt;
avg_peer_review_score is the grade that the mechanism already existing in Expertiza derives from the peer-review scores.&lt;br /&gt;
&lt;br /&gt;
and where:&lt;br /&gt;
w (weight, 0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the peer-review scores; conversely, (1-w) is the proportion determined by how closely the self review matches the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* The average peer review score is 4/5, the self review score is 5/5.&lt;br /&gt;
* The instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 ('''80%''') is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = '''79%'''&lt;br /&gt;
If the instructor instead chose w to equal 0.85, the grade would be '''77%''' (instead of 79%), because the deviation from the peer reviews carries a larger weight, 1-w = 0.15, in the final grade.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula. It is basic in that a deviation of the self-review score from the peer-review scores can only decrease the final grade, ''and'' any deviation will ''always'' decrease it. We propose another parameter, l (leniency), as a further way (in addition to w) for the instructor to modularly determine the final grade for an assignment. Leniency sets a threshold: the final grade accounts/adjusts for the self review's deviation from the peer reviews only when the deviation reaches this threshold (measured as a percentage deviation from the average peer review). If the deviation does not meet the threshold, no penalty is subtracted from the peer-review grade. In addition, if the deviation does not meet the threshold (the self-review score is sufficiently close to the peer-review scores), the instructor can choose to add points to the final grade based on the magnitude of the difference. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor needs only to pick l as a percentage (similar to the functionality of w). To recap: w should be chosen as the instructor's desired percentage ''of the final grade'' to be determined from peer reviews, with (1-w), conversely, the percentage determined by the extent to which the self review deviates from the peer reviews. And l should be chosen as the largest percentage ''deviation of the self review from the peer review'' that results in no grade deduction (or even a grade increase, if the instructor wants to reward individuals with a small deviation).&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for the case where an instructor does not wish to subtract from the final grade when the deviation is sufficiently small:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l (leniency)&lt;br /&gt;
      grade = avg_peer_review_score&lt;br /&gt;
   else&lt;br /&gt;
      run formula&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition, instead of assigning a final grade equal to avg_peer_review_score when the leniency condition is met, the grade can be adjusted (increased) if the instructor wishes to do so, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula for determining the final grade would then add the small deviation to the final grade rather than subtracting it (in SELF, 1 - ... is changed to 1 + ...):&lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this case, the pseudo-code is: &lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l (leniency)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(avg_peer_review_score * (1 '''+''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
   else&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(avg_peer_review_score * (1 '''-''' (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer-review score 4/5, self-review score 5/5, w = 0.95), the self-review score differs from the peer-review score (4/5) by 25% (1/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = 1/4 = 25%. Based on l (leniency), the instructor can decide:&lt;br /&gt;
1) if a 25% deviation is sufficiently large to warrant penalizing the final grade by (1-w)*(SELF) (so that the final grade is '''79%''', instead of 80%).&lt;br /&gt;
2) if a 25% deviation is sufficiently small to warrant keeping the final grade as grade = avg_peer_review_score, with no penalty for the deviation (so that the grade is '''80%''')&lt;br /&gt;
3) if a 25% deviation is sufficiently small to warrant increasing the final grade by (1-w)*(SELF), where the SELF formula contains a 1 + ..., instead of a 1 - ... (so that the grade is '''81%''')&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We can obtain the peer-review ratings, and the score/grade derived from them, by calling scores() and compute_assignment_score(), respectively, from assignment_participant.rb.&lt;br /&gt;
** Ask for Dr G's input about Expertiza's file structure/design and if the previous group got it correct. (Their diagram is hard to follow without a thorough background.) **&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135786</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135786"/>
		<updated>2020-10-20T16:19:59Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Derive Composite Score */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
'''Design Plan'''&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer-review scores for each review question and an average of those peer reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer-review score. How we plan to derive a composite score is explained in detail in the following section: [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous place on the score view.&lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
[reference, DOI: 10.21125]&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from 1) the peer reviews and 2) how closely self reviews match the peer reviews uses a type of additive scoring rule, which computes a weighted average between the team score (peer reviews) and the student rating (self review). More specifically, it uses a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews) and adds its weighted version to the weighted peer-review score. This is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0%-100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows: &lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where: &lt;br /&gt;
avg_peer_review_score is the grade produced by the mechanism already existing in Expertiza for assigning a grade from peer-review scores,&lt;br /&gt;
&lt;br /&gt;
and where:&lt;br /&gt;
w (weight, 0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the peer-review scores; conversely, (1-w) is the proportion of the final grade determined by how closely the self review matches the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* The average peer review score is 4/5, the self review score is 5/5.&lt;br /&gt;
* The instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 ('''80%'''), is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = '''79%'''&lt;br /&gt;
If the instructor chose w to equal 0.85 (instead of 0.95), the grade is '''77%''' (instead of 79%) because deviation from peer reviews is a larger weighted value of the final grade.&lt;br /&gt;
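As a sketch only, the weighted rule above can be written as a small Ruby method (the method and variable names here are ours for illustration, not Expertiza's actual API); the two calls reproduce the w = 0.95 and w = 0.85 examples above:&lt;br /&gt;

```ruby
# Composite grade from an average peer-review score and a self-review score.
# Scores are fractions in 0.0..1.0; w is the weight given to the peer grade.
# Illustrative sketch only -- these are not Expertiza's actual method names.
def composite_grade(avg_peer, self_score, w)
  # Closeness shrinks from 1.0 toward 0.0 as the self review drifts away
  # from the peer average, so SELF = avg_peer * closeness.
  closeness = 1.0 - ((avg_peer - self_score).abs / avg_peer)
  w * avg_peer + (1 - w) * (avg_peer * closeness)
end

# Worked example from the text: peers average 4/5, self review 5/5.
puts composite_grade(0.8, 1.0, 0.95).round(2)  # 0.79
puts composite_grade(0.8, 1.0, 0.85).round(2)  # 0.77
```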
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula. It is basic in that a deviation of the self-review score from the peer-review scores ''always'' results in a decrease in the final grade, and can ''only'' result in a decrease. We propose another parameter, l (leniency), as a second way (in addition to the weight w) for the instructor to tune how the final grade is determined for an assignment. Leniency sets a threshold (measured as percentage deviation from the average peer review) below which a deviation of the self review from the peer reviews carries no penalty: if the deviation does not reach the threshold, nothing is subtracted from the peer-review grade. Alternatively, when the deviation falls below the threshold (that is, the self review is sufficiently close to the peer reviews), the instructor can choose to ''add'' points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply picks l as a percentage, just as with w.&lt;br /&gt;
&lt;br /&gt;
To recap: w is the instructor's desired percentage ''of the final grade'' to be determined by the peer reviews and, conversely, (1-w) is the percentage determined by the extent to which the self review deviates from the peer reviews. The leniency l is the largest percentage deviation ''of the self review from the peer review'' that results in no grade deduction (or even in a grade increase, if the instructor wants to reward individuals whose self reviews closely match their peer reviews).&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for the case in which the instructor does not wish to subtract from the final grade when the deviation is sufficiently small:&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l&lt;br /&gt;
      grade = avg_peer_review_score&lt;br /&gt;
   else&lt;br /&gt;
      run formula&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As mentioned, instead of assigning the average peer-review score as the grade when the leniency condition is met, the grade can instead be adjusted (increased) if the instructor wishes, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula for the final grade would then add the small deviation rather than subtracting it (in SELF, the 1 - ... is changed to 1 + ...):&lt;br /&gt;
&lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
&lt;br /&gt;
where SELF = avg_peer_review_score * (1 + (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Combining the two cases, the full pseudo-code is: &lt;br /&gt;
&lt;br /&gt;
   if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(avg_peer_review_score * (1 + (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
   else&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer review score 4/5, self review score 5/5), the self-review score differs from the peer-review score by 25% (1/5 relative to 4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = (1/5)/(4/5) = 1/4 = 25%. Based on the leniency l, the instructor can decide&lt;br /&gt;
1) whether a 25% deviation is sufficiently large to warrant penalizing the final grade by (1-w)*(SELF),&lt;br /&gt;
2) whether a 25% deviation is sufficiently small to warrant keeping the final grade at grade = avg_peer_review_score, with no penalty for the deviation, or&lt;br /&gt;
3) whether a 25% deviation is sufficiently small to warrant increasing the final grade by (1-w)*(SELF), where the SELF formula contains 1 + ... instead of 1 - ...&lt;br /&gt;
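The leniency-aware variant can likewise be sketched in Ruby (again a hedged sketch with invented names; the hypothetical reward_small_deviation flag corresponds to option 3 above):&lt;br /&gt;

```ruby
# Leniency-aware composite grade. l is the threshold (as a fraction) below
# which a self review's deviation from the peer average is forgiven, or --
# if reward_small_deviation is set -- turned into a small bonus.
# Illustrative sketch; parameter names are ours, not Expertiza's.
def lenient_composite_grade(avg_peer, self_score, w, l, reward_small_deviation: false)
  deviation = (avg_peer - self_score).abs / avg_peer
  if deviation > l
    # Deviation exceeds the threshold: apply the basic penalty formula.
    w * avg_peer + (1 - w) * (avg_peer * (1 - deviation))
  elsif reward_small_deviation
    # Sufficiently close, and the instructor rewards it: add the deviation term.
    w * avg_peer + (1 - w) * (avg_peer * (1 + deviation))
  else
    # Sufficiently close: keep the plain peer-review grade, no penalty.
    avg_peer
  end
end

# 4/5 vs 5/5 is a 25% deviation; a 30% leniency threshold forgives it:
puts lenient_composite_grade(0.8, 1.0, 0.95, 0.3)           # 0.8
# A 10% threshold does not, so the basic penalty formula applies:
puts lenient_composite_grade(0.8, 1.0, 0.95, 0.1).round(2)  # 0.79
```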
&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the composite score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We can obtain the peer-review ratings, and the score/grade derived from them, by calling scores() and compute_assignment_score(), respectively, from assignment_participant.rb. &lt;br /&gt;
** Ask for Dr G's input about Expertiza's file structure/design and if the previous group got it correct. (Their diagram is hard to follow without a thorough background.) **&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135784</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135784"/>
		<updated>2020-10-20T16:15:31Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Derive Composite Score */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
'''Design Plan'''&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer review scores for each review question and an average of those peer-reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer review score. How we plan to derive a composite score is explained in detail in the following section [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous place on the score view.&lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
[reference, DOI: 10.21125]&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from 1) the peer reviews and 2) how closely self reviews match the peer reviews uses a type of additive scoring rule, which computes a weighted average between the team score (peer reviews) and the student rating (self review). More specifically, it uses a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews) and adds its weighted version to the weighted peer-review score. This is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0%-100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows: &lt;br /&gt;
   function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
      grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where: &lt;br /&gt;
avg_peer_review_score is the grade produced by the mechanism already existing in Expertiza for assigning a grade from peer-review scores,&lt;br /&gt;
&lt;br /&gt;
and where:&lt;br /&gt;
w (weight, 0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the peer-review scores; conversely, (1-w) is the proportion of the final grade determined by how closely the self review matches the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* The average peer review score is 4/5, the self review score is 5/5.&lt;br /&gt;
* The instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 ('''80%'''), is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = '''79%'''&lt;br /&gt;
If the instructor chose w to equal 0.85 (instead of 0.95), the grade is '''77%''' (instead of 79%) because deviation from peer reviews is a larger weighted value of the final grade.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula. It is basic in that a deviation of the self-review score from the peer-review scores ''always'' results in a decrease in the final grade, and can ''only'' result in a decrease. We propose another parameter, l (leniency), as a second way (in addition to the weight w) for the instructor to tune how the final grade is determined for an assignment. Leniency sets a threshold (measured as percentage deviation from the average peer review) below which a deviation of the self review from the peer reviews carries no penalty: if the deviation does not reach the threshold, nothing is subtracted from the peer-review grade. Alternatively, when the deviation falls below the threshold (that is, the self review is sufficiently close to the peer reviews), the instructor can choose to ''add'' points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply picks l as a percentage, just as with w.&lt;br /&gt;
&lt;br /&gt;
To recap: w is the instructor's desired percentage ''of the final grade'' to be determined by the peer reviews and, conversely, (1-w) is the percentage determined by the extent to which the self review deviates from the peer reviews. The leniency l is the largest percentage deviation ''of the self review from the peer review'' that results in no grade deduction (or even in a grade increase, if the instructor wants to reward individuals whose self reviews closely match their peer reviews).&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for an instructor who wishes not to subtract from the final grade when the deviation is sufficiently small:&lt;br /&gt;
if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
   grade = avg_peer_review_score&lt;br /&gt;
else&lt;br /&gt;
   run formula&lt;br /&gt;
&lt;br /&gt;
We see that the leniency parameter acts as a gate: the penalty term applies only when the relative deviation exceeds l.&lt;br /&gt;
&lt;br /&gt;
As mentioned, instead of assigning the average peer-review score as the grade when the leniency condition is met, the instructor can choose to increase the grade, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula would then add the small deviation to the final grade rather than subtracting it (in SELF, the 1 - ... term becomes 1 + ...):&lt;br /&gt;
&lt;br /&gt;
function(avg_peer_rev_score, self_rev_score, w)     &lt;br /&gt;
	grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where SELF = avg_peer_review_score * (1 + (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Combining the two cases, the full pseudo-code is: &lt;br /&gt;
&lt;br /&gt;
grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
   grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 + (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
else&lt;br /&gt;
   grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer-review score 4/5, self-review score 5/5), the self-review score differs from the peer-review score by 25% (1/5 relative to 4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = 1/4 = 25%. Based on l (leniency), the instructor can decide:&lt;br /&gt;
1) if a 25% deviation is sufficiently large to warrant penalizing the final grade by (1-w)*(SELF)&lt;br /&gt;
2) if a 25% deviation is sufficiently small to warrant keeping the final grade as grade = avg_peer_review_score, with no penalty for the deviation&lt;br /&gt;
3) if a 25% deviation is sufficiently small to warrant increasing the final grade by (1-w)*(SELF), where the SELF formula contains a 1 + ..., instead of a 1 - ...&lt;br /&gt;
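&lt;br /&gt;
The three options above can be sketched as a single function. The following is an illustrative Ruby sketch, not code from the Expertiza codebase; the method name and signature are assumptions, and scores are taken as fractions of the maximum (e.g. 4/5 = 0.8).&lt;br /&gt;

```ruby
# Illustrative sketch of the proposed composite-grade formula (not
# Expertiza code; names and signature are assumptions).
#   avg_peer   - average peer-review score as a fraction of the maximum
#   self_score - self-review score on the same scale
#   w          - weight of the peer-review average in the final grade
#   l          - leniency threshold on the relative deviation (nil = none)
#   bonus      - when true, a deviation within the threshold adds points
#                instead of leaving the grade at avg_peer
def composite_grade(avg_peer, self_score, w, l: nil, bonus: false)
  deviation = (avg_peer - self_score).abs / avg_peer
  if l && deviation <= l
    return avg_peer unless bonus            # option 2: no penalty
    self_term = avg_peer * (1 + deviation)  # option 3: small reward
  else
    self_term = avg_peer * (1 - deviation)  # option 1: penalty
  end
  w * avg_peer + (1 - w) * self_term
end
```

With the running example (avg_peer = 0.8, self_score = 1.0, w = 0.95) the deviation is 25%: with no leniency the grade is 79%, with l = 0.3 it stays at the peer-review grade of 80%, and with l = 0.3 plus the bonus variant it rises to 81%.&lt;br /&gt;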
&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We can obtain the peer-review ratings and the score/grade derived from them by calling scores() and compute_assignment_score(), respectively, from assignment_participant.rb. &lt;br /&gt;
** Ask for Dr G's input about Expertiza's file structure/design and if the previous group got it correct. (Their diagram is hard to follow without a thorough background.) **&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
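&lt;br /&gt;
A minimal sketch of how the gating could work, assuming a hypothetical helper (this is not existing Expertiza code): peer reviews become visible only once every required self-review has been submitted.&lt;br /&gt;

```ruby
# Hypothetical guard for the "View Scores" page (illustrative only; the
# method name and the shape of the review records are assumptions).
# Peer reviews are visible only when at least one self-review exists and
# every required self-review has been submitted.
def can_view_peer_reviews?(self_reviews)
  !self_reviews.empty? && self_reviews.all? { |r| r[:submitted] }
end
```

In Rails, a check like this would naturally live in a before_action on the controller that renders the peer-review scores.&lt;br /&gt;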
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135783</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135783"/>
		<updated>2020-10-20T16:13:27Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Derive Composite Score */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
'''Design Plan'''&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer-review scores for each review question and an average of those peer reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer-review score. How we plan to derive a composite score is explained in detail in the following section [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that the authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous page on the score view.&lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
[reference, DOI: 10.21125]&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from 1) the peer reviews and 2) how closely self reviews match the peer reviews uses a type of additive scoring rule, which computes a weighted average between the team score (peer reviews) and the student rating (self review). More specifically, it uses a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews) and adds its weighted version to the weighted peer-review score. This is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0%-100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows: &lt;br /&gt;
   function(avg_peer_rev_score, self_rev_score, w)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where: &lt;br /&gt;
avg_peer_review_score is the grade already computed by Expertiza's existing mechanism for deriving a grade from peer-review scores,&lt;br /&gt;
&lt;br /&gt;
and where:&lt;br /&gt;
w (weight, 0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the peer-review scores; the remaining proportion, 1-w, is determined by how closely the self review matches the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* The average peer review score is 4/5, the self review score is 5/5.&lt;br /&gt;
* The instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer-review score of 4/5 ('''80%'''), is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = '''79%'''&lt;br /&gt;
If the instructor chose w to equal 0.85 (instead of 0.95), the grade is '''77%''' (instead of 79%) because the deviation from peer reviews carries a larger weight in the final grade.&lt;br /&gt;
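&lt;br /&gt;
The arithmetic above can be checked with a short sketch of the basic formula (illustrative Ruby, not Expertiza code; the method name is an assumption and scores are fractions of the maximum, so 4/5 = 0.8):&lt;br /&gt;

```ruby
# Basic weighted formula only (no leniency); names are illustrative.
# grade = w * avg_peer + (1 - w) * SELF, where SELF shrinks the
# peer-review grade by the relative deviation of the self review.
def basic_grade(avg_peer, self_score, w)
  self_term = avg_peer * (1 - (avg_peer - self_score).abs / avg_peer)
  w * avg_peer + (1 - w) * self_term
end
```

For avg_peer = 0.8, self_score = 1.0 this reproduces the worked values: 79% at w = 0.95 and 77% at w = 0.85.&lt;br /&gt;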
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula: any deviation of the self-review scores from the peer-review scores can only decrease the final grade, ''and'' a deviation will ''always'' decrease it. We propose another parameter, l (leniency), as a second way (in addition to w, the weight) for the instructor to tune how the final grade is determined. Leniency sets a threshold, measured as a percentage deviation from the average peer review, and the deviation of the self review from the peer reviews affects the final grade only when it reaches this threshold. If the deviation does not reach the threshold, no penalty is subtracted from the peer-review grade. In addition, when the deviation falls below the threshold (the self-review score is sufficiently close to the peer-review scores), the instructor can choose to add points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply picks l as a percentage, just as with w. To recap: w is the instructor's desired percentage ''of the final grade'' to be determined by the peer reviews, with the remainder determined by the extent to which the self reviews deviate from the peer reviews; l is the percentage ''of the deviation of self review from peer review'' below which the deviation causes no deduction (or, if the instructor prefers, even a small increase for individuals with a small deviation).&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for an instructor who wishes not to subtract from the final grade when the deviation is sufficiently small:&lt;br /&gt;
if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
   grade = avg_peer_review_score&lt;br /&gt;
else&lt;br /&gt;
   run formula&lt;br /&gt;
&lt;br /&gt;
We see that the leniency parameter acts as a gate: the penalty term applies only when the relative deviation exceeds l.&lt;br /&gt;
&lt;br /&gt;
As mentioned, instead of assigning the average peer-review score as the grade when the leniency condition is met, the instructor can choose to increase the grade, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula would then add the small deviation to the final grade rather than subtracting it (in SELF, the 1 - ... term becomes 1 + ...):&lt;br /&gt;
&lt;br /&gt;
function(avg_peer_rev_score, self_rev_score, w)     &lt;br /&gt;
	grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where SELF = avg_peer_review_score * (1 + (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Combining the two cases, the full pseudo-code is: &lt;br /&gt;
&lt;br /&gt;
grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
   grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 + (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
else&lt;br /&gt;
   grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer-review score 4/5, self-review score 5/5), the self-review score differs from the peer-review score by 25% (1/5 relative to 4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = 1/4 = 25%. Based on l (leniency), the instructor can decide:&lt;br /&gt;
1) if a 25% deviation is sufficiently large to warrant penalizing the final grade by (1-w)*(SELF)&lt;br /&gt;
2) if a 25% deviation is sufficiently small to warrant keeping the final grade as grade = avg_peer_review_score, with no penalty for the deviation&lt;br /&gt;
3) if a 25% deviation is sufficiently small to warrant increasing the final grade by (1-w)*(SELF), where the SELF formula contains a 1 + ..., instead of a 1 - ...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We can obtain the peer-review ratings and the score/grade derived from them by calling scores() and compute_assignment_score(), respectively, from assignment_participant.rb. &lt;br /&gt;
** Ask for Dr G's input about Expertiza's file structure/design and if the previous group got it correct. (Their diagram is hard to follow without a thorough background.) **&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135782</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135782"/>
		<updated>2020-10-20T16:12:04Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Derive Composite Score */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
'''Design Plan'''&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer-review scores for each review question and an average of those peer reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer-review score. How we plan to derive a composite score is explained in detail in the following section [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that the authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous page on the score view.&lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
[reference, DOI: 10.21125]&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from 1) the peer reviews and 2) how closely self reviews match the peer reviews uses a type of additive scoring rule, which computes a weighted average between the team score (peer reviews) and the student rating (self review). More specifically, it uses a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews) and adds its weighted version to the weighted peer-review score. This is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0%-100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows: &lt;br /&gt;
   function(avg_peer_rev_score, self_rev_score, w)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where: &lt;br /&gt;
avg_peer_review_score is the grade already computed by Expertiza's existing mechanism for deriving a grade from peer-review scores,&lt;br /&gt;
&lt;br /&gt;
and where:&lt;br /&gt;
w (weight, 0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the peer-review scores; the remaining proportion, 1-w, is determined by how closely the self review matches the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* average peer review score is 4/5, self review score is 5/5&lt;br /&gt;
* the instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer-review score of 4/5 (80%), is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = 79%&lt;br /&gt;
If the instructor chose w to equal 0.85 (instead of 0.95), the grade is 77% (instead of 79%) because the deviation from peer reviews carries a larger weight in the final grade.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula: any deviation of the self-review scores from the peer-review scores can only decrease the final grade, ''and'' a deviation will ''always'' decrease it. We propose another parameter, l (leniency), as a second way (in addition to w, the weight) for the instructor to tune how the final grade is determined. Leniency sets a threshold, measured as a percentage deviation from the average peer review, and the deviation of the self review from the peer reviews affects the final grade only when it reaches this threshold. If the deviation does not reach the threshold, no penalty is subtracted from the peer-review grade. In addition, when the deviation falls below the threshold (the self-review score is sufficiently close to the peer-review scores), the instructor can choose to add points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply picks l as a percentage, just as with w. To recap: w is the instructor's desired percentage ''of the final grade'' to be determined by the peer reviews, with the remainder determined by the extent to which the self reviews deviate from the peer reviews; l is the percentage ''of the deviation of self review from peer review'' below which the deviation causes no deduction (or, if the instructor prefers, even a small increase for individuals with a small deviation).&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for an instructor who wishes not to subtract from the final grade when the deviation is sufficiently small:&lt;br /&gt;
if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
   grade = avg_peer_review_score&lt;br /&gt;
else&lt;br /&gt;
   run formula&lt;br /&gt;
&lt;br /&gt;
We see that the leniency parameter acts as a gate: the penalty term applies only when the relative deviation exceeds l.&lt;br /&gt;
&lt;br /&gt;
As mentioned, instead of assigning the average peer-review score as the grade when the leniency condition is met, the instructor can choose to increase the grade, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula would then add the small deviation to the final grade rather than subtracting it (in SELF, the 1 - ... term becomes 1 + ...):&lt;br /&gt;
&lt;br /&gt;
function(avg_peer_rev_score, self_rev_score, w)     &lt;br /&gt;
	grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where SELF = avg_peer_review_score * (1 + (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Combining the two cases, the full pseudo-code is: &lt;br /&gt;
&lt;br /&gt;
grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l(leniency)&lt;br /&gt;
   grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 + (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
else&lt;br /&gt;
   grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer-review score 4/5, self-review score 5/5), the self-review score differs from the peer-review score by 25% (1/5 relative to 4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = 1/4 = 25%. Based on l (leniency), the instructor can decide:&lt;br /&gt;
1) if a 25% deviation is sufficiently large to warrant penalizing the final grade by (1-w)*(SELF)&lt;br /&gt;
2) if a 25% deviation is sufficiently small to warrant keeping the final grade as grade = avg_peer_review_score, with no penalty for the deviation&lt;br /&gt;
3) if a 25% deviation is sufficiently small to warrant increasing the final grade by (1-w)*(SELF), where the SELF formula contains a 1 + ..., instead of a 1 - ...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We can obtain the peer-review ratings and the score/grade derived from them by calling scores() and compute_assignment_score(), respectively, from assignment_participant.rb. &lt;br /&gt;
** Ask for Dr G's input about Expertiza's file structure/design and if the previous group got it correct. (Their diagram is hard to follow without a thorough background.) **&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135781</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135781"/>
		<updated>2020-10-20T16:11:37Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Derive Composite Score */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
'''Design Plan'''&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer review scores for each review question and an average of those peer-reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer review score. How we plan to derive a composite score is explained in detail in the following section [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that the authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous place on the score view.&lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
[reference, DOI: 10.21125]&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from 1) the peer reviews and 2) how closely self reviews match the peer reviews uses a type of additive scoring rule, which computes a weighted average between team score (peer reviews) and student rating (self review). More specifically, it uses a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews) and adds its weighted version to the weighted peer review score. This is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0%-100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows:&lt;br /&gt;
   function(avg_peer_rev_score, self_rev_score, w)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where: &lt;br /&gt;
avg_peer_review_score is simply the mechanism already existing in Expertiza for assigning a grade from peer review scores&lt;br /&gt;
and where:&lt;br /&gt;
w - weight - (0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the peer review scores (the original grade determination); the remaining proportion, 1-w, is determined by the closeness of the self review to the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* average peer review score is 4/5, self review score is 5/5&lt;br /&gt;
* the instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 (80%), is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = 79%&lt;br /&gt;
If the instructor chose w to equal 0.85 (instead of 0.95), the grade is 77% (instead of 79%) because deviation from peer reviews is a larger weighted value of the final grade.&lt;br /&gt;
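As a sketch, the basic rule above can be written as a standalone Ruby method (Expertiza is written in Ruby; the method and variable names here are illustrative, not Expertiza's actual API):&lt;br /&gt;

```ruby
# Sketch of the basic weighted composite-score rule (illustrative only;
# method and variable names are hypothetical, not Expertiza's actual API).
# Scores are fractions in 0.0..1.0 (e.g. 4/5 = 0.8).
def composite_grade(avg_peer_review_score, self_review_score, w)
  # Relative deviation of the self review from the average peer review
  deviation = (avg_peer_review_score - self_review_score).abs / avg_peer_review_score
  self_component = avg_peer_review_score * (1 - deviation)  # the SELF term
  w * avg_peer_review_score + (1 - w) * self_component
end

# Reproducing the example above: peers give 4/5, the author gives 5/5.
composite_grade(0.8, 1.0, 0.95).round(2)  # => 0.79 (79%)
composite_grade(0.8, 1.0, 0.85).round(2)  # => 0.77 (77%)
```

Note that a perfect match (self review equal to the peer average) leaves the grade at the peer review score for any w.&lt;br /&gt;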
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula. It is basic in that a deviation of the self review score from the peer review scores can only decrease the final grade, ''and'' any deviation ''always'' decreases it. We propose another parameter, l (leniency), as a second way (in addition to the weight w) for the instructor to modularly determine the final grade for an assignment. Leniency sets a threshold, measured as a percentage deviation from the average peer review score, below which a deviation is forgiven: the final grade adjusts for the self review's deviation only when the deviation reaches the threshold. If the deviation falls below the threshold, no penalty is subtracted from the peer review score. Alternatively, when the deviation falls below the threshold (the self review is sufficiently close to the peer reviews), the instructor can choose to add points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply picks l as a percentage, similar to w. To recap: w should be chosen as the instructor's desired percentage ''of the final grade'' to be determined by the peer reviews, with the remainder determined by the extent to which the self review deviates from the peer reviews. Likewise, l should be chosen as the largest percentage deviation ''of the self review from the peer reviews'' that results in no grade deduction (or even a grade increase, if the instructor wants to reward individuals with a small deviation).&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for an instructor who wishes not to subtract from the final grade when the deviation is sufficiently small:&lt;br /&gt;
if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l (leniency)&lt;br /&gt;
   grade = avg_peer_review_score&lt;br /&gt;
else&lt;br /&gt;
   run formula&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As mentioned, instead of assigning the grade to avg_peer_review_score when the leniency condition is met, the instructor can choose to adjust (increase) the grade, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula for the final grade would then add the small deviation rather than subtracting it (in SELF, 1 - ... is changed to 1 + ...).&lt;br /&gt;
&lt;br /&gt;
function(avg_peer_rev_score, self_rev_score, w)     &lt;br /&gt;
	grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where SELF = avg_peer_review_score * (1 + (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this case, the pseudo code is: &lt;br /&gt;
&lt;br /&gt;
grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l (leniency)&lt;br /&gt;
   grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 + (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
else&lt;br /&gt;
   grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
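The combined pseudo-code above can be sketched in Ruby as follows (a hedged sketch: the method name, the mode flag, and the parameter names are hypothetical, not Expertiza's actual implementation):&lt;br /&gt;

```ruby
# Sketch of the leniency-aware grading rule (illustrative names only).
# mode controls what happens when the deviation is within the leniency l:
#   :no_penalty -> keep the plain average peer-review score
#   :bonus      -> flip the SELF term to 1 + deviation, raising the grade
def lenient_composite_grade(avg_peer, self_score, w, l, mode: :no_penalty)
  deviation = (avg_peer - self_score).abs / avg_peer
  if deviation <= l
    case mode
    when :no_penalty then avg_peer
    when :bonus      then w * avg_peer + (1 - w) * (avg_peer * (1 + deviation))
    end
  else
    # Deviation exceeds the threshold: apply the penalizing formula.
    w * avg_peer + (1 - w) * (avg_peer * (1 - deviation))
  end
end

lenient_composite_grade(0.8, 1.0, 0.95, 0.30)           # deviation 25% <= 30%: no penalty
lenient_composite_grade(0.8, 1.0, 0.95, 0.20).round(2)  # 25% > 20%: penalized
```

Setting l to 0 recovers the basic formula, since every nonzero deviation then exceeds the threshold.&lt;br /&gt;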
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer review score is 4/5, self review score is 5/5), the self review score differs by 25% (1/5 relative to the peer review score of 4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = (1/5)/(4/5) = 1/4 = 25%. Based on l (leniency), the instructor can decide&lt;br /&gt;
1) if a 25% deviation is sufficiently large to warrant penalizing the final grade by (1-w)*(SELF)&lt;br /&gt;
2) if a 25% deviation is sufficiently small to warrant keeping the final grade as grade = avg_peer_review_score, with no penalty for the deviation&lt;br /&gt;
3) if a 25% deviation is sufficiently small to warrant increasing the final grade by (1-w)*(SELF), where the SELF formula contains a 1 + ..., instead of a 1 - ...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We can obtain the peer review ratings and the score/grade derived from them  by calling scores() and compute_assignment_score(), respectively, from assignment_participant.rb. &lt;br /&gt;
** Ask for Dr G's input about Expertiza's file structure/design and if the previous group got it correct. (Their diagram is hard to follow without a thorough background.) **&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135780</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135780"/>
		<updated>2020-10-20T16:10:26Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Derive Composite Score */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
'''Design Plan'''&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer review scores for each review question and an average of those peer-reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer review score. How we plan to derive a composite score is explained in detail in the following section [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that the authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous place on the score view.&lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
[reference, DOI: 10.21125]&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from 1) the peer reviews and 2) how closely self reviews match the peer reviews uses a type of additive scoring rule, which computes a weighted average between team score (peer reviews) and student rating (self review). More specifically, it uses a type of mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews) and adds its weighted version to the weighted peer review score. This is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0%-100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows:&lt;br /&gt;
   function(avg_peer_rev_score, self_rev_score, w)&lt;br /&gt;
      grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where: &lt;br /&gt;
   avg_peer_review_score is simply the mechanism already existing in Expertiza for assigning a grade from peer review scores&lt;br /&gt;
and where:&lt;br /&gt;
   w - weight - (0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the peer review scores (the original grade determination); the remaining proportion, 1-w, is determined by the closeness of the self review to the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* average peer review score is 4/5, self review score is 5/5&lt;br /&gt;
* the instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 (80%), is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = 79%&lt;br /&gt;
If the instructor chose w to equal 0.85 (instead of 0.95), the grade is 77% (instead of 79%) because deviation from peer reviews is a larger weighted value of the final grade.&lt;br /&gt;
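To see how the choice of w shifts the final grade, the penalizing formula can be swept over several weights (an illustrative Ruby sketch; weighted_grade is a hypothetical helper, not part of Expertiza):&lt;br /&gt;

```ruby
# Sweep the weight w to see how strongly the 25% self-review deviation
# is penalized (illustrative sketch; names are not Expertiza's actual API).
def weighted_grade(avg_peer, self_score, w)
  deviation = (avg_peer - self_score).abs / avg_peer
  w * avg_peer + (1 - w) * (avg_peer * (1 - deviation))
end

# Grades as percentages for the example (peers 4/5, self 5/5):
[1.0, 0.95, 0.85].map { |w| (weighted_grade(0.8, 1.0, w) * 100).round }
# => [80, 79, 77]  (w = 1 ignores the self-review entirely)
```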
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula. It is basic in that a deviation of the self review score from the peer review scores can only decrease the final grade, ''and'' any deviation ''always'' decreases it. We propose another parameter, l (leniency), as a second way (in addition to the weight w) for the instructor to modularly determine the final grade for an assignment. Leniency sets a threshold, measured as a percentage deviation from the average peer review score, below which a deviation is forgiven: the final grade adjusts for the self review's deviation only when the deviation reaches the threshold. If the deviation falls below the threshold, no penalty is subtracted from the peer review score. Alternatively, when the deviation falls below the threshold (the self review is sufficiently close to the peer reviews), the instructor can choose to add points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply picks l as a percentage, similar to w. To recap: w should be chosen as the instructor's desired percentage ''of the final grade'' to be determined by the peer reviews, with the remainder determined by the extent to which the self review deviates from the peer reviews. Likewise, l should be chosen as the largest percentage deviation ''of the self review from the peer reviews'' that results in no grade deduction (or even a grade increase, if the instructor wants to reward individuals with a small deviation).&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for an instructor who wishes not to subtract from the final grade when the deviation is sufficiently small:&lt;br /&gt;
if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l (leniency)&lt;br /&gt;
   grade = avg_peer_review_score&lt;br /&gt;
else&lt;br /&gt;
   run formula&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As mentioned, instead of assigning the grade to avg_peer_review_score when the leniency condition is met, the instructor can choose to adjust (increase) the grade, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula for the final grade would then add the small deviation rather than subtracting it (in SELF, 1 - ... is changed to 1 + ...).&lt;br /&gt;
&lt;br /&gt;
function(avg_peer_rev_score, self_rev_score, w)     &lt;br /&gt;
	grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where SELF = avg_peer_review_score * (1 + (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this case, the pseudo code is: &lt;br /&gt;
&lt;br /&gt;
grade = w*(avg_peer_rev_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l (leniency)&lt;br /&gt;
   grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 + (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
else&lt;br /&gt;
   grade = w*(avg_peer_rev_score) + (1-w)*(avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer review score is 4/5, self review score is 5/5), the self review score differs by 25% (1/5 relative to the peer review score of 4/5). In other words, |avg_peer_review_score - self_review_score|/avg_peer_review_score = (1/5)/(4/5) = 1/4 = 25%. Based on l (leniency), the instructor can decide&lt;br /&gt;
1) if a 25% deviation is sufficiently large to warrant penalizing the final grade by (1-w)*(SELF)&lt;br /&gt;
2) if a 25% deviation is sufficiently small to warrant keeping the final grade as grade = avg_peer_review_score, with no penalty for the deviation&lt;br /&gt;
3) if a 25% deviation is sufficiently small to warrant increasing the final grade by (1-w)*(SELF), where the SELF formula contains a 1 + ..., instead of a 1 - ...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We can obtain the peer review ratings and the score/grade derived from them  by calling scores() and compute_assignment_score(), respectively, from assignment_participant.rb. &lt;br /&gt;
** Ask for Dr G's input about Expertiza's file structure/design and if the previous group got it correct. (Their diagram is hard to follow without a thorough background.) **&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135779</id>
		<title>CSC/ECE 517 Fall 2020 - E2078. Improve self-review Link peer review &amp; self-review to derive grades</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades&amp;diff=135779"/>
		<updated>2020-10-20T16:04:20Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Derive Composite Score */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
In Expertiza, it is currently possible to check the “Allow self-review” box on the Review Strategy tab of assignment creation, and then an author will be asked to review his/her own submission in addition to the submissions of others.  But as currently implemented, nothing is done with the scores on these self-reviews. &lt;br /&gt;
&lt;br /&gt;
There has been a previous attempt at solving this problem, but there were several issues with that implementation:&lt;br /&gt;
&lt;br /&gt;
*The formula for weighting self-reviews is not modular.  It needs to be, since different instructors may want to use different formulas, so several should be supported.&lt;br /&gt;
*There are not enough comments in the code.&lt;br /&gt;
*It seems to work for only one round of review.&lt;br /&gt;
&lt;br /&gt;
View documentation for previous implementation [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2019_-_E1984._Improve_self-review_Link_peer_review_&amp;amp;_self-review_to_derive_grades here].&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
&lt;br /&gt;
*Display the self-review score with peer-review scores for the logged in user&lt;br /&gt;
*Implement a way to achieve a composite score with the combination of the self-review score and peer-review scores&lt;br /&gt;
*Implement a requirement for the logged in user to self-review before viewing peer-reviews&lt;br /&gt;
*Assure that we overcome the issues outlined for the previous implementation of this project&lt;br /&gt;
&lt;br /&gt;
=== Team ===&lt;br /&gt;
&lt;br /&gt;
Courtney Ripoll (ctripoll)&lt;br /&gt;
&lt;br /&gt;
Jonathan Nguyen (jhnguye4)&lt;br /&gt;
&lt;br /&gt;
Justin Kirschner (jkirsch)&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
=== Display Self-Review Scores w/ Peer-Reviews === &lt;br /&gt;
&lt;br /&gt;
It should be possible to see self-review scores juxtaposed with peer-review scores.  Design a way to show them in the regular &amp;quot;View Scores&amp;quot; page and the alternate (heat-map) view.  They should be shown amidst the other reviews, but in a way that highlights them as being a different kind of review.&lt;br /&gt;
&lt;br /&gt;
'''Design Plan'''&lt;br /&gt;
&lt;br /&gt;
In the current implementation of Expertiza, students can view a compilation of all peer review scores for each review question and an average of those peer-reviews. For our project, we plan to add the self-review score alongside the peer-review scores for each review question. In the wire-frame below, note that for each criterion there is a column for each peer-review score and a single column for the self-review score. The avg column then takes an average of all review scores (peer-review and self-review). Additionally, the composite score is displayed on the page under the average peer review score. How we plan to derive a composite score is explained in detail in the following section [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2078._Improve_self-review_Link_peer_review_%26_self-review_to_derive_grades#Derive_Composite_Score Derive Composite Score].&lt;br /&gt;
&lt;br /&gt;
[[File:View wireframe.png|700px]]&lt;br /&gt;
&lt;br /&gt;
In the alternate view, our plan does not alter the current interface too much. The only addition we plan to implement is an additional column describing the composite score a student has received for an assignment. Likewise, there is now an additional doughnut chart providing a visual of the composite score (green) alongside the final review score (yellow).&lt;br /&gt;
&lt;br /&gt;
[[File:Alternate view wireframe.png|950px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Derive Composite Score ===&lt;br /&gt;
&lt;br /&gt;
Implement a way to combine self-review and peer-review scores to derive a composite score. The basic idea is that the authors get more points as their self-reviews get closer to the scores given by the peer reviewers. So the function should take the scores given by peers to a particular rubric criterion and the score given by the user. The result of the formula should be displayed in a conspicuous page on the score view.&lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
[reference, DOI: 10.21125]&lt;br /&gt;
&lt;br /&gt;
The formula we use to determine a final grade from 1) the peer reviews and 2) how closely the self review matches the peer reviews is an additive scoring rule: it computes a weighted average of the team score (peer reviews) and the student rating (self review). More specifically, it is a mixed additive-multiplicative scoring rule, which multiplies the student score (self review) by a function of the team score (peer reviews) and adds the weighted result to the weighted peer review score. This approach is also known as 'assessment by adjustment'. The formula is a practical scoring rule for additive scoring with unsigned percentages (grades from 0%-100%).&lt;br /&gt;
&lt;br /&gt;
The pseudo-code for a function that implements the formula is as follows:&lt;br /&gt;
&lt;br /&gt;
function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
	grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where:&lt;br /&gt;
   SELF = avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
and where: &lt;br /&gt;
   avg_peer_review_score is the grade produced by the mechanism already existing in Expertiza for deriving a grade from peer review scores&lt;br /&gt;
and where:&lt;br /&gt;
   w - weight - (0 &amp;lt;= w &amp;lt;= 1) is the proportion of the final grade determined by the peer review scores; the remaining proportion (1-w) is determined by how closely the self review matches the average of the peer reviews.&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
* average peer review score is 4/5, self review score is 5/5&lt;br /&gt;
* the instructor chooses w to equal 0.95, so that 5% of the grade is determined from the deviation of the self review from the peer reviews.&lt;br /&gt;
The final grade, instead of being the peer review score of 4/5 (80%), is now:&lt;br /&gt;
0.95*(4/5) + 0.05*(4/5*(1-|4/5-5/5|/(4/5))) = 79%&lt;br /&gt;
If the instructor instead chose w to equal 0.85, the grade would be 77% (instead of 79%), because the deviation from the peer reviews carries a larger weight in the final grade.&lt;br /&gt;
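The worked example above can be sketched in Ruby. This is a minimal illustration under stated assumptions: the function name and the 0-1 score scale are ours, not existing Expertiza code.&lt;br /&gt;

```ruby
# Sketch of the basic additive-multiplicative composite score.
# Hypothetical helper, not an existing Expertiza method.
# Scores are fractions in [0, 1]; w weights the peer-review average.
def composite_score(avg_peer, self_score, w)
  # SELF shrinks toward 0 as the self review deviates from the peer average
  self_component = avg_peer * (1 - (avg_peer - self_score).abs / avg_peer)
  w * avg_peer + (1 - w) * self_component
end

# Peers average 4/5 = 0.8, self review 5/5 = 1.0:
puts composite_score(0.8, 1.0, 0.95).round(2)  # => 0.79
puts composite_score(0.8, 1.0, 0.85).round(2)  # => 0.77
```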
&lt;br /&gt;
&lt;br /&gt;
The above is a basic version of the grading formula. It is basic in that a deviation of the self review scores from the peer review scores can ''only'' decrease the final grade, and ''always'' does so. We propose another parameter, l - leniency - as an additional way (besides w - weight) for the instructor to modularly determine the final grade for an assignment.&lt;br /&gt;
&lt;br /&gt;
Leniency sets a threshold, measured as percentage deviation from the average peer review score, at which the final grade starts to account for the self review's deviation from the peer reviews. If the deviation does not reach the threshold, no penalty is subtracted from the peer-review grade. Alternatively, when the deviation does not reach the threshold (the self review score is sufficiently close to the peer review scores), the instructor can choose to ''add'' points to the final grade based on the magnitude of the deviation. Since the formula is a mixed additive-multiplicative scoring rule (mentioned above), the instructor simply picks l as a percentage, analogous to w.&lt;br /&gt;
&lt;br /&gt;
To recap: w should be chosen as the instructor's desired percentage ''of the final grade'' to be determined by the peer reviews, with the remainder determined by the extent to which the self review deviates from them. l should be chosen as the percentage ''of deviation of the self review from the peer review'' below which the deviation causes no grade deduction (or even a grade increase, if the instructor wants to reward individuals with a small deviation).&lt;br /&gt;
&lt;br /&gt;
The following is pseudo-code for an instructor who does not wish to subtract from the final grade when the deviation is sufficiently small:&lt;br /&gt;
if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l (leniency)&lt;br /&gt;
   grade = avg_peer_review_score&lt;br /&gt;
else&lt;br /&gt;
   run formula&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As mentioned, instead of assigning the grade to avg_peer_review_score when the leniency condition is met, the grade can instead be adjusted upward, if the instructor wishes, since the self review is sufficiently close (as determined by l) to the peer reviews. The formula for determining the final grade then adds the small deviation to the final grade rather than subtracting it: in SELF, the term 1 - ... is changed to 1 + ...&lt;br /&gt;
&lt;br /&gt;
function(avg_peer_review_score, self_review_score, w)&lt;br /&gt;
	grade = w*(avg_peer_review_score) + (1-w)*(SELF)&lt;br /&gt;
	&lt;br /&gt;
where SELF = avg_peer_review_score * (1 + (|avg_peer_review_score - self_review_score|/avg_peer_review_score))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Combining the two cases, the complete pseudo-code is:&lt;br /&gt;
&lt;br /&gt;
if |avg_peer_review_score - self_review_score|/avg_peer_review_score &amp;lt;= l (leniency)&lt;br /&gt;
   grade = w*(avg_peer_review_score) + (1-w)*(avg_peer_review_score * (1 + (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
else&lt;br /&gt;
   grade = w*(avg_peer_review_score) + (1-w)*(avg_peer_review_score * (1 - (|avg_peer_review_score - self_review_score|/avg_peer_review_score)))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the previous example (average peer review score 4/5, self review score 5/5), the self review differs from the peer review score by 25%: |avg_peer_review_score - self_review_score|/avg_peer_review_score = (1/5)/(4/5) = 1/4 = 25%. Based on l - leniency - the instructor can decide:&lt;br /&gt;
# whether a 25% deviation is large enough to warrant penalizing the final grade by (1-w)*(SELF)&lt;br /&gt;
# whether a 25% deviation is small enough to warrant keeping the final grade at grade = avg_peer_review_score, with no penalty for the deviation&lt;br /&gt;
# whether a 25% deviation is small enough to warrant increasing the final grade by (1-w)*(SELF), where the SELF formula contains 1 + ... instead of 1 - ...&lt;br /&gt;
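The combined formula with leniency can be sketched in Ruby as follows. This implements the bonus variant (option 3 above); the function name and the 0-1 score scale are illustrative assumptions, not existing Expertiza code.&lt;br /&gt;

```ruby
# Sketch of the composite score with a leniency threshold l.
# Hypothetical helper: if the relative deviation is within l, the
# deviation is added as a bonus (the "1 + ..." variant); otherwise
# it is subtracted as a penalty (the basic "1 - ..." formula).
def composite_score_with_leniency(avg_peer, self_score, w, l)
  deviation = (avg_peer - self_score).abs / avg_peer
  sign = deviation > l ? -1 : 1
  w * avg_peer + (1 - w) * (avg_peer * (1 + sign * deviation))
end

# 25% deviation (peers 4/5, self 5/5), w = 0.95:
puts composite_score_with_leniency(0.8, 1.0, 0.95, 0.30).round(2)  # => 0.81 (bonus)
puts composite_score_with_leniency(0.8, 1.0, 0.95, 0.10).round(2)  # => 0.79 (penalty)
```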
&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the combined score into grading, we will change the logic in grades_controller.rb, which implements the grading formula. We can obtain the peer review ratings, and the score/grade derived from them, by calling scores() and compute_assignment_score(), respectively, from assignment_participant.rb.&lt;br /&gt;
&lt;br /&gt;
=== Implement Requirement to Review Self before Viewing Peer Reviews ===&lt;br /&gt;
&lt;br /&gt;
There would be no challenge in giving the same self-review scores as the peer reviewers gave if the authors could see peer-review scores before they submitted their self-reviews. The user should be required to submit their self-evaluation(s) before seeing the results of their peer evaluations. &lt;br /&gt;
&lt;br /&gt;
'''Implementation Plan'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Manual Testing === &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RSpec Testing ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Relevant Links ==&lt;br /&gt;
&lt;br /&gt;
'''Our repository''': https://github.com/jhnguye4/expertiza/tree/beta&lt;br /&gt;
&lt;br /&gt;
'''Pull request''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
'''Video demo''': Does not exist yet&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[https://www.researchgate.net/profile/Paul_Vossen/publication/333022439_SCORING_MODELS_FOR_PEER_ASSESSMENT_IN_TEAM-BASED_LEARNING_PROJECTS/links/5cd6d6b4299bf14d958a4b99/SCORING-MODELS-FOR-PEER-ASSESSMENT-IN-TEAM-BASED-LEARNING-PROJECTS.pdf Scoring models for peer assessment in team-based learning projects]&lt;br /&gt;
&lt;br /&gt;
[https://github.com/expertiza/expertiza/pull/1611 E1984 Pull Request]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Quiz_partial.png&amp;diff=134753</id>
		<title>File:Quiz partial.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Quiz_partial.png&amp;diff=134753"/>
		<updated>2020-10-12T22:32:12Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2068._Refactor_quiz_questionnaires_controller.rb&amp;diff=134752</id>
		<title>CSC/ECE 517 Fall 2020 - E2068. Refactor quiz questionnaires controller.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2068._Refactor_quiz_questionnaires_controller.rb&amp;diff=134752"/>
		<updated>2020-10-12T22:31:54Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Code Modifications */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
This project contributes to [https://github.com/expertiza/expertiza Expertiza], an open-source project using [http://rubyonrails.org/ Ruby on Rails]. Expertiza is a platform for student learning that encourages active and cooperative learning while discouraging plagiarism. &lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
'''quiz_questionnaires_controller.rb''' is used in Expertiza to handle all functionality related to quizzes. A quiz is a type of questionnaire that allows reviewees to interact with their reviewers and make sure they have read the submissions before reviewing. The student creating a quiz is supposed to ask questions related to their work, which, ideally, a reviewer should be able to answer. (If a reviewer cannot answer the questions about the reviewed work, then we might doubt the quality of that reviewer’s review.)  This controller needs some changes as detailed below.&lt;br /&gt;
&lt;br /&gt;
==Issues==&lt;br /&gt;
# Change the way min_question_score and max_question_score are set for @questionnaire on lines 39-40, as well as on lines 53-54.&lt;br /&gt;
#*These statements set the min and max scores to 0 and 1, respectively, regardless of what the user enters, which is not intended.&lt;br /&gt;
#*Change it so that the values are set according to what the user enters from the UI.&lt;br /&gt;
#Change the error message on line 78:&lt;br /&gt;
#Consider lines 259-265, where different methods are called with the same parameters (question, choice_key, q_choices) to create different types of questions, depending on q_type.&lt;br /&gt;
#Make appropriate changes to tests so that they pass after the above changes.&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Min and Max Question Score==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
Quizzes are created with a default minimum_ and maximum_question_score, which is the minimum or maximum score that can be obtained.  The values are automatically set by the controller as 0 and 1, respectively.  The final score of a quiz is calculated by multiplying the weight of the question by the score and summing the value of each question.&lt;br /&gt;
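The weighted-sum calculation described above can be sketched as follows (illustrative only; Question here is a stand-in struct, not the actual Expertiza model):&lt;br /&gt;

```ruby
# Illustrative weighted-sum quiz score: each question's score is
# multiplied by its weight and the products are summed.
# Question is a stand-in struct, not the real Expertiza model.
Question = Struct.new(:weight, :score)

def quiz_total(questions)
  questions.sum { |q| q.weight * q.score }
end

quiz = [Question.new(2, 1), Question.new(1, 0), Question.new(3, 1)]
puts quiz_total(quiz)  # => 5
```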
===Problem(s)===&lt;br /&gt;
Currently, minimum and maximum values cannot be set in a custom way per quiz.&lt;br /&gt;
===Solution===&lt;br /&gt;
Fields for minimum_question_score and maximum_question_score have been added to the form for creating each new quiz, and subsequently passed to the controller.  The value is set per quiz, not per question.  The values are not restricted.&lt;br /&gt;
&lt;br /&gt;
==Error Message on Line 78==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
The original error message stated &amp;quot;Your quiz has been taken by some other students, you cannot edit it anymore.&amp;quot;&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
This error message is vague and can be easily misunderstood.&lt;br /&gt;
===Solution===&lt;br /&gt;
The error message now states &amp;quot;Your quiz has been taken by one or more students; you cannot edit it anymore.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==Creating Questions==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
When creating a quiz, there are currently three different methods for each of the three different question types:  true-false, radio, and multiple choice.  &lt;br /&gt;
===Problem(s)===&lt;br /&gt;
Radio and multiple choice implement very similar, almost duplicated, functionality in both methods.&lt;br /&gt;
===Solution===&lt;br /&gt;
Radio and multiple choice questions have been combined into a single method so as to remove the duplicated functionality.  True-false questions will remain separate, because their functionality is significantly different; merging them as well would require too many changes in the code and tests.&lt;br /&gt;
&lt;br /&gt;
==Files Involved==&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/controllers/quiz_questionnaires_controller.rb app/controllers/quiz_questionnaires_controller.rb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/views/questionnaires/_quiz_questionnaire.html.erb app/views/questionnaires/_quiz_questionnaire.html.erb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/views/questionnaires/view.html.erb app/views/questionnaires/view.html.erb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/spec/controllers/quiz_questionnaires_controller_spec.rb spec/controllers/quiz_questionnaires_controller_spec.rb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/views/questionnaires/_new_question_template.html.erb app/views/questionnaires/_new_question_template.html.erb]&lt;br /&gt;
&lt;br /&gt;
==Code Modifications==&lt;br /&gt;
The image below shows the addition of the min_/max_question_score fields to the quiz questionnaire form. &lt;br /&gt;
[[File:Quiz_form2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image below shows the values from the form used in the controller.&lt;br /&gt;
[[File:Quiz_controller.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image below shows the creation of a single function to handle both types of multiple choice questions. It also shows that the new function is called in the save_choices function instead of calling both the original functions.&lt;br /&gt;
&lt;br /&gt;
[[File:Quiz_poly.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image below shows changes necessary for assigning the correct answer for radio button multiple choice questions after making the changes from the previous image (the changes to the functions in the controller).&lt;br /&gt;
[[File:quiz_partial.png]]&lt;br /&gt;
&lt;br /&gt;
=Testing our Solutions=&lt;br /&gt;
==Running Tests==&lt;br /&gt;
&amp;lt;code&amp;gt;rspec spec/controllers/quiz_questionnaires_controller.rb&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Testing Server==&lt;br /&gt;
[http://152.7.98.81:8080 http://152.7.98.81:8080]&lt;br /&gt;
&lt;br /&gt;
The instructions below can also be found [https://docs.google.com/document/d/1Od7fB0jyCdP5VJuUOWAsbvhlP15hN-19WS0ZbPwRkEk/edit?usp=sharing here] with screenshots to follow along.&lt;br /&gt;
&lt;br /&gt;
===Creating New Assignment===&lt;br /&gt;
#On the main menu, click on “Manage.”  Be sure not to click on any of the sub-items.&lt;br /&gt;
#The following page will have a sub-menu for Courses, Assignments, and Questionnaires.  Click on &amp;quot;Assignment.&amp;quot;&lt;br /&gt;
#Click on the plus sign on the right side of the page to create a new assignment.&lt;br /&gt;
#Under the “General” tab, fill out the “Assignment Name,” select a course from the drop down menu, and check “has quiz.”&lt;br /&gt;
#Once you have filled out the form, click on the “Create” button at the bottom of the page to create the assignment.&lt;br /&gt;
#Once the assignment is created, a new field called “Number of Quiz questions” will be created on the right side of the page.  Its default value will be 0.  Change it to your desired number of questions. (Note:  There are three different kinds of questions:  Multiple Choice (four answer choices), Radio (four answer choices), and True-False.  In order to test all answer possibilities, you can set this value to 17.)&lt;br /&gt;
#Click on the Due Dates tab.&lt;br /&gt;
#Set the dates for submission and review to any date after the current date.&lt;br /&gt;
#Click &amp;quot;Save.&amp;quot;&lt;br /&gt;
#Under editing the assignment, click on the Other Stuff tab.&lt;br /&gt;
#Click on “Add Participant.”&lt;br /&gt;
#Add instructor6 as a participant.&lt;br /&gt;
&lt;br /&gt;
===Creating New Quiz===&lt;br /&gt;
#On the main menu, go to the &amp;quot;Assignments&amp;quot; tab.&lt;br /&gt;
#This page will look similar to the Expertiza home page for the class.  Find the assignment you just created under the “Tasks you have not yet started” list and click on it.&lt;br /&gt;
#On the assignment page, click on &amp;quot;Create New Quiz&amp;quot; under “Quiz.”&lt;br /&gt;
#Fill out the form.  Once you select the type for each question, the view will update to allow you to enter your answer options.  Remember these settings to check the correctness later.  (Note:  If you only provide one potential answer for the “Multiple Choice - Checkbox” option, the quiz will not be created and the entire form will clear.)&lt;br /&gt;
#Once you have filled out the entire form, click &amp;quot;Save&amp;quot; at the bottom of the page. &lt;br /&gt;
#You will be returned to the “Submit work for Assignment” page.  There will be new options under “Quiz.”  Click on “View Quiz.”&lt;br /&gt;
#This page should show all of the settings of your quiz, including correct answers, weight, and minimum and maximum question score.&lt;br /&gt;
&lt;br /&gt;
=Team Information=&lt;br /&gt;
: Colleen &amp;quot;Bria&amp;quot; Engen (ceengen)&lt;br /&gt;
: Justin Kirschner (jkirsch)&lt;br /&gt;
: Darby Madewell (demadewe)&lt;br /&gt;
: '''Mentor:''' Sanket Pai (sgpai)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
*[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
*[https://github.com/Justin-Kirschner/expertiza GitHub E2068 Repository Fork]&lt;br /&gt;
*[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
*[http://research.csc.ncsu.edu/efg/expertiza Expertiza project Details]&lt;br /&gt;
*[https://www.youtube.com/channel/UCdKXzox7hrWjfOMML6FzTWg Expertiza YouTube Channel]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2068._Refactor_quiz_questionnaires_controller.rb&amp;diff=134619</id>
		<title>CSC/ECE 517 Fall 2020 - E2068. Refactor quiz questionnaires controller.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2068._Refactor_quiz_questionnaires_controller.rb&amp;diff=134619"/>
		<updated>2020-10-11T22:30:48Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Code Modifications */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
This project contributes to [https://github.com/expertiza/expertiza Expertiza], an open-source project using [http://rubyonrails.org/ Ruby on Rails]. Expertiza is a platform for student learning that encourages active and cooperative learning while discouraging plagiarism. &lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
'''quiz_questionnaires_controller.rb''' is used in Expertiza to handle all functionality related to quizzes. A quiz is a type of questionnaire that allows reviewees to interact with their reviewers and make sure they have read the submissions before reviewing. The student creating a quiz is supposed to ask questions related to their work, which, ideally, a reviewer should be able to answer. (If a reviewer cannot answer the questions about the reviewed work, then we might doubt the quality of that reviewer’s review.)  This controller needs some changes as detailed below.&lt;br /&gt;
&lt;br /&gt;
==Issues==&lt;br /&gt;
# Change the way min_question_score and max_question_score are set for @questionnaire on lines 39-40, as well as on lines 53-54.&lt;br /&gt;
#*These statements set the min and max scores to 0 and 1, respectively, regardless of what the user enters, which is not intended.&lt;br /&gt;
#*Change it so that the values are set according to what the user enters from the UI.&lt;br /&gt;
#Change the error message on line 78:&lt;br /&gt;
#Consider lines 259-265, where different methods are called with the same parameters (question, choice_key, q_choices) to create different types of questions, depending on q_type.&lt;br /&gt;
#Make appropriate changes to tests so that they pass after the above changes.&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Min and Max Question Score==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
Quizzes are created with a default minimum_ and maximum_question_score, which is the minimum or maximum score that can be obtained.  The values are automatically set by the controller as 0 and 1, respectively.  The final score of a quiz is calculated by multiplying the weight of the question by the score and summing the value of each question.&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
Currently, minimum and maximum values cannot be set in a custom way per quiz.&lt;br /&gt;
===Solution===&lt;br /&gt;
Fields for minimum_question_score and maximum_question_score have been added to the form for creating each new quiz, and subsequently passed to the controller.  The value is set per quiz, not per question.  The values are not restricted.&lt;br /&gt;
&lt;br /&gt;
==Error Message on Line 78==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
The original error message stated &amp;quot;Your quiz has been taken by some other students, you cannot edit it anymore.&amp;quot;&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
This error message is vague and can be easily misunderstood.&lt;br /&gt;
===Solution===&lt;br /&gt;
The error message now states &amp;quot;Your quiz has been taken by one or more students; you cannot edit it anymore.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==Creating Questions==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
When creating a quiz, there are currently three different methods for each of the three different question types:  true-false, radio, and multiple choice.  &lt;br /&gt;
===Problem(s)===&lt;br /&gt;
Radio and multiple choice implement very similar, almost duplicated, functionality in both methods.&lt;br /&gt;
===Solution===&lt;br /&gt;
Radio and multiple choice questions have been combined into a single method so as to remove the duplicated functionality.  True-false questions will remain separate, because their functionality is significantly different; merging them as well would require too many changes in the code and tests.&lt;br /&gt;
&lt;br /&gt;
==Files Involved==&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/controllers/quiz_questionnaires_controller.rb app/controllers/quiz_questionnaires_controller.rb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/views/questionnaires/_quiz_questionnaire.html.erb app/views/questionnaires/_quiz_questionnaire.html.erb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/views/questionnaires/view.html.erb app/views/questionnaires/view.html.erb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/spec/controllers/quiz_questionnaires_controller_spec.rb spec/controllers/quiz_questionnaires_controller_spec.rb]&lt;br /&gt;
&lt;br /&gt;
==Code Modifications==&lt;br /&gt;
The image below shows the addition of the min_/max_question_score fields to the quiz questionnaire form. &lt;br /&gt;
[[File:Quiz_form2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image below shows the values from the form used in the controller.&lt;br /&gt;
[[File:Quiz_controller.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image below shows the creation of a single function to handle both types of multiple choice questions. It also shows that the new function is called in the save_choices function instead of calling both the original functions.&lt;br /&gt;
&lt;br /&gt;
[[File:Quiz_poly.png]]&lt;br /&gt;
&lt;br /&gt;
=Testing our Solutions=&lt;br /&gt;
==Running Tests==&lt;br /&gt;
&amp;lt;code&amp;gt;rspec spec/controllers/quiz_questionnaires_controller.rb&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Testing Server==&lt;br /&gt;
[http://152.7.98.81:8080 http://152.7.98.81:8080]&lt;br /&gt;
&lt;br /&gt;
===Creating New Assignment===&lt;br /&gt;
#Click on &amp;quot;Manage.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;Assignment.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;New Assignment.&amp;quot;&lt;br /&gt;
#Check &amp;quot;has quiz&amp;quot; under the &amp;quot;General&amp;quot; tab.&lt;br /&gt;
#Set number of quiz questions in &amp;quot;General&amp;quot; tab.  Note that this will not show up until after you save.&lt;br /&gt;
#Set Due Date to any date after the current date under the &amp;quot;Due Dates&amp;quot; tab.&lt;br /&gt;
#Set instructor6 as a participant under the &amp;quot;Other Stuff&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
===Creating New Quiz===&lt;br /&gt;
#Go to the &amp;quot;Assignments&amp;quot; tab.&lt;br /&gt;
#Find the assignment you created using the steps above.&lt;br /&gt;
#Go to &amp;quot;Your Work.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;Create New Quiz.&amp;quot;&lt;br /&gt;
#Fill out the form and click &amp;quot;Save.&amp;quot;  This page should show all of the settings of your quiz, including correct answers, weight, and minimum and maximum question score.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Team Information=&lt;br /&gt;
: Colleen &amp;quot;Bria&amp;quot; Engen (ceengen)&lt;br /&gt;
: Justin Kirschner (jkirsch)&lt;br /&gt;
: Darby Madewell (demadewe)&lt;br /&gt;
: '''Mentor:''' Sanket Pai (sgpai)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
*[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
*[https://github.com/Justin-Kirschner/expertiza GitHub E2068 Repository Fork]&lt;br /&gt;
*[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
*[http://research.csc.ncsu.edu/efg/expertiza Expertiza project Details]&lt;br /&gt;
*[https://www.youtube.com/channel/UCdKXzox7hrWjfOMML6FzTWg Expertiza YouTube Channel]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Quiz_poly.png&amp;diff=134618</id>
		<title>File:Quiz poly.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Quiz_poly.png&amp;diff=134618"/>
		<updated>2020-10-11T22:24:43Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2068._Refactor_quiz_questionnaires_controller.rb&amp;diff=134617</id>
		<title>CSC/ECE 517 Fall 2020 - E2068. Refactor quiz questionnaires controller.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2068._Refactor_quiz_questionnaires_controller.rb&amp;diff=134617"/>
		<updated>2020-10-11T22:24:30Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Code Modifications */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
This project contributes to [https://github.com/expertiza/expertiza Expertiza], an open-source project using [http://rubyonrails.org/ Ruby on Rails]. Expertiza is a platform for student learning that encourages active and cooperative learning while discouraging plagiarism. &lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
'''quiz_questionnaires_controller.rb''' is used in Expertiza to handle all functionality related to quizzes. A quiz is a type of questionnaire that allows reviewees to interact with their reviewers and make sure they have read the submissions before reviewing. The student creating a quiz is supposed to ask questions related to their work, which, ideally, a reviewer should be able to answer. (If a reviewer cannot answer the questions about the reviewed work, then we might doubt the quality of that reviewer’s review.)  This controller needs some changes as detailed below.&lt;br /&gt;
&lt;br /&gt;
==Issues==&lt;br /&gt;
# Change the way min_question_score and max_question_score are set for @questionnaire on lines 39-40, as well as on lines 53-54.&lt;br /&gt;
#*These statements set the min and max scores to 0 and 1, respectively, regardless of what the user enters, which is not intended.&lt;br /&gt;
#*Change it so that the values are set according to what the user enters from the UI.&lt;br /&gt;
#Change the error message on line 78:&lt;br /&gt;
#Consider lines 259-265, where different methods are called with the same parameters (question, choice_key, q_choices) to create different types of questions, depending on q_type.&lt;br /&gt;
#Make appropriate changes to tests so that they pass after the above changes.&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Min and Max Question Score==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
Quizzes are created with a default minimum_ and maximum_question_score, which is the minimum or maximum score that can be obtained.  The values are automatically set by the controller as 0 and 1, respectively.  The final score of a quiz is calculated by multiplying the weight of the question by the score and summing the value of each question.&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
Currently, minimum and maximum values cannot be set in a custom way per quiz.&lt;br /&gt;
===Solution===&lt;br /&gt;
Fields for minimum_question_score and maximum_question_score have been added to the form for creating each new quiz, and their values are passed to the controller.  The values are set per quiz, not per question, and are not restricted by any validation.&lt;br /&gt;
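A minimal sketch of how the submitted form values might flow into the questionnaire attributes, replacing the hard-coded 0 and 1. The parameter and method names are assumptions for illustration, not the controller's exact code.&lt;br /&gt;

```ruby
# Hypothetical sketch: read the min/max question scores from the form
# parameters instead of hard-coding 0 and 1. Integer() raises an
# ArgumentError if the user submits a non-numeric value.
def questionnaire_scores(params)
  {
    min_question_score: Integer(params[:questionnaire][:min_question_score]),
    max_question_score: Integer(params[:questionnaire][:max_question_score])
  }
end

form = { questionnaire: { min_question_score: "0", max_question_score: "5" } }
questionnaire_scores(form)
```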
&lt;br /&gt;
==Error Message on Line 78==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
The original error message stated &amp;quot;Your quiz has been taken by some other students, you cannot edit it anymore.&amp;quot;&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
This error message is vague and can be easily misunderstood.&lt;br /&gt;
===Solution===&lt;br /&gt;
The error message now states &amp;quot;Your quiz has been taken by one or more students; you cannot edit it anymore.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==Creating Questions==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
When creating a quiz, there are currently three separate methods, one for each of the three question types:  true-false, radio, and multiple choice.&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
The radio and multiple-choice methods implement nearly identical, largely duplicated functionality.&lt;br /&gt;
===Solution===&lt;br /&gt;
The radio and multiple-choice methods have been combined into a single method to remove the duplication.  True-false questions remain in a separate method because their functionality differs significantly; merging them as well would require disproportionately many changes to the code and tests.&lt;br /&gt;
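The consolidation can be sketched like this. All method names and q_type values here are illustrative stand-ins for the controller's helpers; the real code operates on question records and their choices.&lt;br /&gt;

```ruby
# Sketch of dispatching on q_type: true-false keeps its own helper,
# while radio and multiple-choice share a single implementation.
# All names below are illustrative, not Expertiza's exact API.
def save_true_false_choice(question, choice_key, q_choices)
  "true_false choice saved for #{question}"
end

def save_multiple_choice(question, choice_key, q_choices)
  "multiple choice saved for #{question}"
end

def save_question_choice(question, choice_key, q_choices, q_type)
  if q_type == "TrueFalse"
    save_true_false_choice(question, choice_key, q_choices)
  else
    # both radio and checkbox-style multiple choice land here
    save_multiple_choice(question, choice_key, q_choices)
  end
end
```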
&lt;br /&gt;
==Files Involved==&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/controllers/quiz_questionnaires_controller.rb app/controllers/quiz_questionnaires_controller.rb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/views/questionnaires/_quiz_questionnaire.html.erb app/views/questionnaires/_quiz_questionnaire.html.erb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/views/questionnaires/view.html.erb app/views/questionnaires/view.html.erb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/spec/controllers/quiz_questionnaires_controller_spec.rb spec/controllers/quiz_questionnaires_controller_spec.rb]&lt;br /&gt;
&lt;br /&gt;
==Code Modifications==&lt;br /&gt;
The image below shows the addition of the min_/max_question_score fields to the quiz questionnaire form. &lt;br /&gt;
[[File:Quiz_form2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image below shows the values from the form used in the controller.&lt;br /&gt;
[[File:Quiz_controller.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image below shows the creation of a single function to handle both types of multiple-choice questions, and shows that save_choices now calls this new function in place of the two original functions.&lt;br /&gt;
&lt;br /&gt;
[[File:Quiz_poly.png]]&lt;br /&gt;
&lt;br /&gt;
=Testing our Solutions=&lt;br /&gt;
==Running Tests==&lt;br /&gt;
&amp;lt;code&amp;gt;rspec spec/controllers/quiz_questionnaires_controller_spec.rb&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Testing Server==&lt;br /&gt;
[http://152.7.98.81:8080 http://152.7.98.81:8080]&lt;br /&gt;
&lt;br /&gt;
===Creating New Assignment===&lt;br /&gt;
#Click on &amp;quot;Manage.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;Assignment.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;New Assignment.&amp;quot;&lt;br /&gt;
#Check &amp;quot;has quiz&amp;quot; under the &amp;quot;General&amp;quot; tab.&lt;br /&gt;
#Set number of quiz questions in &amp;quot;General&amp;quot; tab.  Note that this will not show up until after you save.&lt;br /&gt;
#Set Due Date to any date after the current date under the &amp;quot;Due Dates&amp;quot; tab.&lt;br /&gt;
#Set instructor6 as a participant under the &amp;quot;Other Stuff&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
===Creating New Quiz===&lt;br /&gt;
#Go to the &amp;quot;Assignments&amp;quot; tab.&lt;br /&gt;
#Find the assignment you created using the steps above.&lt;br /&gt;
#Go to &amp;quot;Your Work.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;Create New Quiz.&amp;quot;&lt;br /&gt;
#Fill out the form and click &amp;quot;Save.&amp;quot;  This page should show all of the settings of your quiz, including correct answers, weight, and minimum and maximum question score.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Team Information=&lt;br /&gt;
: Colleen &amp;quot;Bria&amp;quot; Engen (ceengen)&lt;br /&gt;
: Justin Kirschner (jkirsch)&lt;br /&gt;
: Darby Madewell (demadewe)&lt;br /&gt;
: '''Mentor:''' Sanket Pai (sgpai)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
*[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
*[https://github.com/Justin-Kirschner/expertiza GitHub E2068 Repository Fork]&lt;br /&gt;
*[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
*[http://research.csc.ncsu.edu/efg/expertiza Expertiza project Details]&lt;br /&gt;
*[https://www.youtube.com/channel/UCdKXzox7hrWjfOMML6FzTWg Expertiza YouTube Channel]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2068._Refactor_quiz_questionnaires_controller.rb&amp;diff=134616</id>
		<title>CSC/ECE 517 Fall 2020 - E2068. Refactor quiz questionnaires controller.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2068._Refactor_quiz_questionnaires_controller.rb&amp;diff=134616"/>
		<updated>2020-10-11T22:20:15Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Running Tests */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
This project contributes to [https://github.com/expertiza/expertiza Expertiza], an open-source project using [http://rubyonrails.org/ Ruby on Rails]. Expertiza is a platform for student learning that encourages active and cooperative learning while discouraging plagiarism. &lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
'''quiz_questionnaires_controller.rb''' handles all quiz-related functionality in Expertiza. A quiz is a type of questionnaire that lets reviewees interact with their reviewers and helps ensure that reviewers read a submission before reviewing it. The student creating a quiz is supposed to ask questions related to their work, which, ideally, a reviewer should be able to answer. (If a reviewer cannot answer the questions about the reviewed work, then we might doubt the quality of that reviewer’s review.)  This controller needs the changes detailed below.&lt;br /&gt;
&lt;br /&gt;
==Issues==&lt;br /&gt;
# Change the way min_question_score and max_question_score are set for @questionnaire on lines 39-40, as well as on lines 53-54.&lt;br /&gt;
#*These statements set the min and max scores to 0 and 1, respectively, regardless of what the user enters, which is not intended.&lt;br /&gt;
#*Change it so that the values are set according to what the user enters from the UI.&lt;br /&gt;
#Change the error message on line 78.&lt;br /&gt;
#Consider lines 259-265: different methods are called with the same parameters (question, choice_key, q_choices) to create different types of questions, depending on q_type. Consolidate this duplicated logic.&lt;br /&gt;
#Make appropriate changes to tests so that they pass after the above changes.&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Min and Max Question Score==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
Quizzes are created with a default minimum_ and maximum_question_score, the lowest and highest scores that can be obtained on a question.  The controller automatically sets these values to 0 and 1, respectively.  The final score of a quiz is calculated by multiplying each question's score by its weight and summing over all questions.&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
Currently, minimum and maximum values cannot be set in a custom way per quiz.&lt;br /&gt;
===Solution===&lt;br /&gt;
Fields for minimum_question_score and maximum_question_score have been added to the form for creating each new quiz, and their values are passed to the controller.  The values are set per quiz, not per question, and are not restricted by any validation.&lt;br /&gt;
&lt;br /&gt;
==Error Message on Line 78==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
The original error message stated &amp;quot;Your quiz has been taken by some other students, you cannot edit it anymore.&amp;quot;&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
This error message is vague and can be easily misunderstood.&lt;br /&gt;
===Solution===&lt;br /&gt;
The error message now states &amp;quot;Your quiz has been taken by one or more students; you cannot edit it anymore.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==Creating Questions==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
When creating a quiz, there are currently three separate methods, one for each of the three question types:  true-false, radio, and multiple choice.&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
The radio and multiple-choice methods implement nearly identical, largely duplicated functionality.&lt;br /&gt;
===Solution===&lt;br /&gt;
The radio and multiple-choice methods have been combined into a single method to remove the duplication.  True-false questions remain in a separate method because their functionality differs significantly; merging them as well would require disproportionately many changes to the code and tests.&lt;br /&gt;
&lt;br /&gt;
==Files Involved==&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/controllers/quiz_questionnaires_controller.rb app/controllers/quiz_questionnaires_controller.rb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/views/questionnaires/_quiz_questionnaire.html.erb app/views/questionnaires/_quiz_questionnaire.html.erb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/views/questionnaires/view.html.erb app/views/questionnaires/view.html.erb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/spec/controllers/quiz_questionnaires_controller_spec.rb spec/controllers/quiz_questionnaires_controller_spec.rb]&lt;br /&gt;
&lt;br /&gt;
==Code Modifications==&lt;br /&gt;
The image below shows the addition of the min_/max_question_score fields to the quiz questionnaire form. &lt;br /&gt;
[[File:Quiz_form2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image below shows the values from the form used in the controller.&lt;br /&gt;
[[File:Quiz_controller.png]]&lt;br /&gt;
&lt;br /&gt;
=Testing our Solutions=&lt;br /&gt;
==Running Tests==&lt;br /&gt;
&amp;lt;code&amp;gt;rspec spec/controllers/quiz_questionnaires_controller_spec.rb&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Testing Server==&lt;br /&gt;
[http://152.7.98.81:8080 http://152.7.98.81:8080]&lt;br /&gt;
&lt;br /&gt;
===Creating New Assignment===&lt;br /&gt;
#Click on &amp;quot;Manage.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;Assignment.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;New Assignment.&amp;quot;&lt;br /&gt;
#Check &amp;quot;has quiz&amp;quot; under the &amp;quot;General&amp;quot; tab.&lt;br /&gt;
#Set number of quiz questions in &amp;quot;General&amp;quot; tab.  Note that this will not show up until after you save.&lt;br /&gt;
#Set Due Date to any date after the current date under the &amp;quot;Due Dates&amp;quot; tab.&lt;br /&gt;
#Set instructor6 as a participant under the &amp;quot;Other Stuff&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
===Creating New Quiz===&lt;br /&gt;
#Go to the &amp;quot;Assignments&amp;quot; tab.&lt;br /&gt;
#Find the assignment you created using the steps above.&lt;br /&gt;
#Go to &amp;quot;Your Work.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;Create New Quiz.&amp;quot;&lt;br /&gt;
#Fill out the form and click &amp;quot;Save.&amp;quot;  This page should show all of the settings of your quiz, including correct answers, weight, and minimum and maximum question score.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Team Information=&lt;br /&gt;
: Colleen &amp;quot;Bria&amp;quot; Engen (ceengen)&lt;br /&gt;
: Justin Kirschner (jkirsch)&lt;br /&gt;
: Darby Madewell (demadewe)&lt;br /&gt;
: '''Mentor:''' Sanket Pai (sgpai)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
*[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
*[https://github.com/Justin-Kirschner/expertiza GitHub E2068 Repository Fork]&lt;br /&gt;
*[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
*[http://research.csc.ncsu.edu/efg/expertiza Expertiza project Details]&lt;br /&gt;
*[https://www.youtube.com/channel/UCdKXzox7hrWjfOMML6FzTWg Expertiza YouTube Channel]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2068._Refactor_quiz_questionnaires_controller.rb&amp;diff=134615</id>
		<title>CSC/ECE 517 Fall 2020 - E2068. Refactor quiz questionnaires controller.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2068._Refactor_quiz_questionnaires_controller.rb&amp;diff=134615"/>
		<updated>2020-10-11T22:19:32Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Running Tests */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
This project contributes to [https://github.com/expertiza/expertiza Expertiza], an open-source project using [http://rubyonrails.org/ Ruby on Rails]. Expertiza is a platform for student learning that encourages active and cooperative learning while discouraging plagiarism. &lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
'''quiz_questionnaires_controller.rb''' handles all quiz-related functionality in Expertiza. A quiz is a type of questionnaire that lets reviewees interact with their reviewers and helps ensure that reviewers read a submission before reviewing it. The student creating a quiz is supposed to ask questions related to their work, which, ideally, a reviewer should be able to answer. (If a reviewer cannot answer the questions about the reviewed work, then we might doubt the quality of that reviewer’s review.)  This controller needs the changes detailed below.&lt;br /&gt;
&lt;br /&gt;
==Issues==&lt;br /&gt;
# Change the way min_question_score and max_question_score are set for @questionnaire on lines 39-40, as well as on lines 53-54.&lt;br /&gt;
#*These statements set the min and max scores to 0 and 1, respectively, regardless of what the user enters, which is not intended.&lt;br /&gt;
#*Change it so that the values are set according to what the user enters from the UI.&lt;br /&gt;
#Change the error message on line 78.&lt;br /&gt;
#Consider lines 259-265: different methods are called with the same parameters (question, choice_key, q_choices) to create different types of questions, depending on q_type. Consolidate this duplicated logic.&lt;br /&gt;
#Make appropriate changes to tests so that they pass after the above changes.&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Min and Max Question Score==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
Quizzes are created with a default minimum_ and maximum_question_score, the lowest and highest scores that can be obtained on a question.  The controller automatically sets these values to 0 and 1, respectively.  The final score of a quiz is calculated by multiplying each question's score by its weight and summing over all questions.&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
Currently, minimum and maximum values cannot be set in a custom way per quiz.&lt;br /&gt;
===Solution===&lt;br /&gt;
Fields for minimum_question_score and maximum_question_score have been added to the form for creating each new quiz, and their values are passed to the controller.  The values are set per quiz, not per question, and are not restricted by any validation.&lt;br /&gt;
&lt;br /&gt;
==Error Message on Line 78==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
The original error message stated &amp;quot;Your quiz has been taken by some other students, you cannot edit it anymore.&amp;quot;&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
This error message is vague and can be easily misunderstood.&lt;br /&gt;
===Solution===&lt;br /&gt;
The error message now states &amp;quot;Your quiz has been taken by one or more students; you cannot edit it anymore.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==Creating Questions==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
When creating a quiz, there are currently three separate methods, one for each of the three question types:  true-false, radio, and multiple choice.&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
The radio and multiple-choice methods implement nearly identical, largely duplicated functionality.&lt;br /&gt;
===Solution===&lt;br /&gt;
The radio and multiple-choice methods have been combined into a single method to remove the duplication.  True-false questions remain in a separate method because their functionality differs significantly; merging them as well would require disproportionately many changes to the code and tests.&lt;br /&gt;
&lt;br /&gt;
==Files Involved==&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/controllers/quiz_questionnaires_controller.rb app/controllers/quiz_questionnaires_controller.rb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/views/questionnaires/_quiz_questionnaire.html.erb app/views/questionnaires/_quiz_questionnaire.html.erb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/views/questionnaires/view.html.erb app/views/questionnaires/view.html.erb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/spec/controllers/quiz_questionnaires_controller_spec.rb spec/controllers/quiz_questionnaires_controller_spec.rb]&lt;br /&gt;
&lt;br /&gt;
==Code Modifications==&lt;br /&gt;
The image below shows the addition of the min_/max_question_score fields to the quiz questionnaire form. &lt;br /&gt;
[[File:Quiz_form2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image below shows the values from the form used in the controller.&lt;br /&gt;
[[File:Quiz_controller.png]]&lt;br /&gt;
&lt;br /&gt;
=Testing our Solutions=&lt;br /&gt;
==Running Tests==&lt;br /&gt;
&amp;lt;code&amp;gt;rspec spec/controllers/quiz_questionnaires_controller_spec.rb&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Testing Server==&lt;br /&gt;
[http://152.7.98.81:8080 http://152.7.98.81:8080]&lt;br /&gt;
&lt;br /&gt;
===Creating New Assignment===&lt;br /&gt;
#Click on &amp;quot;Manage.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;Assignment.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;New Assignment.&amp;quot;&lt;br /&gt;
#Check &amp;quot;has quiz&amp;quot; under the &amp;quot;General&amp;quot; tab.&lt;br /&gt;
#Set number of quiz questions in &amp;quot;General&amp;quot; tab.  Note that this will not show up until after you save.&lt;br /&gt;
#Set Due Date to any date after the current date under the &amp;quot;Due Dates&amp;quot; tab.&lt;br /&gt;
#Set instructor6 as a participant under the &amp;quot;Other Stuff&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
===Creating New Quiz===&lt;br /&gt;
#Go to the &amp;quot;Assignments&amp;quot; tab.&lt;br /&gt;
#Find the assignment you created using the steps above.&lt;br /&gt;
#Go to &amp;quot;Your Work.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;Create New Quiz.&amp;quot;&lt;br /&gt;
#Fill out the form and click &amp;quot;Save.&amp;quot;  This page should show all of the settings of your quiz, including correct answers, weight, and minimum and maximum question score.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Team Information=&lt;br /&gt;
: Colleen &amp;quot;Bria&amp;quot; Engen (ceengen)&lt;br /&gt;
: Justin Kirschner (jkirsch)&lt;br /&gt;
: Darby Madewell (demadewe)&lt;br /&gt;
: '''Mentor:''' Sanket Pai (sgpai)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
*[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
*[https://github.com/Justin-Kirschner/expertiza GitHub E2068 Repository Fork]&lt;br /&gt;
*[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
*[http://research.csc.ncsu.edu/efg/expertiza Expertiza project Details]&lt;br /&gt;
*[https://www.youtube.com/channel/UCdKXzox7hrWjfOMML6FzTWg Expertiza YouTube Channel]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2068._Refactor_quiz_questionnaires_controller.rb&amp;diff=134614</id>
		<title>CSC/ECE 517 Fall 2020 - E2068. Refactor quiz questionnaires controller.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2068._Refactor_quiz_questionnaires_controller.rb&amp;diff=134614"/>
		<updated>2020-10-11T22:19:15Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Running Tests */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
This project contributes to [https://github.com/expertiza/expertiza Expertiza], an open-source project using [http://rubyonrails.org/ Ruby on Rails]. Expertiza is a platform for student learning that encourages active and cooperative learning while discouraging plagiarism. &lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
'''quiz_questionnaires_controller.rb''' handles all quiz-related functionality in Expertiza. A quiz is a type of questionnaire that lets reviewees interact with their reviewers and helps ensure that reviewers read a submission before reviewing it. The student creating a quiz is supposed to ask questions related to their work, which, ideally, a reviewer should be able to answer. (If a reviewer cannot answer the questions about the reviewed work, then we might doubt the quality of that reviewer’s review.)  This controller needs the changes detailed below.&lt;br /&gt;
&lt;br /&gt;
==Issues==&lt;br /&gt;
# Change the way min_question_score and max_question_score are set for @questionnaire on lines 39-40, as well as on lines 53-54.&lt;br /&gt;
#*These statements set the min and max scores to 0 and 1, respectively, regardless of what the user enters, which is not intended.&lt;br /&gt;
#*Change it so that the values are set according to what the user enters from the UI.&lt;br /&gt;
#Change the error message on line 78.&lt;br /&gt;
#Consider lines 259-265: different methods are called with the same parameters (question, choice_key, q_choices) to create different types of questions, depending on q_type. Consolidate this duplicated logic.&lt;br /&gt;
#Make appropriate changes to tests so that they pass after the above changes.&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Min and Max Question Score==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
Quizzes are created with a default minimum_ and maximum_question_score, the lowest and highest scores that can be obtained on a question.  The controller automatically sets these values to 0 and 1, respectively.  The final score of a quiz is calculated by multiplying each question's score by its weight and summing over all questions.&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
Currently, minimum and maximum values cannot be set in a custom way per quiz.&lt;br /&gt;
===Solution===&lt;br /&gt;
Fields for minimum_question_score and maximum_question_score have been added to the form for creating each new quiz, and their values are passed to the controller.  The values are set per quiz, not per question, and are not restricted by any validation.&lt;br /&gt;
&lt;br /&gt;
==Error Message on Line 78==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
The original error message stated &amp;quot;Your quiz has been taken by some other students, you cannot edit it anymore.&amp;quot;&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
This error message is vague and can be easily misunderstood.&lt;br /&gt;
===Solution===&lt;br /&gt;
The error message now states &amp;quot;Your quiz has been taken by one or more students; you cannot edit it anymore.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==Creating Questions==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
When creating a quiz, there are currently three separate methods, one for each of the three question types:  true-false, radio, and multiple choice.&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
The radio and multiple-choice methods implement nearly identical, largely duplicated functionality.&lt;br /&gt;
===Solution===&lt;br /&gt;
The radio and multiple-choice methods have been combined into a single method to remove the duplication.  True-false questions remain in a separate method because their functionality differs significantly; merging them as well would require disproportionately many changes to the code and tests.&lt;br /&gt;
&lt;br /&gt;
==Files Involved==&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/controllers/quiz_questionnaires_controller.rb app/controllers/quiz_questionnaires_controller.rb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/views/questionnaires/_quiz_questionnaire.html.erb app/views/questionnaires/_quiz_questionnaire.html.erb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/views/questionnaires/view.html.erb app/views/questionnaires/view.html.erb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/spec/controllers/quiz_questionnaires_controller_spec.rb spec/controllers/quiz_questionnaires_controller_spec.rb]&lt;br /&gt;
&lt;br /&gt;
==Code Modifications==&lt;br /&gt;
The image below shows the addition of the min_/max_question_score fields to the quiz questionnaire form. &lt;br /&gt;
[[File:Quiz_form2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image below shows the values from the form used in the controller.&lt;br /&gt;
[[File:Quiz_controller.png]]&lt;br /&gt;
&lt;br /&gt;
=Testing our Solutions=&lt;br /&gt;
==Running Tests==&lt;br /&gt;
&amp;lt;code&amp;gt;rspec spec/controllers/quiz_questionnaires_controller_spec.rb&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Testing Server==&lt;br /&gt;
[http://152.7.98.81:8080 http://152.7.98.81:8080]&lt;br /&gt;
&lt;br /&gt;
===Creating New Assignment===&lt;br /&gt;
#Click on &amp;quot;Manage.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;Assignment.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;New Assignment.&amp;quot;&lt;br /&gt;
#Check &amp;quot;has quiz&amp;quot; under the &amp;quot;General&amp;quot; tab.&lt;br /&gt;
#Set number of quiz questions in &amp;quot;General&amp;quot; tab.  Note that this will not show up until after you save.&lt;br /&gt;
#Set Due Date to any date after the current date under the &amp;quot;Due Dates&amp;quot; tab.&lt;br /&gt;
#Set instructor6 as a participant under the &amp;quot;Other Stuff&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
===Creating New Quiz===&lt;br /&gt;
#Go to the &amp;quot;Assignments&amp;quot; tab.&lt;br /&gt;
#Find the assignment you created using the steps above.&lt;br /&gt;
#Go to &amp;quot;Your Work.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;Create New Quiz.&amp;quot;&lt;br /&gt;
#Fill out the form and click &amp;quot;Save.&amp;quot;  This page should show all of the settings of your quiz, including correct answers, weight, and minimum and maximum question score.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Team Information=&lt;br /&gt;
: Colleen &amp;quot;Bria&amp;quot; Engen (ceengen)&lt;br /&gt;
: Justin Kirschner (jkirsch)&lt;br /&gt;
: Darby Madewell (demadewe)&lt;br /&gt;
: '''Mentor:''' Sanket Pai (sgpai)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
*[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
*[https://github.com/Justin-Kirschner/expertiza GitHub E2068 Repository Fork]&lt;br /&gt;
*[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
*[http://research.csc.ncsu.edu/efg/expertiza Expertiza project Details]&lt;br /&gt;
*[https://www.youtube.com/channel/UCdKXzox7hrWjfOMML6FzTWg Expertiza YouTube Channel]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2068._Refactor_quiz_questionnaires_controller.rb&amp;diff=134613</id>
		<title>CSC/ECE 517 Fall 2020 - E2068. Refactor quiz questionnaires controller.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2068._Refactor_quiz_questionnaires_controller.rb&amp;diff=134613"/>
		<updated>2020-10-11T22:18:55Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Code Modifications */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
This project contributes to [https://github.com/expertiza/expertiza Expertiza], an open-source project using [http://rubyonrails.org/ Ruby on Rails]. Expertiza is a platform for student learning that encourages active and cooperative learning while discouraging plagiarism. &lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
'''quiz_questionnaires_controller.rb''' handles all quiz-related functionality in Expertiza. A quiz is a type of questionnaire that lets reviewees interact with their reviewers and helps ensure that reviewers read a submission before reviewing it. The student creating a quiz is supposed to ask questions related to their work, which, ideally, a reviewer should be able to answer. (If a reviewer cannot answer the questions about the reviewed work, then we might doubt the quality of that reviewer’s review.)  This controller needs the changes detailed below.&lt;br /&gt;
&lt;br /&gt;
==Issues==&lt;br /&gt;
# Change the way min_question_score and max_question_score are set for @questionnaire on lines 39-40, as well as on lines 53-54.&lt;br /&gt;
#*These statements set the min and max scores to 0 and 1, respectively, regardless of what the user enters, which is not intended.&lt;br /&gt;
#*Change it so that the values are set according to what the user enters from the UI.&lt;br /&gt;
#Change the error message on line 78.&lt;br /&gt;
#Consider lines 259-265: different methods are called with the same parameters (question, choice_key, q_choices) to create different types of questions, depending on q_type. Consolidate this duplicated logic.&lt;br /&gt;
#Make appropriate changes to tests so that they pass after the above changes.&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Min and Max Question Score==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
Quizzes are created with a default minimum_ and maximum_question_score, the lowest and highest scores that can be obtained on a question.  The controller automatically sets these values to 0 and 1, respectively.  The final score of a quiz is calculated by multiplying each question's score by its weight and summing over all questions.&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
Currently, minimum and maximum values cannot be set in a custom way per quiz.&lt;br /&gt;
===Solution===&lt;br /&gt;
Fields for minimum_question_score and maximum_question_score have been added to the form for creating each new quiz, and their values are passed to the controller.  The values are set per quiz, not per question, and are not restricted by any validation.&lt;br /&gt;
&lt;br /&gt;
==Error Message on Line 78==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
The original error message stated &amp;quot;Your quiz has been taken by some other students, you cannot edit it anymore.&amp;quot;&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
This error message is vague and can be easily misunderstood.&lt;br /&gt;
===Solution===&lt;br /&gt;
The error message now states &amp;quot;Your quiz has been taken by one or more students; you cannot edit it anymore.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==Creating Questions==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
When creating a quiz, there are currently three different methods, one for each of the three question types:  true-false, radio, and multiple choice.&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
The radio and multiple-choice methods implement very similar, almost duplicated, functionality.&lt;br /&gt;
===Solution===&lt;br /&gt;
Radio and multiple-choice questions have been combined into a single method to remove the duplicated functionality.  True-false questions remain separate, because their functionality is significantly different; merging them as well would require too many changes in the code and tests.&lt;br /&gt;
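The shape of this refactoring can be sketched as below; the type names and fields are illustrative, not the exact ones in Expertiza:&lt;br /&gt;

```ruby
# Sketch of the refactoring: radio and multiple-choice (checkbox) questions
# share one creation path, while true-false keeps its own, since its logic
# differs. Type names and fields are illustrative, not Expertiza's exact ones.
def build_choice(q_type, question, choice_key, q_choices)
  choice = q_choices[choice_key]
  { type: q_type, question: question, text: choice[:txt], correct: choice[:iscorrect] == "1" }
end

def build_question(q_type, question, choice_key, q_choices)
  if q_type == "TrueFalse"
    # True-false stores the correct answer directly, so it keeps its own path.
    { type: q_type, question: question, correct: q_choices[choice_key][:iscorrect] == "True" }
  else
    # Radio and checkbox questions share identical per-choice logic.
    build_choice(q_type, question, choice_key, q_choices)
  end
end

choices = { "1" => { txt: "Answer A", iscorrect: "1" } }
q = build_question("MultipleChoiceRadio", "Which option applies?", "1", choices)
```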
&lt;br /&gt;
==Files Involved==&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/controllers/quiz_questionnaires_controller.rb app/controllers/quiz_questionnaires_controller.rb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/views/questionnaires/_quiz_questionnaire.html.erb app/views/questionnaires/_quiz_questionnaire.html.erb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/views/questionnaires/view.html.erb app/views/questionnaires/view.html.erb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/spec/controllers/quiz_questionnaires_controller_spec.rb spec/controllers/quiz_questionnaires_controller_spec.rb]&lt;br /&gt;
&lt;br /&gt;
==Code Modifications==&lt;br /&gt;
The image below shows the addition of the min_/max_question_score fields to the quiz questionnaire form. &lt;br /&gt;
[[File:Quiz_form2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image below shows the values from the form used in the controller.&lt;br /&gt;
[[File:Quiz_controller.png]]&lt;br /&gt;
&lt;br /&gt;
=Testing our Solutions=&lt;br /&gt;
==Running Tests==&lt;br /&gt;
&amp;lt;code&amp;gt;rspec spec/controllers/quiz_questionnaires_controller_spec.rb&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Testing Server==&lt;br /&gt;
[http://152.7.98.81:8080 http://152.7.98.81:8080]&lt;br /&gt;
&lt;br /&gt;
===Creating New Assignment===&lt;br /&gt;
#Click on &amp;quot;Manage.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;Assignment.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;New Assignment.&amp;quot;&lt;br /&gt;
#Check &amp;quot;has quiz&amp;quot; under the &amp;quot;General&amp;quot; tab.&lt;br /&gt;
#Set the number of quiz questions in the &amp;quot;General&amp;quot; tab.  Note that this field will not show up until after you save.&lt;br /&gt;
#Set the Due Date to any date after the current date under the &amp;quot;Due Dates&amp;quot; tab.&lt;br /&gt;
#Set instructor6 as a participant under the &amp;quot;Other Stuff&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
===Creating New Quiz===&lt;br /&gt;
#Go to the &amp;quot;Assignments&amp;quot; tab.&lt;br /&gt;
#Find the assignment you created using the steps above.&lt;br /&gt;
#Go to &amp;quot;Your Work.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;Create New Quiz.&amp;quot;&lt;br /&gt;
#Fill out the form and click &amp;quot;Save.&amp;quot;  This page should show all of the settings of your quiz, including correct answers, weight, and minimum and maximum question score.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Team Information=&lt;br /&gt;
: Colleen &amp;quot;Bria&amp;quot; Engen (ceengen)&lt;br /&gt;
: Justin Kirschner (jkirsch)&lt;br /&gt;
: Darby Madewell (demadewe)&lt;br /&gt;
: '''Mentor:''' Sanket Pai (sgpai)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
*[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
*[https://github.com/Justin-Kirschner/expertiza GitHub E2068 Repository Fork]&lt;br /&gt;
*[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
*[http://research.csc.ncsu.edu/efg/expertiza Expertiza project Details]&lt;br /&gt;
*[https://www.youtube.com/channel/UCdKXzox7hrWjfOMML6FzTWg Expertiza YouTube Channel]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2068._Refactor_quiz_questionnaires_controller.rb&amp;diff=134612</id>
		<title>CSC/ECE 517 Fall 2020 - E2068. Refactor quiz questionnaires controller.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2068._Refactor_quiz_questionnaires_controller.rb&amp;diff=134612"/>
		<updated>2020-10-11T22:18:46Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Solution */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
This project contributes to [https://github.com/expertiza/expertiza Expertiza], an open-source project using [http://rubyonrails.org/ Ruby on Rails]. Expertiza is a platform for student learning that encourages active and cooperative learning while discouraging plagiarism. &lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
'''quiz_questionnaires_controller.rb''' is used in Expertiza to handle all functionality related to quizzes. A quiz is a type of questionnaire that allows reviewees to interact with their reviewers and helps ensure that reviewers read the submissions before reviewing them. The student creating a quiz is supposed to ask questions related to their work, which, ideally, a reviewer should be able to answer. (If a reviewer cannot answer the questions about the reviewed work, then we might doubt the quality of that reviewer’s review.)  This controller needs some changes, as detailed below.&lt;br /&gt;
&lt;br /&gt;
==Issues==&lt;br /&gt;
# Change the way min_question_score and max_question_score are set for @questionnaire on lines 39-40, as well as on lines 53-54.&lt;br /&gt;
#*These statements set the min and max scores to 0 and 1, respectively, regardless of what the user enters, which is not intended.&lt;br /&gt;
#*Change it so that the values are set according to what the user enters from the UI.&lt;br /&gt;
#Change the error message on line 78:&lt;br /&gt;
#Consider lines 259-265, different methods are called with the same parameters (question, choice_key, q_choices) to create different types of questions, depending on q_type.&lt;br /&gt;
#Make appropriate changes to tests so that they pass after the above changes.&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Min and Max Question Score==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
Quizzes are created with default minimum_ and maximum_question_score values, which are the minimum and maximum scores that can be obtained.  The controller automatically sets these values to 0 and 1, respectively.  The final score of a quiz is calculated by multiplying each question's weight by its score and summing the results over all questions.&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
Currently, the minimum and maximum question scores cannot be customized per quiz.&lt;br /&gt;
===Solution===&lt;br /&gt;
Fields for minimum_question_score and maximum_question_score have been added to the form for creating each new quiz, and their values are subsequently passed to the controller.  The values are set per quiz, not per question, and are not restricted.&lt;br /&gt;
&lt;br /&gt;
==Error Message on Line 78==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
The original error message stated &amp;quot;Your quiz has been taken by some other students, you cannot edit it anymore.&amp;quot;&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
This error message is vague and can be easily misunderstood.&lt;br /&gt;
===Solution===&lt;br /&gt;
The error message now states &amp;quot;Your quiz has been taken by one or more students; you cannot edit it anymore.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==Creating Questions==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
When creating a quiz, there are currently three different methods, one for each of the three question types:  true-false, radio, and multiple choice.&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
The radio and multiple-choice methods implement very similar, almost duplicated, functionality.&lt;br /&gt;
===Solution===&lt;br /&gt;
Radio and multiple-choice questions have been combined into a single method to remove the duplicated functionality.  True-false questions remain separate, because their functionality is significantly different; merging them as well would require too many changes in the code and tests.&lt;br /&gt;
&lt;br /&gt;
==Files Involved==&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/controllers/quiz_questionnaires_controller.rb app/controllers/quiz_questionnaires_controller.rb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/views/questionnaires/_quiz_questionnaire.html.erb app/views/questionnaires/_quiz_questionnaire.html.erb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/views/questionnaires/view.html.erb app/views/questionnaires/view.html.erb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/spec/controllers/quiz_questionnaires_controller_spec.rb spec/controllers/quiz_questionnaires_controller_spec.rb]&lt;br /&gt;
&lt;br /&gt;
==Code Modifications==&lt;br /&gt;
&amp;lt;insert git commit images&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Testing our Solutions=&lt;br /&gt;
==Running Tests==&lt;br /&gt;
&amp;lt;code&amp;gt;rspec spec/controllers/quiz_questionnaires_controller_spec.rb&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Testing Server==&lt;br /&gt;
[http://152.7.98.81:8080 http://152.7.98.81:8080]&lt;br /&gt;
&lt;br /&gt;
===Creating New Assignment===&lt;br /&gt;
#Click on &amp;quot;Manage.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;Assignment.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;New Assignment.&amp;quot;&lt;br /&gt;
#Check &amp;quot;has quiz&amp;quot; under the &amp;quot;General&amp;quot; tab.&lt;br /&gt;
#Set the number of quiz questions in the &amp;quot;General&amp;quot; tab.  Note that this field will not show up until after you save.&lt;br /&gt;
#Set the Due Date to any date after the current date under the &amp;quot;Due Dates&amp;quot; tab.&lt;br /&gt;
#Set instructor6 as a participant under the &amp;quot;Other Stuff&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
===Creating New Quiz===&lt;br /&gt;
#Go to the &amp;quot;Assignments&amp;quot; tab.&lt;br /&gt;
#Find the assignment you created using the steps above.&lt;br /&gt;
#Go to &amp;quot;Your Work.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;Create New Quiz.&amp;quot;&lt;br /&gt;
#Fill out the form and click &amp;quot;Save.&amp;quot;  This page should show all of the settings of your quiz, including correct answers, weight, and minimum and maximum question score.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Team Information=&lt;br /&gt;
: Colleen &amp;quot;Bria&amp;quot; Engen (ceengen)&lt;br /&gt;
: Justin Kirschner (jkirsch)&lt;br /&gt;
: Darby Madewell (demadewe)&lt;br /&gt;
: '''Mentor:''' Sanket Pai (sgpai)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
*[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
*[https://github.com/Justin-Kirschner/expertiza GitHub E2068 Repository Fork]&lt;br /&gt;
*[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
*[http://research.csc.ncsu.edu/efg/expertiza Expertiza project Details]&lt;br /&gt;
*[https://www.youtube.com/channel/UCdKXzox7hrWjfOMML6FzTWg Expertiza YouTube Channel]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2068._Refactor_quiz_questionnaires_controller.rb&amp;diff=134611</id>
		<title>CSC/ECE 517 Fall 2020 - E2068. Refactor quiz questionnaires controller.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2068._Refactor_quiz_questionnaires_controller.rb&amp;diff=134611"/>
		<updated>2020-10-11T22:10:14Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Solution */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
This project contributes to [https://github.com/expertiza/expertiza Expertiza], an open-source project using [http://rubyonrails.org/ Ruby on Rails]. Expertiza is a platform for student learning that encourages active and cooperative learning while discouraging plagiarism. &lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
'''quiz_questionnaires_controller.rb''' is used in Expertiza to handle all functionality related to quizzes. A quiz is a type of questionnaire that allows reviewees to interact with their reviewers and helps ensure that reviewers read the submissions before reviewing them. The student creating a quiz is supposed to ask questions related to their work, which, ideally, a reviewer should be able to answer. (If a reviewer cannot answer the questions about the reviewed work, then we might doubt the quality of that reviewer’s review.)  This controller needs some changes, as detailed below.&lt;br /&gt;
&lt;br /&gt;
==Issues==&lt;br /&gt;
# Change the way min_question_score and max_question_score are set for @questionnaire on lines 39-40, as well as on lines 53-54.&lt;br /&gt;
#*These statements set the min and max scores to 0 and 1, respectively, regardless of what the user enters, which is not intended.&lt;br /&gt;
#*Change it so that the values are set according to what the user enters from the UI.&lt;br /&gt;
#Change the error message on line 78:&lt;br /&gt;
#Consider lines 259-265, different methods are called with the same parameters (question, choice_key, q_choices) to create different types of questions, depending on q_type.&lt;br /&gt;
#Make appropriate changes to tests so that they pass after the above changes.&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Min and Max Question Score==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
Quizzes are created with default minimum_ and maximum_question_score values, which are the minimum and maximum scores that can be obtained.  The controller automatically sets these values to 0 and 1, respectively.  The final score of a quiz is calculated by multiplying each question's weight by its score and summing the results over all questions.&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
Currently, the minimum and maximum question scores cannot be customized per quiz.&lt;br /&gt;
===Solution===&lt;br /&gt;
Fields for minimum_question_score and maximum_question_score have been added to the form for creating each new quiz, and their values are subsequently passed to the controller.  The values are set per quiz, not per question, and are not restricted.&lt;br /&gt;
&lt;br /&gt;
The image below shows the addition of the min_/max_question_score fields to the quiz questionnaire form. &lt;br /&gt;
[[File:Quiz_form2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image below shows the values from the form used in the controller.&lt;br /&gt;
[[File:Quiz_controller.png]]&lt;br /&gt;
&lt;br /&gt;
==Error Message on Line 78==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
The original error message stated &amp;quot;Your quiz has been taken by some other students, you cannot edit it anymore.&amp;quot;&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
This error message is vague and can be easily misunderstood.&lt;br /&gt;
===Solution===&lt;br /&gt;
The error message now states &amp;quot;Your quiz has been taken by one or more students; you cannot edit it anymore.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==Creating Questions==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
When creating a quiz, there are currently three different methods, one for each of the three question types:  true-false, radio, and multiple choice.&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
The radio and multiple-choice methods implement very similar, almost duplicated, functionality.&lt;br /&gt;
===Solution===&lt;br /&gt;
Radio and multiple-choice questions have been combined into a single method to remove the duplicated functionality.  True-false questions remain separate, because their functionality is significantly different; merging them as well would require too many changes in the code and tests.&lt;br /&gt;
&lt;br /&gt;
==Files Involved==&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/controllers/quiz_questionnaires_controller.rb app/controllers/quiz_questionnaires_controller.rb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/views/questionnaires/_quiz_questionnaire.html.erb app/views/questionnaires/_quiz_questionnaire.html.erb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/views/questionnaires/view.html.erb app/views/questionnaires/view.html.erb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/spec/controllers/quiz_questionnaires_controller_spec.rb spec/controllers/quiz_questionnaires_controller_spec.rb]&lt;br /&gt;
&lt;br /&gt;
==Code Modifications==&lt;br /&gt;
&amp;lt;insert git commit images&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Testing our Solutions=&lt;br /&gt;
==Running Tests==&lt;br /&gt;
&amp;lt;code&amp;gt;rspec spec/controllers/quiz_questionnaires_controller_spec.rb&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Testing Server==&lt;br /&gt;
[http://152.7.98.81:8080 http://152.7.98.81:8080]&lt;br /&gt;
&lt;br /&gt;
===Creating New Assignment===&lt;br /&gt;
#Click on &amp;quot;Manage.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;Assignment.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;New Assignment.&amp;quot;&lt;br /&gt;
#Check &amp;quot;has quiz&amp;quot; under the &amp;quot;General&amp;quot; tab.&lt;br /&gt;
#Set the number of quiz questions in the &amp;quot;General&amp;quot; tab.  Note that this field will not show up until after you save.&lt;br /&gt;
#Set the Due Date to any date after the current date under the &amp;quot;Due Dates&amp;quot; tab.&lt;br /&gt;
#Set instructor6 as a participant under the &amp;quot;Other Stuff&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
===Creating New Quiz===&lt;br /&gt;
#Go to the &amp;quot;Assignments&amp;quot; tab.&lt;br /&gt;
#Find the assignment you created using the steps above.&lt;br /&gt;
#Go to &amp;quot;Your Work.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;Create New Quiz.&amp;quot;&lt;br /&gt;
#Fill out the form and click &amp;quot;Save.&amp;quot;  This page should show all of the settings of your quiz, including correct answers, weight, and minimum and maximum question score.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Team Information=&lt;br /&gt;
: Colleen &amp;quot;Bria&amp;quot; Engen (ceengen)&lt;br /&gt;
: Justin Kirschner (jkirsch)&lt;br /&gt;
: Darby Madewell (demadewe)&lt;br /&gt;
: '''Mentor:''' Sanket Pai (sgpai)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
*[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
*[https://github.com/Justin-Kirschner/expertiza GitHub E2068 Repository Fork]&lt;br /&gt;
*[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
*[http://research.csc.ncsu.edu/efg/expertiza Expertiza project Details]&lt;br /&gt;
*[https://www.youtube.com/channel/UCdKXzox7hrWjfOMML6FzTWg Expertiza YouTube Channel]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Quiz_controller.png&amp;diff=134610</id>
		<title>File:Quiz controller.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Quiz_controller.png&amp;diff=134610"/>
		<updated>2020-10-11T22:09:39Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2068._Refactor_quiz_questionnaires_controller.rb&amp;diff=134609</id>
		<title>CSC/ECE 517 Fall 2020 - E2068. Refactor quiz questionnaires controller.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2020_-_E2068._Refactor_quiz_questionnaires_controller.rb&amp;diff=134609"/>
		<updated>2020-10-11T22:08:18Z</updated>

		<summary type="html">&lt;p&gt;Jkirsch: /* Solution */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
This project contributes to [https://github.com/expertiza/expertiza Expertiza], an open-source project using [http://rubyonrails.org/ Ruby on Rails]. Expertiza is a platform for student learning that encourages active and cooperative learning while discouraging plagiarism. &lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
'''quiz_questionnaires_controller.rb''' is used in Expertiza to handle all functionality related to quizzes. A quiz is a type of questionnaire that allows reviewees to interact with their reviewers and helps ensure that reviewers read the submissions before reviewing them. The student creating a quiz is supposed to ask questions related to their work, which, ideally, a reviewer should be able to answer. (If a reviewer cannot answer the questions about the reviewed work, then we might doubt the quality of that reviewer’s review.)  This controller needs some changes, as detailed below.&lt;br /&gt;
&lt;br /&gt;
==Issues==&lt;br /&gt;
# Change the way min_question_score and max_question_score are set for @questionnaire on lines 39-40, as well as on lines 53-54.&lt;br /&gt;
#*These statements set the min and max scores to 0 and 1, respectively, regardless of what the user enters, which is not intended.&lt;br /&gt;
#*Change it so that the values are set according to what the user enters from the UI.&lt;br /&gt;
#Change the error message on line 78:&lt;br /&gt;
#Consider lines 259-265, different methods are called with the same parameters (question, choice_key, q_choices) to create different types of questions, depending on q_type.&lt;br /&gt;
#Make appropriate changes to tests so that they pass after the above changes.&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Min and Max Question Score==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
Quizzes are created with default minimum_ and maximum_question_score values, which are the minimum and maximum scores that can be obtained.  The controller automatically sets these values to 0 and 1, respectively.  The final score of a quiz is calculated by multiplying each question's weight by its score and summing the results over all questions.&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
Currently, the minimum and maximum question scores cannot be customized per quiz.&lt;br /&gt;
===Solution===&lt;br /&gt;
Fields for minimum_question_score and maximum_question_score have been added to the form for creating each new quiz, and their values are subsequently passed to the controller.  The values are set per quiz, not per question, and are not restricted.&lt;br /&gt;
&lt;br /&gt;
The image below shows the addition of the min_/max_question_score fields to the quiz questionnaire form. &lt;br /&gt;
[[File:Quiz_form2.png]]&lt;br /&gt;
&lt;br /&gt;
The image below shows the values from the form used in the controller.&lt;br /&gt;
[[File:Quiz_controller.png]]&lt;br /&gt;
&lt;br /&gt;
==Error Message on Line 78==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
The original error message stated &amp;quot;Your quiz has been taken by some other students, you cannot edit it anymore.&amp;quot;&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
This error message is vague and can be easily misunderstood.&lt;br /&gt;
===Solution===&lt;br /&gt;
The error message now states &amp;quot;Your quiz has been taken by one or more students; you cannot edit it anymore.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==Creating Questions==&lt;br /&gt;
===Current Implementation===&lt;br /&gt;
When creating a quiz, there are currently three different methods, one for each of the three question types:  true-false, radio, and multiple choice.&lt;br /&gt;
===Problem(s)===&lt;br /&gt;
The radio and multiple-choice methods implement very similar, almost duplicated, functionality.&lt;br /&gt;
===Solution===&lt;br /&gt;
Radio and multiple-choice questions have been combined into a single method to remove the duplicated functionality.  True-false questions remain separate, because their functionality is significantly different; merging them as well would require too many changes in the code and tests.&lt;br /&gt;
&lt;br /&gt;
==Files Involved==&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/controllers/quiz_questionnaires_controller.rb app/controllers/quiz_questionnaires_controller.rb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/views/questionnaires/_quiz_questionnaire.html.erb app/views/questionnaires/_quiz_questionnaire.html.erb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/app/views/questionnaires/view.html.erb app/views/questionnaires/view.html.erb]&lt;br /&gt;
: [https://github.com/Justin-Kirschner/expertiza/blob/beta/spec/controllers/quiz_questionnaires_controller_spec.rb spec/controllers/quiz_questionnaires_controller_spec.rb]&lt;br /&gt;
&lt;br /&gt;
==Code Modifications==&lt;br /&gt;
&amp;lt;insert git commit images&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Testing our Solutions=&lt;br /&gt;
==Running Tests==&lt;br /&gt;
&amp;lt;code&amp;gt;rspec spec/controllers/quiz_questionnaires_controller_spec.rb&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Testing Server==&lt;br /&gt;
[http://152.7.98.81:8080 http://152.7.98.81:8080]&lt;br /&gt;
&lt;br /&gt;
===Creating New Assignment===&lt;br /&gt;
#Click on &amp;quot;Manage.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;Assignment.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;New Assignment.&amp;quot;&lt;br /&gt;
#Check &amp;quot;has quiz&amp;quot; under the &amp;quot;General&amp;quot; tab.&lt;br /&gt;
#Set the number of quiz questions in the &amp;quot;General&amp;quot; tab.  Note that this field will not show up until after you save.&lt;br /&gt;
#Set the Due Date to any date after the current date under the &amp;quot;Due Dates&amp;quot; tab.&lt;br /&gt;
#Set instructor6 as a participant under the &amp;quot;Other Stuff&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
===Creating New Quiz===&lt;br /&gt;
#Go to the &amp;quot;Assignments&amp;quot; tab.&lt;br /&gt;
#Find the assignment you created using the steps above.&lt;br /&gt;
#Go to &amp;quot;Your Work.&amp;quot;&lt;br /&gt;
#Click on &amp;quot;Create New Quiz.&amp;quot;&lt;br /&gt;
#Fill out the form and click &amp;quot;Save.&amp;quot;  This page should show all of the settings of your quiz, including correct answers, weight, and minimum and maximum question score.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Team Information=&lt;br /&gt;
: Colleen &amp;quot;Bria&amp;quot; Engen (ceengen)&lt;br /&gt;
: Justin Kirschner (jkirsch)&lt;br /&gt;
: Darby Madewell (demadewe)&lt;br /&gt;
: '''Mentor:''' Sanket Pai (sgpai)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
*[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
*[https://github.com/Justin-Kirschner/expertiza GitHub E2068 Repository Fork]&lt;br /&gt;
*[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
*[http://research.csc.ncsu.edu/efg/expertiza Expertiza project Details]&lt;br /&gt;
*[https://www.youtube.com/channel/UCdKXzox7hrWjfOMML6FzTWg Expertiza YouTube Channel]&lt;/div&gt;</summary>
		<author><name>Jkirsch</name></author>
	</entry>
</feed>