<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.expertiza.ncsu.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jcui23</id>
	<title>Expertiza_Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.expertiza.ncsu.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jcui23"/>
	<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=Special:Contributions/Jcui23"/>
	<updated>2026-05-16T14:18:51Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.41.0</generator>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142415</id>
		<title>CSC/ECE 517 Fall 2021 - E2168. Testing - Reputations</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142415"/>
		<updated>2021-11-30T09:52:06Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* Future Tasks */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;q&amp;gt;Online peer-review systems are now in common use in higher education. They free the instructor and course staff from having to provide personally all the feedback that students receive on their work. However, if we want to assure that all students receive competent feedback, or even use peer-assigned grades, we need a way to judge which peer reviewers are most credible. The solution is to use a reputation system.&amp;lt;/q&amp;gt; &amp;lt;br&amp;gt; The reputation system is meant to assign an objective value to student-assigned peer-review scores. Students select from a list of tasks to be performed, then prepare their work and submit it to a peer-review system. The work is then reviewed by other students, who offer comments and graded feedback to help the submitters improve their work.&lt;br /&gt;
During the peer-review period it is important to determine which reviews are more accurate and of higher quality. Reputation is one way to achieve this goal; it is a quantitative measure of how reliable each peer reviewer is.&lt;br /&gt;
Peer reviewers can use Expertiza to score an author. If Expertiza shows a confidence rating for grades based upon the reviewer's reputation, then authors can more easily judge the legitimacy of the peer-assigned score. In addition, the teaching staff can examine the quality of each peer review based on reputation values and, potentially, crowd-source a significant portion of the grading function.&lt;br /&gt;
Currently the reputation system is implemented in Expertiza through a web service, but no tests have been written for it. Thus our goal is to set up assignments and reviews that would produce specific reputation scores, and to test that the correct reputations are in fact being produced.&lt;br /&gt;
&lt;br /&gt;
=== System Design ===&lt;br /&gt;
The diagram below is taken from project E1625; it gives an overall description of the reputation system.&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Diagram E1625.png|1000px]]&amp;lt;br&amp;gt;&lt;br /&gt;
There are two algorithms intended for use in calculation of the reputation values for participants. &lt;br /&gt;
&lt;br /&gt;
There is a [https://expertiza.ncsu.edu/reputation_web_service/client web service] (the link is accessible only to instructors) which serves a JSON response containing the reputation value, seeded with the last known reputation value that we store in the ''participants'' table. An instructor can specify which algorithm to use for a particular assignment to calculate the confidence rating. &lt;br /&gt;
&lt;br /&gt;
As the [https://docs.google.com/viewer?url=https%3A%2F%2Fwww.researchgate.net%2Fprofile%2FYang_Song36%2Fpublication%2F289528736_Pluggable_Reputation_Systems_for_Peer_Review_a_Web-Service_Approach%2Flinks%2F568ec8d008ae78cc05160aed.pdf%3FinViewer%3D0%26pdfJsDownload%3D0%26origin%3Dpublication_detail paper] on reputation systems observes, “the Hamer-peer algorithm has the lowest maximum absolute bias and the Lauw-peer algorithm has the lowest overall bias. This indicates, from the instructor’s perspective, if there are further assignments of this kind, expert grading may not be necessary.”&lt;br /&gt;
:;Reputation range of Hamer’s algorithm is:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.5 and &amp;lt;= 1&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 1 and &amp;lt;= 1.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 1.5 and &amp;lt;= 2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The main difference between the Hamer-peer and the Lauw-peer algorithm is that the Lauw-peer algorithm keeps track of the reviewer's leniency (“bias”), which can be either positive or negative. A positive leniency indicates the reviewer tends to give higher scores than average.  This project determines reputation by subtracting the absolute value of the leniency from 1. Additionally, the range for Hamer’s algorithm is (0,∞) while for Lauw’s algorithm it is [0,1]. &lt;br /&gt;
:;Reputation range of Lauw’s algorithm is:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.2 and &amp;lt;= 0.4&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 0.4 and &amp;lt;= 0.6&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 0.6 and &amp;lt;= 0.8&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 0.8&lt;br /&gt;
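The two banding schemes above can be captured in a small helper. This is an illustrative sketch of the thresholds only; the function names are ours, not Expertiza's (the actual color mapping lives in Expertiza's views):

```ruby
# Map a reputation value to its display color, per the thresholds listed above.
# Illustrative only; Expertiza implements this banding in its view layer.
def hamer_band(value)
  return 'red'         if value < 0.5
  return 'yellow'      if value <= 1.0
  return 'orange'      if value <= 1.5
  return 'light green' if value <= 2.0
  'green'
end

def lauw_band(value)
  return 'red'         if value < 0.2
  return 'yellow'      if value <= 0.4
  return 'orange'      if value <= 0.6
  return 'light green' if value <= 0.8
  'green'
end
```

For example, a Hamer value of 1.2 falls in the orange band, while a Lauw value of 0.85 is green.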
&lt;br /&gt;
The instructor can choose to show results from Hamer’s algorithm or Lauw’s algorithm. The default algorithm should be Lauw’s algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
* Double and stub an assignment and a few submissions to it, under different review rubrics&lt;br /&gt;
* Manually calculate reputation scores based on the paper &amp;quot;Pluggable reputation systems for peer review: A web-service approach&amp;quot;&lt;br /&gt;
* Validate that the reputation scores generated by the reputation web service for different review rubrics match our manual computation using Hamer's and Lauw's algorithms.&lt;br /&gt;
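For Lauw's algorithm, the description given earlier (reputation is 1 minus the absolute value of the leniency) can be hand-checked with a simplified, non-iterative sketch. Note the caveats: the paper's actual Lauw algorithm is iterative, and the normalization by the maximum score here is our assumption for illustration:

```ruby
MAX_SCORE = 5.0

# Simplified leniency: the reviewer's mean signed deviation from the average
# score of each submission, normalized by the maximum score.
# (Illustrative only; the paper's Lauw algorithm computes this iteratively.)
def leniency(reviewer_scores, average_scores)
  deviations = reviewer_scores.zip(average_scores)
                              .map { |score, avg| (score - avg) / MAX_SCORE }
  deviations.sum / deviations.size
end

# Reputation = 1 - |leniency|, as described above.
def lauw_reputation(reviewer_scores, average_scores)
  (1 - leniency(reviewer_scores, average_scores).abs).round(2)
end
```

A reviewer who gives [5, 3] where the class averages are [3, 3] has leniency (0.4 + 0.0) / 2 = 0.2, hence reputation 0.8; a harsh reviewer giving [1, 1] against the same averages gets 0.6.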
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
*reputation_web_service_controller_spec.rb&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Setup Testing Objects ===&lt;br /&gt;
To implement tests on reputation, it is crucial to create sample reviews from which reputation scores can be obtained. During the kickoff meeting, our team defined four necessary steps to follow for the purpose of testing. The appropriate objects can be created and configured as discussed below.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Each of these objects contributes to the success of the reputation tests that follow. Some fields in each object can be empty or keep their default values, since some attributes are not relevant to the test. When implementing the tests, the test scripts need to generate or set fixed values for the relevant fields.&lt;br /&gt;
&lt;br /&gt;
==== Assignment ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[assignments]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
submitter_count = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviewers = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews_allowed = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
rounds_of_reviews = 2;&amp;lt;br&amp;gt;&lt;br /&gt;
reputation_algorithm = lauw/hamer;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: Two assignment objects were created. Assignment_1 used Lauw's algorithm, whereas assignment_2 used Hamer's algorithm.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @assignment_1 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'lauw', id: 1)&lt;br /&gt;
  @assignment_2 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'hamer', id: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questionnaires(Rubrics) ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[assignment_questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
min_question_score = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
max_question_score = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
type = ReviewQuestionnaire;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: We will define the assignment with ReviewQuestionnaire type rubric. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @questionnaire_1 = create(:questionnaire, min_question_score: 0, max_question_score: 5, type: 'ReviewQuestionnaire', id: 1)&lt;br /&gt;
  # assignment_questionnaire_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means assignment #i's #j-th round of review.&lt;br /&gt;
  @assignment_questionnaire_1_1 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_1_2 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
  @assignment_questionnaire_2_1 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_2_2 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questions under Questionnaires ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questions]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # question_i_j means question #j in questionnaire #i.&lt;br /&gt;
  @question_1_1 = create(:question, questionnaire_id: @questionnaire_1.id, id: 1)&lt;br /&gt;
  @question_1_2 = create(:question, questionnaire_id: @questionnaire_1.id, id: 2)&lt;br /&gt;
  @question_1_3 = create(:question, questionnaire_id: @questionnaire_1.id, id: 3)&lt;br /&gt;
  @question_1_4 = create(:question, questionnaire_id: @questionnaire_1.id, id: 4)&lt;br /&gt;
  @question_1_5 = create(:question, questionnaire_id: @questionnaire_1.id, id: 5)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Reviewers and Reviewees ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[participants]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[teams]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Reviewers (Participant):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewer_1 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_2 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_3 = create(:participant, can_review: 1)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Reviewees (Teams):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewee_1 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_2 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_3 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Responses ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[response_maps]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[responses]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
reviewed_object_id = &amp;lt;target_assignment&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewer_id = &amp;lt;target_reviewer&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewee_id = &amp;lt;target_reviewee&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
Note: The response map is set up to record the relationship between the reviewer and the reviewee of an assignment.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Response_maps:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_map_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_map_1_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_2_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_3_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Responses:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_1_1 = create(:response, is_submitted: true, map_id: @response_map_1_1.id)&lt;br /&gt;
  @response_1_2 = create(:response, is_submitted: true, map_id: @response_map_1_2.id)&lt;br /&gt;
  @response_1_3 = create(:response, is_submitted: true, map_id: @response_map_1_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_2_1 = create(:response, is_submitted: true, map_id: @response_map_2_1.id)&lt;br /&gt;
  @response_2_2 = create(:response, is_submitted: true, map_id: @response_map_2_2.id)&lt;br /&gt;
  @response_2_3 = create(:response, is_submitted: true, map_id: @response_map_2_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_3_1 = create(:response, is_submitted: true, map_id: @response_map_3_1.id)&lt;br /&gt;
  @response_3_2 = create(:response, is_submitted: true, map_id: @response_map_3_2.id)&lt;br /&gt;
  @response_3_3 = create(:response, is_submitted: true, map_id: @response_map_3_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Relevant Methods ===&lt;br /&gt;
==== db_query ====&lt;br /&gt;
This is the normal DB query method; calling it returns peer-review grades for a given assignment id.&lt;br /&gt;
We will test this method in two aspects: &lt;br /&gt;
1. Test whether the grade returned is correct based on the specified algorithm.&lt;br /&gt;
2. Test the correctness of the query itself.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test db_query' do&lt;br /&gt;
    it 'return average score' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [1, 2, 3, 4, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 2)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 4)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      result = ReputationWebServiceController.new.db_query(1, 1, false)&lt;br /&gt;
      # expect to see a data array generated from the scores given.&lt;br /&gt;
      expect(result).to eq([[2, 1, 60.0]])&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
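The expected 60.0 can be reproduced by hand: the five answers [1, 2, 3, 4, 5] sum to 15 out of a maximum of 25, i.e. 60%. A sketch of that computation (the helper name is ours, for illustration):

```ruby
# Percentage grade for one review: sum of the answers over the maximum
# achievable total (number of questions times the maximum question score).
def peer_review_percentage(answers, max_question_score)
  100.0 * answers.sum / (answers.size * max_question_score)
end
```

Calling `peer_review_percentage([1, 2, 3, 4, 5], 5)` yields 60.0, matching the grade asserted in the expectation above.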
&lt;br /&gt;
==== json_generator ====&lt;br /&gt;
This method generates a hash representation of the reviews. We test it by calling the method, converting the result to JSON format, and printing it to check its correctness.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test json_generator' do&lt;br /&gt;
    it 'test 3 reviewer for one reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_2's review for reviewee_1: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_3's review for reviewee_1: [1, 1, 1, 1, 1]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0, &amp;quot;stu3&amp;quot;=&amp;gt;60.0, &amp;quot;stu4&amp;quot;=&amp;gt;20.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    it 'test same reviewer for different reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_1's review for reviewee_2: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0}, &amp;quot;submission2&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;60.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
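The hash shape asserted in these expectations can be reproduced with a small sketch: each reviewee team becomes a "submission&lt;id&gt;" key mapping reviewer labels to their percentage grades. This is an illustration of the shape only; the function and field names here are ours, not the controller's:

```ruby
# Build the nested hash {"submission<i>" => {"stu<j>" => percentage}} from a
# flat list of reviews. Illustrative sketch of json_generator's output shape.
def build_review_hash(reviews, max_question_score = 5)
  result = Hash.new { |h, k| h[k] = {} }
  reviews.each do |review|
    answers = review[:answers]
    pct = 100.0 * answers.sum / (answers.size * max_question_score)
    result["submission#{review[:reviewee_id]}"]["stu#{review[:reviewer_id]}"] = pct
  end
  result
end

# The three reviews of reviewee 1 from the first example above:
reviews = [
  { reviewee_id: 1, reviewer_id: 2, answers: [5, 5, 5, 5, 5] },
  { reviewee_id: 1, reviewer_id: 3, answers: [3, 3, 3, 3, 3] },
  { reviewee_id: 1, reviewer_id: 4, answers: [1, 1, 1, 1, 1] }
]
```

With these inputs the helper yields `{"submission1"=>{"stu2"=>100.0, "stu3"=>60.0, "stu4"=>20.0}}`, the same hash the first expectation checks.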
&lt;br /&gt;
==== send_post_request ==== &lt;br /&gt;
This method sends a POST request to peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms to calculate the reputation result, shows the result in the corresponding UI, and updates the given reviewer's reputation. We would test this method against the algorithms in the paper: first the resulting reputation value, and second the updated value in the database.&lt;br /&gt;
&lt;br /&gt;
Note that this method is not functioning due to the missing public &amp;amp; private key files for RSA encryption. The algorithm involving the &amp;quot;expert grade&amp;quot; was also not implemented. &lt;br /&gt;
Thus, this method could not be properly tested. However, we have provided a template for creating tests in the future.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test send_post_request' do&lt;br /&gt;
    it 'failed because of no public key file' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # choose Hamer's algorithm without an expert grade (instructor's given grade)&lt;br /&gt;
      params = {assignment_id: 1, round_num: 1, algorithm: 'hammer', checkbox: {expert_grade: &amp;quot;empty&amp;quot;}}&lt;br /&gt;
      session = {user: build(:instructor, id: 1)}&lt;br /&gt;
&lt;br /&gt;
      expect(true).to eq(true)&lt;br /&gt;
&lt;br /&gt;
      # comment out because send_post_request method request public key file while this file is missing&lt;br /&gt;
      # so at this time send_post_request is not functioning normally&lt;br /&gt;
      # if it functions correctly, it will update the reviewer's reputation score according to the selected reputation algorithm.&lt;br /&gt;
&lt;br /&gt;
      # get :send_post_request, params, session&lt;br /&gt;
      # expect(response).to redirect_to '/reputation_web_service/client'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== aes_encrypt &amp;amp; aes_decrypt ==== &lt;br /&gt;
&lt;br /&gt;
These two methods are counterparts of each other. Instead of testing them separately, we test both in the '''same''' RSpec context. We generate a random string mixed with numbers, encrypt it with &amp;lt;code&amp;gt;aes_encrypt&amp;lt;/code&amp;gt;, and receive the tuple &amp;lt;code&amp;gt;[cipher, key, iv]&amp;lt;/code&amp;gt;. The test then invokes &amp;lt;code&amp;gt;aes_decrypt&amp;lt;/code&amp;gt; with this tuple to retrieve the plain text. Finally, the test checks whether the decrypted text is the same as the original random data. The test covers both the &amp;lt;code&amp;gt;aes_encrypt&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;aes_decrypt&amp;lt;/code&amp;gt; methods in the reputation web service controller.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'test aes_decrypt' do&lt;br /&gt;
  it 'return the correct plain text' do&lt;br /&gt;
    data = (0...8).map { (65 + rand(26)).chr }.join&lt;br /&gt;
    cipher, key, iv = ReputationWebServiceController.new.aes_encrypt(data)&lt;br /&gt;
    plain = ReputationWebServiceController.new.aes_decrypt(cipher, key, iv)&lt;br /&gt;
    expect(plain).to eq(data)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
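The same round trip can be exercised outside RSpec with Ruby's standard OpenSSL library. This standalone sketch assumes AES-256-CBC with a freshly generated key and IV; the cipher mode the controller actually configures may differ:

```ruby
require 'openssl'
require 'securerandom'

# Encrypt with a freshly generated key and IV; returns [cipher_text, key, iv],
# mirroring the tuple shape aes_encrypt is described as returning above.
def aes_encrypt(data)
  cipher = OpenSSL::Cipher.new('AES-256-CBC').encrypt
  key = cipher.random_key
  iv  = cipher.random_iv
  [cipher.update(data) + cipher.final, key, iv]
end

# Reverse the operation with the same key and IV to recover the plain text.
def aes_decrypt(cipher_text, key, iv)
  decipher = OpenSSL::Cipher.new('AES-256-CBC').decrypt
  decipher.key = key
  decipher.iv  = iv
  decipher.update(cipher_text) + decipher.final
end

data = SecureRandom.alphanumeric(8)   # random 8-character string, as in the spec
cipher_text, key, iv = aes_encrypt(data)
```

Decrypting `cipher_text` with the returned `key` and `iv` recovers `data`, which is exactly the property the spec above asserts.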
&lt;br /&gt;
== Results ==&lt;br /&gt;
=== Testing Results ===&lt;br /&gt;
&lt;br /&gt;
All current examples pass. There are '''5''' examples in the &amp;lt;code&amp;gt;reputation_web_service_controller_spec.rb&amp;lt;/code&amp;gt; file, with no failures.&lt;br /&gt;
&lt;br /&gt;
[[File:E2168-test-passed.png|600px]]&lt;br /&gt;
&lt;br /&gt;
There are a total of '''10''' methods in the &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; controller:&lt;br /&gt;
&lt;br /&gt;
# action_allowed?&lt;br /&gt;
# db_query&lt;br /&gt;
# db_query_with_quiz_score&lt;br /&gt;
# json_generator&lt;br /&gt;
# client&lt;br /&gt;
# send_post_request&lt;br /&gt;
# rsa_public_key1&lt;br /&gt;
# rsa_private_key2&lt;br /&gt;
# aes_encrypt&lt;br /&gt;
# aes_decrypt&lt;br /&gt;
&lt;br /&gt;
Our test covers '''7''' of them: &lt;br /&gt;
&lt;br /&gt;
# action_allowed?&lt;br /&gt;
# db_query&lt;br /&gt;
# json_generator&lt;br /&gt;
# client&lt;br /&gt;
# send_post_request&lt;br /&gt;
# aes_encrypt&lt;br /&gt;
# aes_decrypt&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;rsa_public_key1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;rsa_private_key2&amp;lt;/code&amp;gt; methods require the public key file&lt;br /&gt;
'''''public1.pem''''' and the private key file '''''private1.pem'''''. However, these two files are missing, and therefore we could not test these two methods.&lt;br /&gt;
&lt;br /&gt;
=== Coverage ===&lt;br /&gt;
The test coverage increased from '''0%''' to '''50.31%'''.&lt;br /&gt;
&lt;br /&gt;
[[File:Test-coverage.png|1080px]]&lt;br /&gt;
&lt;br /&gt;
There was no testing implemented for &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; prior to our work, and therefore the previous test coverage was '''0%'''.&lt;br /&gt;
&lt;br /&gt;
From the coverage report generated by the &amp;lt;code&amp;gt;SimpleCov&amp;lt;/code&amp;gt; gem, '''80''' lines are covered by our tests. The remaining '''72''' lines of code are related to the public key files and to deprecated functions for gathering data for a paper [https://doi.org/10.1109/FIE.2015.7344292] published in 2015.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;case&amp;lt;/code&amp;gt; statement under the first &amp;lt;code&amp;gt;if&amp;lt;/code&amp;gt; statement in the &amp;lt;code&amp;gt;send_post_request&amp;lt;/code&amp;gt; method of &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; should be removed. The Fall 2020 team [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2064._Refactor_reputation_web_service_controller.rb#Issues_to_be_fixed] also mentioned this issue.&lt;br /&gt;
&lt;br /&gt;
If the redundant method &amp;lt;code&amp;gt;send_post_request&amp;lt;/code&amp;gt; in the controller &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; is commented out, the test coverage reaches '''52.63%'''.&lt;br /&gt;
&lt;br /&gt;
[[File:Coverage-more.png|1080px]]&lt;br /&gt;
&lt;br /&gt;
=== Relevant Links ===&lt;br /&gt;
Github Repo : [https://github.com/HenryChen34/expertiza/tree/beta]&lt;br /&gt;
&lt;br /&gt;
Pull request: [https://github.com/expertiza/expertiza/pull/2128]&lt;br /&gt;
&lt;br /&gt;
== Future Tasks ==&lt;br /&gt;
As our team could not obtain the public/private key pair needed to access the reputation web service, we were only able to get to the step just before sending the JSON to the web service of reputation algorithms. Therefore, the following future steps are required to fully test the reputation system. &lt;br /&gt;
&lt;br /&gt;
# In send_post_request, there are references to specific assignments, such as 724, 735, and 756. They were put in to gather data for a paper published in 2015. They are no longer relevant and should be removed. (To be replaced by the comment the professor mentioned in the meeting.)&lt;br /&gt;
# The current test cases cover only round-1 reputation scoring, even though assignment_1 and assignment_2 are set up as two-round review assignments. Since the reputation web service is not accessible, creating round-2 objects is pointless in the absence of round-1 reputation scores. Therefore, future test cases need to stub the behavior of fulfilling assignment_questionnaire_1_2 (2nd-round questionnaire) and assignment_questionnaire_2_2 (2nd-round questionnaire) respectively, assuming the reputation web service is available at that time.  &lt;br /&gt;
# db_query violates the DRY principle, as it repeatedly calculates sums for the assignment. Such sum calculations should be handled in assignment.rb.&lt;br /&gt;
&lt;br /&gt;
== Collaborators ==&lt;br /&gt;
Jinku Cui (jcui23)&lt;br /&gt;
&lt;br /&gt;
Henry Chen (hchen34)&lt;br /&gt;
&lt;br /&gt;
Dong Li (dli35)&lt;br /&gt;
&lt;br /&gt;
Zijun Lu (zlu5)&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://doi.org/10.1109/FIE.2015.7344292 Pluggable reputation systems for peer review: A web-service approach]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142414</id>
		<title>CSC/ECE 517 Fall 2021 - E2168. Testing - Reputations</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142414"/>
		<updated>2021-11-30T09:51:53Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* Future Tasks */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;q&amp;gt;Online peer-review systems are now in common use in higher education. They free the instructor and course staff from having to provide personally all the feedback that students receive on their work. However, if we want to assure that all students receive competent feedback, or even use peer-assigned grades, we need a way to judge which peer reviewers are most credible. The solution is to use a reputation system.&amp;lt;/q&amp;gt; &amp;lt;br&amp;gt; The reputation system is meant to provide objective value to student assigned peer review scores. Students select from a list of tasks to be performed and then prepare their work and submit it to a peer-review system. The work is then reviewed by other students who offer comments/graded feedback to help the submitters improve their work.&lt;br /&gt;
During the peer-review period it is important to determine which reviews are more accurate and of higher quality. Reputation is one way to achieve this goal: it is a quantitative measure for judging which peer reviewers are more reliable.&lt;br /&gt;
Peer reviewers can use Expertiza to score an author. If Expertiza shows a confidence rating for each grade based upon the reviewer's reputation, then authors can more easily judge the legitimacy of a peer-assigned score. In addition, the teaching staff can examine the quality of each peer review based on reputation values and, potentially, crowd-source a significant portion of the grading function.&lt;br /&gt;
Currently the reputation system is implemented in Expertiza through a web service, but no tests have been written for it. Thus, our goal is to set up assignments and reviews that should produce specific reputation scores, and to verify that the correct reputations are in fact produced.&lt;br /&gt;
&lt;br /&gt;
=== System Design ===&lt;br /&gt;
The diagram below, taken from project E1625, gives an overall description of the reputation system.&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Diagram E1625.png|1000px]]&amp;lt;br&amp;gt;&lt;br /&gt;
There are two algorithms intended for use in calculating the reputation values of participants. &lt;br /&gt;
&lt;br /&gt;
There is a [https://expertiza.ncsu.edu/reputation_web_service/client web service] (the link is accessible only to instructors) which returns a JSON response containing the reputation value, seeded with the last known reputation value stored in the ''participants'' table. An instructor can specify which algorithm to use for a particular assignment to calculate the confidence rating. &lt;br /&gt;
&lt;br /&gt;
As the [https://docs.google.com/viewer?url=https%3A%2F%2Fwww.researchgate.net%2Fprofile%2FYang_Song36%2Fpublication%2F289528736_Pluggable_Reputation_Systems_for_Peer_Review_a_Web-Service_Approach%2Flinks%2F568ec8d008ae78cc05160aed.pdf%3FinViewer%3D0%26pdfJsDownload%3D0%26origin%3Dpublication_detail paper] on reputation systems observes, “the Hamer-peer algorithm has the lowest maximum absolute bias and the Lauw-peer algorithm has the lowest overall bias. This indicates, from the instructor’s perspective, if there are further assignments of this kind, expert grading may not be necessary.”&lt;br /&gt;
:;Reputation range of Hamer’s algorithm:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.5 and &amp;lt;= 1&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 1 and &amp;lt;= 1.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 1.5 and &amp;lt;= 2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
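To make the idea behind the Hamer-peer computation concrete, here is an illustrative, heavily simplified sketch in plain Ruby. This is *not* the exact formula from the cited paper or from Expertiza's web service; it only shows the general mechanism: submission grades are aggregated as reputation-weighted averages, and reviewers who deviate more from the consensus receive lower weights on the next pass. The method name, the squared-deviation weighting, and `epsilon` are all our own simplifications.

```ruby
# Simplified Hamer-style reputation iteration (illustrative only).
# scores: { reviewer => { submission => grade } }
# Assumes reviewers disagree at least slightly (otherwise weights collapse).
def hamer_style_reputations(scores, iterations: 10, epsilon: 1e-6)
  weights = scores.keys.map { |r| [r, 1.0] }.to_h
  iterations.times do
    # Consensus grade per submission: reputation-weighted average of its reviews.
    totals = Hash.new(0.0)
    weight_sums = Hash.new(0.0)
    scores.each do |reviewer, reviews|
      reviews.each do |submission, grade|
        totals[submission] += weights[reviewer] * grade
        weight_sums[submission] += weights[reviewer]
      end
    end
    consensus = totals.keys.map { |s| [s, totals[s] / weight_sums[s]] }.to_h
    # Mean squared deviation of each reviewer from the consensus grades.
    deviations = scores.map do |reviewer, reviews|
      msd = reviews.sum { |s, g| (g - consensus[s])**2 } / reviews.size
      [reviewer, msd]
    end.to_h
    mean_dev = deviations.values.sum / deviations.size
    # Reviewers close to consensus get weights above the mean, outliers below.
    deviations.each { |r, d| weights[r] = mean_dev / (d + epsilon) }
  end
  weights
end

scores = {
  'stu2' => { 'submission1' => 100.0 },
  'stu3' => { 'submission1' => 60.0 },
  'stu4' => { 'submission1' => 20.0 },
}
reputations = hamer_style_reputations(scores)
```

With these grades the consensus settles at 60, so the reviewer who gave 60 ends up with the highest weight, while the two symmetric outliers receive equal, lower weights.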
The main difference between the Hamer-peer and the Lauw-peer algorithm is that the Lauw-peer algorithm keeps track of the reviewer's leniency (“bias”), which can be either positive or negative. A positive leniency indicates the reviewer tends to give higher scores than average.  This project determines reputation by subtracting the absolute value of the leniency from 1. Additionally, the range for Hamer’s algorithm is (0,∞) while for Lauw’s algorithm it is [0,1]. &lt;br /&gt;
:;Reputation range of Lauw’s algorithm:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.2 and &amp;lt;= 0.4&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 0.4 and &amp;lt;= 0.6&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 0.6 and &amp;lt;= 0.8&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 0.8&lt;br /&gt;
&lt;br /&gt;
The instructor can choose to show results from Hamer’s algorithm or Lauw’s algorithm; the default is Lauw’s algorithm.&lt;br /&gt;
&lt;br /&gt;
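Following the description above (reputation = 1 − |leniency|), here is a simplified, non-iterative sketch of a Lauw-style computation. The actual algorithm in the paper estimates leniency iteratively, so this is only an illustration; treating leniency as the reviewer's average signed deviation from the per-submission mean, normalized by the maximum score, is our assumption.

```ruby
# Simplified Lauw-style leniency/reputation sketch (illustrative only).
# scores: { reviewer => { submission => grade } }
def lauw_style_reputations(scores, max_score: 100.0)
  # Collect every grade given to each submission, then average them.
  submission_grades = Hash.new { |h, s| h[s] = [] }
  scores.each_value do |reviews|
    reviews.each { |submission, grade| submission_grades[submission] << grade }
  end
  submission_means = submission_grades.transform_values { |gs| gs.sum / gs.size }
  # Leniency: normalized average signed deviation from the submission means.
  # Positive leniency means the reviewer grades above average.
  scores.map do |reviewer, reviews|
    leniency = reviews.sum { |s, g| (g - submission_means[s]) / max_score } / reviews.size
    [reviewer, 1.0 - leniency.abs]
  end.to_h
end

scores = {
  'stu2' => { 'submission1' => 100.0 },
  'stu3' => { 'submission1' => 60.0 },
  'stu4' => { 'submission1' => 20.0 },
}
reputations = lauw_style_reputations(scores)
# stu3 matches the mean (leniency 0), so its reputation is 1.0;
# stu2 and stu4 deviate by +-40 points, giving reputation 0.6.
```

Note how the result stays inside Lauw's [0,1] range, matching the color bands listed above.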
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
* Double and stub an assignment and several submissions to it, under different review rubrics&lt;br /&gt;
* Manually calculate reputation scores based on the paper &amp;quot;Pluggable reputation systems for peer review: A web-service approach&amp;quot;&lt;br /&gt;
* Validate that the reputation scores generated by the reputation web service under different review rubrics match our manual computations using Hamer's and Lauw's algorithms.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
*reputation_web_service_controller_spec.rb&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Setup Testing Objects ===&lt;br /&gt;
In order to test reputation, it is crucial to create sample reviews from which reputation scores can be computed. During the kickoff meeting, our team defined four necessary steps to follow for testing, and the corresponding objects are created and configured as discussed below.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Each of these objects contributes to the reputation tests that follow. Some fields in each object can be empty or take default values, and some attributes are not relevant to the test. When implementing the tests, the test scripts need to generate or set fixed values for the relevant fields.&lt;br /&gt;
&lt;br /&gt;
==== Assignment ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[assignments]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
submitter_count = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews = 3;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviewers = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews_allowed = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
rounds_of_reviews = 2;&amp;lt;br&amp;gt;&lt;br /&gt;
reputation_algorithm = lauw/hamer;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: Two assignment objects were created. Assignment_1 uses Lauw's algorithm, whereas assignment_2 uses Hamer's algorithm.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @assignment_1 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'lauw', id: 1)&lt;br /&gt;
  @assignment_2 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'hamer', id: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questionnaires(Rubrics) ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[assignment_questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
min_question_score = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
max_question_score = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
type = ReviewQuestionnaire;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: We define the assignments with rubrics of type ReviewQuestionnaire. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @questionnaire_1 = create(:questionnaire, min_question_score: 0, max_question_score: 5, type: 'ReviewQuestionnaire', id: 1)&lt;br /&gt;
  # assignment_questionnaire_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means the questionnaire for round #j of assignment #i.&lt;br /&gt;
  @assignment_questionnaire_1_1 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_1_2 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
  @assignment_questionnaire_2_1 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_2_2 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questions under Questionnaires ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questions]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # question_i_j means question #j in questionnaire #i.&lt;br /&gt;
  @question_1_1 = create(:question, questionnaire_id: @questionnaire_1.id, id: 1)&lt;br /&gt;
  @question_1_2 = create(:question, questionnaire_id: @questionnaire_1.id, id: 2)&lt;br /&gt;
  @question_1_3 = create(:question, questionnaire_id: @questionnaire_1.id, id: 3)&lt;br /&gt;
  @question_1_4 = create(:question, questionnaire_id: @questionnaire_1.id, id: 4)&lt;br /&gt;
  @question_1_5 = create(:question, questionnaire_id: @questionnaire_1.id, id: 5)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Reviewers and Reviewees ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[participants]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[teams]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Reviewers (Participant):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewer_1 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_2 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_3 = create(:participant, can_review: 1)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Reviewees (Teams):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewee_1 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_2 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_3 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Responses ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[response_maps]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[responses]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
reviewed_object_id = &amp;lt;target_assignment&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewer_id = &amp;lt;target_reviewer&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewee_id = &amp;lt;target_reviewee&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
Note: The response map is set up to establish the relationship between the reviewer and the reviewee of an assignment.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Response_maps:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_map_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_map_1_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_2_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_3_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Responses:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_1_1 = create(:response, is_submitted: true, map_id: @response_map_1_1.id)&lt;br /&gt;
  @response_1_2 = create(:response, is_submitted: true, map_id: @response_map_1_2.id)&lt;br /&gt;
  @response_1_3 = create(:response, is_submitted: true, map_id: @response_map_1_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_2_1 = create(:response, is_submitted: true, map_id: @response_map_2_1.id)&lt;br /&gt;
  @response_2_2 = create(:response, is_submitted: true, map_id: @response_map_2_2.id)&lt;br /&gt;
  @response_2_3 = create(:response, is_submitted: true, map_id: @response_map_2_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_3_1 = create(:response, is_submitted: true, map_id: @response_map_3_1.id)&lt;br /&gt;
  @response_3_2 = create(:response, is_submitted: true, map_id: @response_map_3_2.id)&lt;br /&gt;
  @response_3_3 = create(:response, is_submitted: true, map_id: @response_map_3_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Relevant Methods ===&lt;br /&gt;
==== db_query ====&lt;br /&gt;
This is the standard database query method; calling it returns the peer-review grades for a given assignment id.&lt;br /&gt;
We test this method in two respects: &lt;br /&gt;
1. whether the grade returned is correct for the specified algorithm;&lt;br /&gt;
2. the correctness of the query itself.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test db_query' do&lt;br /&gt;
    it 'return average score' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [1, 2, 3, 4, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 2)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 4)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      result = ReputationWebServiceController.new.db_query(1, 1, false)&lt;br /&gt;
      #expect to see a data array return generated by the score given.&lt;br /&gt;
      expect(result).to eq([[2, 1, 60.0]])&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
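For reference, the expected value 60.0 in the test above can be derived by hand, assuming the score is the total awarded points expressed as a percentage of the maximum possible points (5 questions with max_question_score = 5):

```ruby
# Derivation of the expected 60.0 (assumed percentage scoring rule).
answers = [1, 2, 3, 4, 5]       # one answer per question
max_question_score = 5
max_points = answers.size * max_question_score  # 25 possible points
percentage = answers.sum * 100.0 / max_points   # 15 of 25 points
# percentage => 60.0
```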
&lt;br /&gt;
==== json_generator ====&lt;br /&gt;
This method generates a hash representation of the reviews. We test it by calling the method, converting the result to JSON, and checking it against the expected output.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test json_generator' do&lt;br /&gt;
    it 'test 3 reviewer for one reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_2's review for reviewee_1: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_3's review for reviewee_1: [1, 1, 1, 1, 1]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0, &amp;quot;stu3&amp;quot;=&amp;gt;60.0, &amp;quot;stu4&amp;quot;=&amp;gt;20.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    it 'test same reviewer for different reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_1's review for reviewee_2: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0}, &amp;quot;submission2&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;60.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
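The expected hash in the first example above can likewise be reproduced by hand under the same assumed percentage scoring rule (5 questions, max_question_score = 5):

```ruby
# Derivation of the expected json_generator hash for submission1
# (assumed percentage scoring rule).
max_points = 5 * 5  # 5 questions, 5 points maximum each
reviews = {
  'stu2' => [5, 5, 5, 5, 5],
  'stu3' => [3, 3, 3, 3, 3],
  'stu4' => [1, 1, 1, 1, 1],
}
scores = reviews.transform_values { |answers| answers.sum * 100.0 / max_points }
# scores => {"stu2"=>100.0, "stu3"=>60.0, "stu4"=>20.0}
```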
&lt;br /&gt;
==== send_post_request ==== &lt;br /&gt;
This method sends a POST request to peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms to calculate the reputation result, shows the result in the corresponding UI, and updates the given reviewer's reputation. We test this method against the algorithms in the paper: first the resulting reputation value, and second the updated value in the database.&lt;br /&gt;
&lt;br /&gt;
Note that this method is not functioning due to the missing public &amp;amp; private key files for RSA encryption. The algorithm involving &amp;quot;expert grades&amp;quot; was also never implemented. &lt;br /&gt;
Thus, this method could not be properly tested. However, we have provided a template for creating tests in the future.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test send_post_request' do&lt;br /&gt;
    it 'failed because of no public key file' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # choose the Hamer algorithm without an expert grade (instructor-given grade)&lt;br /&gt;
      params = {assignment_id: 1, round_num: 1, algorithm: 'hamer', checkbox: {expert_grade: &amp;quot;empty&amp;quot;}}&lt;br /&gt;
      session = {user: build(:instructor, id: 1)}&lt;br /&gt;
&lt;br /&gt;
      expect(true).to eq(true)&lt;br /&gt;
&lt;br /&gt;
      # comment out because send_post_request method request public key file while this file is missing&lt;br /&gt;
      # so at this time send_post_request is not functioning normally&lt;br /&gt;
      # if it functions correctly, it will update the reviewer's reputation score according to the selected reputation algorithm.&lt;br /&gt;
&lt;br /&gt;
      # get :send_post_request, params, session&lt;br /&gt;
      # expect(response).to redirect_to '/reputation_web_service/client'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
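As a starting point for future work, the POST request to the reputation service could be constructed along these lines. This is only a sketch: the https scheme is assumed, the payload shape follows json_generator's output, and the real method also RSA/AES-encrypts the body before posting, which is omitted here.

```ruby
require 'json'
require 'net/http'
require 'uri'

# Endpoint named in the controller description above (https is assumed).
uri = URI('https://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms')

# Plain-text payload in json_generator's format; the live service would
# expect this to be encrypted with the (currently missing) key files.
payload = { 'submission1' => { 'stu2' => 100.0, 'stu3' => 60.0, 'stu4' => 20.0 } }

request = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json')
request.body = JSON.generate(payload)

# Actually sending it is disabled here, since the service needs an encrypted body:
# response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) { |http| http.request(request) }
```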
&lt;br /&gt;
==== aes_encrypt &amp;amp; aes_decrypt ==== &lt;br /&gt;
&lt;br /&gt;
These two methods are counterparts of each other. Instead of testing them separately, we test both in the '''same''' RSpec context. We generate a random string, encrypt it with &amp;lt;code&amp;gt;aes_encrypt&amp;lt;/code&amp;gt;, and receive the tuple &amp;lt;code&amp;gt;[cipher, key, iv]&amp;lt;/code&amp;gt;. The test then invokes &amp;lt;code&amp;gt;aes_decrypt&amp;lt;/code&amp;gt; with this tuple to retrieve the plain text. Finally, the test checks whether the decrypted text is the same as the original random data. This covers both the &amp;lt;code&amp;gt;aes_encrypt&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;aes_decrypt&amp;lt;/code&amp;gt; methods in the reputation web service controller.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'test aes_decrypt' do&lt;br /&gt;
  it 'return the correct plain text' do&lt;br /&gt;
    data = (0...8).map { (65 + rand(26)).chr }.join&lt;br /&gt;
    cipher, key, iv = ReputationWebServiceController.new.aes_encrypt(data)&lt;br /&gt;
    plain = ReputationWebServiceController.new.aes_decrypt(cipher, key, iv)&lt;br /&gt;
    expect(plain).to eq(data)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
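The round trip being tested can be illustrated with a standalone Ruby/OpenSSL sketch. The controller's exact cipher settings are not shown in this writeup, so AES-256-CBC is an assumption; only the [cipher_text, key, iv] tuple pattern mirrors the spec above.

```ruby
require 'openssl'

# Encrypt: returns [cipher_text, key, iv], mirroring the tuple the spec expects.
def aes_encrypt(plain)
  cipher = OpenSSL::Cipher.new('AES-256-CBC').encrypt
  key = cipher.random_key   # fresh random key for this message
  iv = cipher.random_iv     # fresh random initialization vector
  [cipher.update(plain) + cipher.final, key, iv]
end

# Decrypt: reverses aes_encrypt given the same key and iv.
def aes_decrypt(cipher_text, key, iv)
  decipher = OpenSSL::Cipher.new('AES-256-CBC').decrypt
  decipher.key = key
  decipher.iv = iv
  decipher.update(cipher_text) + decipher.final
end

# Random 8-letter string, generated the same way as in the spec above.
data = (0...8).map { (65 + rand(26)).chr }.join
cipher_text, key, iv = aes_encrypt(data)
plain = aes_decrypt(cipher_text, key, iv)
# plain == data
```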
&lt;br /&gt;
== Results ==&lt;br /&gt;
=== Testing Results ===&lt;br /&gt;
&lt;br /&gt;
All current examples pass. There are '''5''' examples in the &amp;lt;code&amp;gt;reputation_web_service_controller_spec.rb&amp;lt;/code&amp;gt; file and no failures.&lt;br /&gt;
&lt;br /&gt;
[[File:E2168-test-passed.png|600px]]&lt;br /&gt;
&lt;br /&gt;
There are '''10''' methods in total in the &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; controller:&lt;br /&gt;
&lt;br /&gt;
# action_allowed?&lt;br /&gt;
# db_query&lt;br /&gt;
# db_query_with_quiz_score&lt;br /&gt;
# json_generator&lt;br /&gt;
# client&lt;br /&gt;
# send_post_request&lt;br /&gt;
# rsa_public_key1&lt;br /&gt;
# rsa_private_key2&lt;br /&gt;
# aes_encrypt&lt;br /&gt;
# aes_decrypt&lt;br /&gt;
&lt;br /&gt;
Our tests cover '''7''' of them: &lt;br /&gt;
&lt;br /&gt;
# action_allowed?&lt;br /&gt;
# db_query&lt;br /&gt;
# json_generator&lt;br /&gt;
# client&lt;br /&gt;
# send_post_request&lt;br /&gt;
# aes_encrypt&lt;br /&gt;
# aes_decrypt&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;rsa_public_key1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;rsa_private_key2&amp;lt;/code&amp;gt; methods require the public key file &lt;br /&gt;
'''''public1.pem''''' and the private key file '''''private1.pem'''''. However, these two files are missing, so we could not test these two methods.&lt;br /&gt;
&lt;br /&gt;
=== Coverage ===&lt;br /&gt;
The test coverage increases from '''0%''' to '''50.31%'''.&lt;br /&gt;
&lt;br /&gt;
[[File:Test-coverage.png|1080px]]&lt;br /&gt;
&lt;br /&gt;
No tests were implemented for &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; prior to our work, so the previous test coverage was '''0%'''.&lt;br /&gt;
&lt;br /&gt;
According to the coverage report generated by the &amp;lt;code&amp;gt;SimpleCov&amp;lt;/code&amp;gt; gem, '''80''' lines are covered by our tests. The remaining '''72''' lines of code relate to the public key file and to deprecated functions for gathering data for a paper [https://doi.org/10.1109/FIE.2015.7344292] published in 2015.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;case&amp;lt;/code&amp;gt; statement under the first &amp;lt;code&amp;gt;if&amp;lt;/code&amp;gt; statement in the &amp;lt;code&amp;gt;send_post_request&amp;lt;/code&amp;gt; method of &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; should be removed. The team in Fall 2020 [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2064._Refactor_reputation_web_service_controller.rb#Issues_to_be_fixed] also mentioned this issue.&lt;br /&gt;
&lt;br /&gt;
If the redundant method &amp;lt;code&amp;gt;send_post_request&amp;lt;/code&amp;gt; in the controller &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; is commented out, the test coverage reaches '''52.63%'''.&lt;br /&gt;
&lt;br /&gt;
[[File:Coverage-more.png|1080px]]&lt;br /&gt;
&lt;br /&gt;
=== Relevant Links ===&lt;br /&gt;
Github Repo : [https://github.com/HenryChen34/expertiza/tree/beta]&lt;br /&gt;
&lt;br /&gt;
Pull request: [https://github.com/expertiza/expertiza/pull/2128]&lt;br /&gt;
&lt;br /&gt;
== Future Tasks ==&lt;br /&gt;
Because our team could not obtain the public/private key pair to access the reputation web service, we were only able to get to the step prior to sending the JSON to the web service of reputation algorithms. Therefore, future work is required to fully test the reputation system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# In send_post_request, there are references to specific assignments, such as 724, 735, and 756. They were put in to gather data for a paper published in 2015. They are no longer relevant and should be removed. (To be replaced by the comment the professor mentioned in the meeting.)&lt;br /&gt;
# The current test cases only cover round-1 reputation scoring, even though assignment_1 and assignment_2 are configured for two rounds of review. Since the reputation web service is inaccessible, creating round-2 objects is pointless in the absence of a round-1 reputation score. Future test cases should therefore stub the behavior of filling in assignment_questionnaire_1_2 and assignment_questionnaire_2_2 (the second-round questionnaires), assuming the reputation web service is available by then.  &lt;br /&gt;
# The db_query method violates the DRY principle, as it repeatedly calculates score sums for the assignment. Such sum calculations should be handled in assignment.rb.&lt;br /&gt;
&lt;br /&gt;
== Collaborators ==&lt;br /&gt;
Jinku Cui (jcui23)&lt;br /&gt;
&lt;br /&gt;
Henry Chen (hchen34)&lt;br /&gt;
&lt;br /&gt;
Dong Li (dli35)&lt;br /&gt;
&lt;br /&gt;
Zijun Lu (zlu5)&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://doi.org/10.1109/FIE.2015.7344292 Pluggable reputation systems for peer review: A web-service approach]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142413</id>
		<title>CSC/ECE 517 Fall 2021 - E2168. Testing - Reputations</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142413"/>
		<updated>2021-11-30T09:50:53Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* send_post_request */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;q&amp;gt;Online peer-review systems are now in common use in higher education. They free the instructor and course staff from having to provide personally all the feedback that students receive on their work. However, if we want to assure that all students receive competent feedback, or even use peer-assigned grades, we need a way to judge which peer reviewers are most credible. The solution is to use a reputation system.&amp;lt;/q&amp;gt; &amp;lt;br&amp;gt; The reputation system is meant to assign an objective value to student-assigned peer-review scores. Students select a task from a list, prepare their work, and submit it to a peer-review system. The work is then reviewed by other students, who offer comments and graded feedback to help the submitters improve their work.&lt;br /&gt;
During the peer-review period it is important to determine which reviews are more accurate and of higher quality. Reputation is one way to achieve this goal: it is a quantitative measure for judging which peer reviewers are more reliable.&lt;br /&gt;
Peer reviewers can use Expertiza to score an author. If Expertiza shows a confidence rating for each grade based upon the reviewer's reputation, then authors can more easily judge the legitimacy of a peer-assigned score. In addition, the teaching staff can examine the quality of each peer review based on reputation values and, potentially, crowd-source a significant portion of the grading function.&lt;br /&gt;
Currently the reputation system is implemented in Expertiza through a web service, but no tests have been written for it. Thus, our goal is to set up assignments and reviews that should produce specific reputation scores, and to verify that the correct reputations are in fact produced.&lt;br /&gt;
&lt;br /&gt;
=== System Design ===&lt;br /&gt;
The diagram below, taken from project E1625, gives an overall description of the reputation system.&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Diagram E1625.png|1000px]]&amp;lt;br&amp;gt;&lt;br /&gt;
There are two algorithms intended for use in calculating the reputation values of participants. &lt;br /&gt;
&lt;br /&gt;
There is a [https://expertiza.ncsu.edu/reputation_web_service/client web service] (the link is accessible only to instructors) which returns a JSON response containing the reputation value, seeded with the last known reputation value stored in the ''participants'' table. An instructor can specify which algorithm to use for a particular assignment to calculate the confidence rating. &lt;br /&gt;
&lt;br /&gt;
As the [https://docs.google.com/viewer?url=https%3A%2F%2Fwww.researchgate.net%2Fprofile%2FYang_Song36%2Fpublication%2F289528736_Pluggable_Reputation_Systems_for_Peer_Review_a_Web-Service_Approach%2Flinks%2F568ec8d008ae78cc05160aed.pdf%3FinViewer%3D0%26pdfJsDownload%3D0%26origin%3Dpublication_detail paper] on reputation systems observes, “the Hamer-peer algorithm has the lowest maximum absolute bias and the Lauw-peer algorithm has the lowest overall bias. This indicates, from the instructor’s perspective, if there are further assignments of this kind, expert grading may not be necessary.”&lt;br /&gt;
:;Reputation range of Hamer’s algorithm:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value &amp;lt; 0.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.5 and &amp;lt;= 1&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 1 and &amp;lt;= 1.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 1.5 and &amp;lt;= 2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 2&lt;br /&gt;
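The thresholds above can be sanity-checked with a hand computation. Below is a hedged, simplified sketch of one Hamer-style calibration round (our reading of the paper, not the production web-service code; the exact normalization the service uses may differ): each reviewer is reweighted by the mean deviation divided by their own squared deviation from the consensus grade, so values land in (0, ∞).&lt;br /&gt;

```ruby
# Hedged sketch of one Hamer-style calibration round (a simplified reading of
# the paper, NOT the production web-service code; the exact normalization used
# by the service may differ).
# scores[reviewer][submission] = score given; weights[reviewer] = prior weight.
def hamer_round(scores, weights)
  reviewers = scores.keys
  submissions = scores.values.flat_map { |h| h.keys }.uniq
  # Consensus grade per submission: weighted mean of the peer scores.
  grades = submissions.to_h do |sub|
    graders = reviewers.select { |r| scores[r].key?(sub) }
    total = graders.sum { |r| weights[r] * scores[r][sub] }
    [sub, total / graders.sum { |r| weights[r] }]
  end
  # Deviation per reviewer: mean squared distance from the consensus grades.
  dev = reviewers.to_h do |r|
    subs = scores[r].keys
    [r, subs.sum { |s| (scores[r][s] - grades[s])**2 } / subs.size.to_f]
  end
  mean_dev = dev.values.sum / dev.size
  # New weight: mean deviation over own deviation, so reviewers close to the
  # consensus land above 1 and outliers below, giving the (0, infinity) range
  # shown in the color table above.
  reviewers.to_h { |r| [r, mean_dev / [dev[r], 1e-6].max] }
end
```

With three reviewers where two agree and one is an outlier, the two agreeing reviewers end up above 1 and the outlier below, which is exactly what the color bands above encode.&lt;br /&gt;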
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The main difference between the Hamer-peer and the Lauw-peer algorithm is that the Lauw-peer algorithm keeps track of the reviewer's leniency (“bias”), which can be either positive or negative. A positive leniency indicates the reviewer tends to give higher scores than average.  This project determines reputation by subtracting the absolute value of the leniency from 1. Additionally, the range for Hamer’s algorithm is (0,∞) while for Lauw’s algorithm it is [0,1]. &lt;br /&gt;
:;Reputation range of Lauw’s algorithm:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.2 and &amp;lt;= 0.4&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 0.4 and &amp;lt;= 0.6&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 0.6 and &amp;lt;= 0.8&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 0.8&lt;br /&gt;
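The leniency idea described above can be sketched in a few lines. This is our restatement for illustration, not the web service's code; the consensus score each review is compared against, and the exact averaging, are assumptions.&lt;br /&gt;

```ruby
# Hedged sketch of a Lauw-style leniency computation (illustrative only, NOT
# the production code). reviews: array of [given_score, consensus_score]
# pairs for one reviewer.
def lauw_reputation(reviews)
  # Leniency: average relative over- or under-scoring. A positive value means
  # the reviewer tends to score higher than the consensus.
  leniency = reviews.sum { |given, consensus| (given - consensus) / consensus.to_f } / reviews.size
  # Reputation = 1 - |leniency|, clamped to the [0, 1] range noted above.
  [1.0 - leniency.abs, 0.0].max
end
```

A reviewer who always matches the consensus gets reputation 1.0; a consistently lenient (or harsh) reviewer drops toward 0.&lt;br /&gt;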
&lt;br /&gt;
The instructor can choose to show results from Hamer’s algorithm or Lauw’s algorithm. The default algorithm should be Lauw’s algorithm.&lt;br /&gt;
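The two range tables above translate directly into a simple lookup. A minimal sketch follows (the helper name is ours, not from the Expertiza codebase):&lt;br /&gt;

```ruby
# Maps a reputation value to its display color using the thresholds listed in
# the tables above for each algorithm. Helper name is illustrative only, not
# taken from Expertiza.
def reputation_color(value, algorithm)
  # Hamer's range is (0, infinity); Lauw's is [0, 1], so the cut points differ.
  cuts = algorithm == :hamer ? [0.5, 1.0, 1.5, 2.0] : [0.2, 0.4, 0.6, 0.8]
  return 'red'         unless value >= cuts[0]
  return 'yellow'      if value.between?(cuts[0], cuts[1])
  return 'orange'      if value.between?(cuts[1], cuts[2])
  return 'light green' if value.between?(cuts[2], cuts[3])
  'green'
end
```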
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
* Double and stub an assignment and a few submissions to it, under different review rubrics&lt;br /&gt;
* Manually calculate reputation scores based on the paper &amp;quot;Pluggable reputation systems for peer review: A web-service approach&amp;quot;&lt;br /&gt;
* Validate that the reputation scores generated by the reputation web service under different review rubrics match manual computations using Hamer's and Lauw's algorithms&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
*reputation_web_service_controller_spec.rb&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Setup Testing Objects ===&lt;br /&gt;
To test reputation, it is crucial to create sample reviews from which reputation scores can be obtained. During the kickoff meeting, our team defined four necessary steps to follow for the purpose of testing, and the appropriate objects can be created and configured as discussed below.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Creating each of these objects contributes to the success of the subsequent tests on reputations. Some fields in each object can be empty or have default values, and some attributes are not relevant to the test. When implementing the tests, the test scripts need to generate or set fixed values for the corresponding fields.&lt;br /&gt;
&lt;br /&gt;
==== Assignment ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[assignments]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
submitter_count = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews = 3;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviewers = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews_allowed = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
rounds_of_reviews = 2;&amp;lt;br&amp;gt;&lt;br /&gt;
reputation_algorithm = lauw/hamer;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: Two assignment objects were created. Assignment_1 used Lauw's algorithm, whereas assignment_2 used Hamer's algorithm.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @assignment_1 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'lauw', id: 1)&lt;br /&gt;
  @assignment_2 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'hamer', id: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questionnaires(Rubrics) ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[assignment_questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
min_question_score = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
max_question_score = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
type = ReviewQuestionnaire;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: We define the assignments with ReviewQuestionnaire-type rubrics. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @questionnaire_1 = create(:questionnaire, min_question_score: 0, max_question_score: 5, type: 'ReviewQuestionnaire', id: 1)&lt;br /&gt;
  # assignment_questionnaire_i_j refers to the questionnaire used in round j of assignment i.&lt;br /&gt;
  @assignment_questionnaire_1_1 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_1_2 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
  @assignment_questionnaire_2_1 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_2_2 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questions under Questionnaires ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questions]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # question_i_j means question #j in questionnaire #i.&lt;br /&gt;
  @question_1_1 = create(:question, questionnaire_id: @questionnaire_1.id, id: 1)&lt;br /&gt;
  @question_1_2 = create(:question, questionnaire_id: @questionnaire_1.id, id: 2)&lt;br /&gt;
  @question_1_3 = create(:question, questionnaire_id: @questionnaire_1.id, id: 3)&lt;br /&gt;
  @question_1_4 = create(:question, questionnaire_id: @questionnaire_1.id, id: 4)&lt;br /&gt;
  @question_1_5 = create(:question, questionnaire_id: @questionnaire_1.id, id: 5)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Reviewers and Reviewees ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[participants]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[teams]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Reviewers (Participant):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewer_1 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_2 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_3 = create(:participant, can_review: 1)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Reviewees (Teams):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewee_1 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_2 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_3 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Responses ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[response_maps]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[responses]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
reviewed_object_id = &amp;lt;target_assignment&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewer_id = &amp;lt;target_reviewer&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewee_id = &amp;lt;target_reviewee&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
Note: The response map is set up to record the relationship between a reviewer and a reviewee of an assignment.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Response_maps:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_map_i_j means the response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_map_1_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_2_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_3_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Responses:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_i_j means the response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_1_1 = create(:response, is_submitted: true, map_id: @response_map_1_1.id)&lt;br /&gt;
  @response_1_2 = create(:response, is_submitted: true, map_id: @response_map_1_2.id)&lt;br /&gt;
  @response_1_3 = create(:response, is_submitted: true, map_id: @response_map_1_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_2_1 = create(:response, is_submitted: true, map_id: @response_map_2_1.id)&lt;br /&gt;
  @response_2_2 = create(:response, is_submitted: true, map_id: @response_map_2_2.id)&lt;br /&gt;
  @response_2_3 = create(:response, is_submitted: true, map_id: @response_map_2_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_3_1 = create(:response, is_submitted: true, map_id: @response_map_3_1.id)&lt;br /&gt;
  @response_3_2 = create(:response, is_submitted: true, map_id: @response_map_3_2.id)&lt;br /&gt;
  @response_3_3 = create(:response, is_submitted: true, map_id: @response_map_3_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Relevant Methods ===&lt;br /&gt;
==== db_query ====&lt;br /&gt;
This is the standard database query method; calling it returns peer-review grades for a given assignment id.&lt;br /&gt;
We will test this method in two respects: &lt;br /&gt;
1. whether the grade returned is correct, based on the specified algorithm;&lt;br /&gt;
2. the correctness of the query itself.&lt;br /&gt;
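The expected grade values in the specs below follow from how review scores are scaled: the mean of the answer scores, as a percentage of the rubric's maximum score. A quick restatement of that arithmetic (ours, not the controller's code):&lt;br /&gt;

```ruby
# Restates the scaling we expect db_query to apply (our arithmetic, not the
# controller's code): mean answer score as a percentage of the rubric maximum.
answers = [1, 2, 3, 4, 5]   # reviewer_1's five answers in the spec below
max_question_score = 5      # from the questionnaire configured above
grade = answers.sum / answers.size.to_f / max_question_score * 100
```

With these values the grade is 60.0, which is where the 60.0 in the expectation below comes from.&lt;br /&gt;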
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test db_query' do&lt;br /&gt;
    it 'return average score' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [1, 2, 3, 4, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 2)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 4)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      result = ReputationWebServiceController.new.db_query(1, 1, false)&lt;br /&gt;
      #expect to see a data array return generated by the score given.&lt;br /&gt;
      expect(result).to eq([[2, 1, 60.0]])&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== json_generator ====&lt;br /&gt;
This method generates a hash representation of the reviews. We test it by calling the method, converting the result to JSON, and printing it to check its correctness.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test json_generator' do&lt;br /&gt;
    it 'test 3 reviewer for one reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_2's review for reviewee_1: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_3's review for reviewee_1: [1, 1, 1, 1, 1]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0, &amp;quot;stu3&amp;quot;=&amp;gt;60.0, &amp;quot;stu4&amp;quot;=&amp;gt;20.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    it 'test same reviewer for different reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_1's review for reviewee_2: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0}, &amp;quot;submission2&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;60.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== send_post_request ==== &lt;br /&gt;
This method sends a POST request to peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms to calculate the reputation result, shows the result in the corresponding UI, and updates the given reviewer's reputation. We will test this method based on the algorithms in the paper: first the resulting reputation value, then the updated value in the database.&lt;br /&gt;
&lt;br /&gt;
Note that this method is not functioning due to the missing public &amp;amp; private key files for RSA encryption. The algorithm involving an &amp;quot;expert grade&amp;quot; was also not implemented. &lt;br /&gt;
Thus, this method could not be properly tested. However, we have provided a template for creating tests for future use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test send_post_request' do&lt;br /&gt;
    it 'failed because of no public key file' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # choose the hammer algorithm without an expert grade (instructor's given grade)&lt;br /&gt;
      params = {assignment_id: 1, round_num: 1, algorithm: 'hammer', checkbox: {expert_grade: &amp;quot;empty&amp;quot;}}&lt;br /&gt;
      session = {user: build(:instructor, id: 1)}&lt;br /&gt;
&lt;br /&gt;
      expect(true).to eq(true)&lt;br /&gt;
&lt;br /&gt;
      # comment out because send_post_request method request public key file while this file is missing&lt;br /&gt;
      # so at this time send_post_request is not functioning normally&lt;br /&gt;
      # if it functions correctly, it will update the reviewer's reputation score according to the selected reputation algorithm.&lt;br /&gt;
&lt;br /&gt;
      # get :send_post_request, params, session&lt;br /&gt;
      # expect(response).to redirect_to '/reputation_web_service/client'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
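If the key files are ever restored, the request itself can also be exercised in isolation. Below is a hedged sketch of building (not sending) the POST request; the helper name is ours, the real controller additionally encrypts the body (omitted here), and the URL scheme is assumed.&lt;br /&gt;

```ruby
require 'json'
require 'net/http'
require 'uri'

# Hedged sketch: builds, but does not send, the POST request that would carry
# the generated review hash to the reputation web service. The encryption the
# real controller performs on the body is omitted; the https scheme and the
# helper name are assumptions.
def build_reputation_request(review_hash)
  uri = URI('https://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms')
  request = Net::HTTP::Post.new(uri)
  request['Content-Type'] = 'application/json'
  request.body = review_hash.to_json
  request
end
```

A spec could then assert on the request's body and headers without any network traffic, keeping the test independent of the missing key files.&lt;br /&gt;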
&lt;br /&gt;
==== aes_encrypt &amp;amp; aes_decrypt ==== &lt;br /&gt;
&lt;br /&gt;
These two methods are counterparts of each other. Instead of testing them separately, we test both in the '''same''' RSpec context. We generate a random string of uppercase letters, encrypt it with &amp;lt;code&amp;gt;aes_encrypt&amp;lt;/code&amp;gt;, and receive the tuple &amp;lt;code&amp;gt;[cipher, key, iv]&amp;lt;/code&amp;gt;. The test then invokes &amp;lt;code&amp;gt;aes_decrypt&amp;lt;/code&amp;gt; with that tuple to retrieve the plain text. Finally, the test checks whether the decrypted text is the same as the original random data. The test covers both the &amp;lt;code&amp;gt;aes_encrypt&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;aes_decrypt&amp;lt;/code&amp;gt; methods in the reputation web service controller.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'test aes_decrypt' do&lt;br /&gt;
  it 'return the correct plain text' do&lt;br /&gt;
    data = (0...8).map { (65 + rand(26)).chr }.join&lt;br /&gt;
    cipher, key, iv = ReputationWebServiceController.new.aes_encrypt(data)&lt;br /&gt;
    plain = ReputationWebServiceController.new.aes_decrypt(cipher, key, iv)&lt;br /&gt;
    expect(plain).to eq(data)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
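For reference, an encrypt/decrypt pair with this [cipher, key, iv] contract is typically built on Ruby's OpenSSL bindings. A self-contained sketch of the pattern (our illustration, not the controller's exact code):&lt;br /&gt;

```ruby
require 'openssl'

# Illustrative AES-256-CBC encrypt/decrypt pair mirroring the [cipher, key,
# iv] tuple contract described above (NOT the controller's exact code).
def aes_encrypt(data)
  cipher = OpenSSL::Cipher.new('AES-256-CBC').encrypt
  key = cipher.random_key   # fresh random key, returned to the caller
  iv = cipher.random_iv     # fresh random initialization vector
  [cipher.update(data) + cipher.final, key, iv]
end

def aes_decrypt(ciphertext, key, iv)
  decipher = OpenSSL::Cipher.new('AES-256-CBC').decrypt
  decipher.key = key
  decipher.iv = iv
  decipher.update(ciphertext) + decipher.final
end
```

Round-tripping a random string through this pair, as the spec above does, is the standard way to cover both methods with one example.&lt;br /&gt;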
&lt;br /&gt;
== Results ==&lt;br /&gt;
=== Testing Results ===&lt;br /&gt;
&lt;br /&gt;
All current examples pass. There are '''5''' examples in the &amp;lt;code&amp;gt;reputation_web_service_controller_spec.rb&amp;lt;/code&amp;gt; file, with no failures.&lt;br /&gt;
&lt;br /&gt;
[[File:E2168-test-passed.png|600px]]&lt;br /&gt;
&lt;br /&gt;
There are '''10''' methods in total in the &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; controller:&lt;br /&gt;
&lt;br /&gt;
# action_allowed?&lt;br /&gt;
# db_query&lt;br /&gt;
# db_query_with_quiz_score&lt;br /&gt;
# json_generator&lt;br /&gt;
# client&lt;br /&gt;
# send_post_request&lt;br /&gt;
# rsa_public_key1&lt;br /&gt;
# rsa_private_key2&lt;br /&gt;
# aes_encrypt&lt;br /&gt;
# aes_decrypt&lt;br /&gt;
&lt;br /&gt;
Our tests cover '''7''' of them: &lt;br /&gt;
&lt;br /&gt;
# action_allowed?&lt;br /&gt;
# db_query&lt;br /&gt;
# json_generator&lt;br /&gt;
# client&lt;br /&gt;
# send_post_request&lt;br /&gt;
# aes_encrypt&lt;br /&gt;
# aes_decrypt&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;rsa_public_key1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;rsa_private_key2&amp;lt;/code&amp;gt; methods require the public key file&lt;br /&gt;
'''''public1.pem''''' and the private key file '''''private1.pem''''', respectively. However, these two files are missing, and therefore we could not test these two methods.&lt;br /&gt;
&lt;br /&gt;
=== Coverage ===&lt;br /&gt;
The test coverage increased from '''0%''' to '''50.31%'''.&lt;br /&gt;
&lt;br /&gt;
[[File:Test-coverage.png|1080px]]&lt;br /&gt;
&lt;br /&gt;
No tests had been implemented for &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; prior to our work, and therefore the previous test coverage was '''0%'''.&lt;br /&gt;
&lt;br /&gt;
From the coverage report generated by the &amp;lt;code&amp;gt;SimpleCov&amp;lt;/code&amp;gt; gem, '''80''' lines are covered by our tests. The remaining '''72''' lines of code are related to the missing public key file and to deprecated functions used to gather data for a paper [https://doi.org/10.1109/FIE.2015.7344292] published in 2015.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;case&amp;lt;/code&amp;gt; statement under the first &amp;lt;code&amp;gt;if&amp;lt;/code&amp;gt; statement in the &amp;lt;code&amp;gt;send_post_request&amp;lt;/code&amp;gt; method of &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; should be removed. The team in Fall 2020 [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2064._Refactor_reputation_web_service_controller.rb#Issues_to_be_fixed] also mentioned this issue.&lt;br /&gt;
&lt;br /&gt;
If the redundant method &amp;lt;code&amp;gt;send_post_request&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; is commented out, the test coverage reaches '''52.63%'''.&lt;br /&gt;
&lt;br /&gt;
[[File:Coverage-more.png|1080px]]&lt;br /&gt;
&lt;br /&gt;
=== Relevant Links ===&lt;br /&gt;
Github Repo : [https://github.com/HenryChen34/expertiza/tree/beta]&lt;br /&gt;
&lt;br /&gt;
Pull request: [https://github.com/expertiza/expertiza/pull/2128]&lt;br /&gt;
&lt;br /&gt;
== Future Tasks ==&lt;br /&gt;
As our team could not obtain the public/private key pair needed to access the reputation web service, we were only able to get to the step prior to sending the JSON to the web service of reputation algorithms. Therefore, the following future steps are required to test the reputation system. &lt;br /&gt;
&lt;br /&gt;
# The current setup is based on assignment_1, which uses Lauw's algorithm to calculate the reputation score. A future testing team needs to set up a similar object set for assignment_2, which uses Hamer's algorithm. &lt;br /&gt;
# In send_post_request, there are references to specific assignments, such as 724, 735, and 756. They were put in to gather data for a paper published in 2015. They are no longer relevant and should be removed. (Need to be replaced by the comment the professor mentioned in the meeting.)&lt;br /&gt;
# Current test cases cover only round-1 reputation scoring, even though assignment_1 and assignment_2 are configured for two rounds of review. Because the reputation web service is not accessible, creating round-2 objects is meaningless without the round-1 reputation scores. Therefore, future test cases need to stub the behavior of fulfilling assignment_questionnaire_1_2 and assignment_questionnaire_2_2 (the second-round questionnaires), assuming the reputation web service is available at that time.  &lt;br /&gt;
# The db_query method violates the DRY principle, as it repeatedly calculates sums for the assignment. Such sum calculations should be handled in assignment.rb.&lt;br /&gt;
&lt;br /&gt;
== Collaborators ==&lt;br /&gt;
Jinku Cui (jcui23)&lt;br /&gt;
&lt;br /&gt;
Henry Chen (hchen34)&lt;br /&gt;
&lt;br /&gt;
Dong Li (dli35)&lt;br /&gt;
&lt;br /&gt;
Zijun Lu (zlu5)&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://doi.org/10.1109/FIE.2015.7344292 Pluggable reputation systems for peer review: A web-service approach]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142409</id>
		<title>CSC/ECE 517 Fall 2021 - E2168. Testing - Reputations</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142409"/>
		<updated>2021-11-30T09:40:32Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* Testing Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;q&amp;gt;Online peer-review systems are now in common use in higher education. They free the instructor and course staff from having to provide personally all the feedback that students receive on their work. However, if we want to assure that all students receive competent feedback, or even use peer-assigned grades, we need a way to judge which peer reviewers are most credible. The solution is to use a reputation system.&amp;lt;/q&amp;gt; &amp;lt;br&amp;gt; The reputation system is meant to provide an objective measure of student-assigned peer-review scores. Students select from a list of tasks to be performed, then prepare their work and submit it to a peer-review system. The work is then reviewed by other students, who offer comments and graded feedback to help the submitters improve their work.&lt;br /&gt;
During the peer-review period, it is important to determine which reviews are more accurate and of higher quality. Reputation is one way to achieve this goal: it is a quantitative measure of how reliable each peer reviewer is.&lt;br /&gt;
Peer reviewers can use Expertiza to score an author. If Expertiza shows a confidence rating for grades based upon the reviewer's reputation, then authors can more easily judge the legitimacy of the peer-assigned score. In addition, the teaching staff can examine the quality of each peer review based on reputation values and, potentially, crowd-source a significant portion of the grading function.&lt;br /&gt;
Currently the reputation system is implemented in Expertiza through a web service, but no tests have been written for it. Thus our goal is to set up assignments and reviews that would produce specific reputation scores, and to test that the correct reputations are in fact being produced.&lt;br /&gt;
&lt;br /&gt;
=== System Design ===&lt;br /&gt;
The diagram below is referenced from project E1625 and gives an overall description of the reputation system.&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Diagram E1625.png|1000px]]&amp;lt;br&amp;gt;&lt;br /&gt;
There are two algorithms intended for use in calculating reputation values for participants. &lt;br /&gt;
&lt;br /&gt;
There is a [https://expertiza.ncsu.edu/reputation_web_service/client web service] (the link is accessible only to instructors) which serves a JSON response containing the reputation value, seeded with the last known reputation value stored in the ''participants'' table. An instructor can specify which algorithm to use for a particular assignment to calculate the confidence rating. &lt;br /&gt;
&lt;br /&gt;
As the [https://docs.google.com/viewer?url=https%3A%2F%2Fwww.researchgate.net%2Fprofile%2FYang_Song36%2Fpublication%2F289528736_Pluggable_Reputation_Systems_for_Peer_Review_a_Web-Service_Approach%2Flinks%2F568ec8d008ae78cc05160aed.pdf%3FinViewer%3D0%26pdfJsDownload%3D0%26origin%3Dpublication_detail paper] on reputation systems observes, “the Hamer-peer algorithm has the lowest maximum absolute bias and the Lauw-peer algorithm has the lowest overall bias. This indicates, from the instructor’s perspective, if there are further assignments of this kind, expert grading may not be necessary.”&lt;br /&gt;
:;Reputation range of Hamer’s algorithm:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value &amp;lt; 0.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.5 and &amp;lt;= 1&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 1 and &amp;lt;= 1.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 1.5 and &amp;lt;= 2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The main difference between the Hamer-peer and the Lauw-peer algorithm is that the Lauw-peer algorithm keeps track of the reviewer's leniency (“bias”), which can be either positive or negative. A positive leniency indicates the reviewer tends to give higher scores than average.  This project determines reputation by subtracting the absolute value of the leniency from 1. Additionally, the range for Hamer’s algorithm is (0,∞) while for Lauw’s algorithm it is [0,1]. &lt;br /&gt;
:;Reputation range of Lauw’s algorithm:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.2 and &amp;lt;= 0.4&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 0.4 and &amp;lt;= 0.6&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 0.6 and &amp;lt;= 0.8&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 0.8&lt;br /&gt;
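The two range tables above, and the leniency-to-reputation conversion, can be sketched in plain Ruby. The helper names below are ours for illustration only and do not exist in Expertiza; they map a raw reputation value to its display color under each algorithm, and derive a Lauw reputation from a leniency value.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # Hypothetical helpers mirroring the two range tables above.&lt;br /&gt;
  def hamer_color(value)&lt;br /&gt;
    if value &amp;lt; 0.5 then 'red'&lt;br /&gt;
    elsif value &amp;lt;= 1.0 then 'yellow'&lt;br /&gt;
    elsif value &amp;lt;= 1.5 then 'orange'&lt;br /&gt;
    elsif value &amp;lt;= 2.0 then 'light green'&lt;br /&gt;
    else 'green'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  def lauw_color(value)&lt;br /&gt;
    if value &amp;lt; 0.2 then 'red'&lt;br /&gt;
    elsif value &amp;lt;= 0.4 then 'yellow'&lt;br /&gt;
    elsif value &amp;lt;= 0.6 then 'orange'&lt;br /&gt;
    elsif value &amp;lt;= 0.8 then 'light green'&lt;br /&gt;
    else 'green'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Lauw reputation = 1 - |leniency|, so a leniency of -0.3 gives 0.7.&lt;br /&gt;
  def lauw_reputation(leniency)&lt;br /&gt;
    1 - leniency.abs&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;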
&lt;br /&gt;
The instructor can choose to show results from Hamer’s algorithm or Lauw’s algorithm. The default algorithm should be Lauw’s algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
* Double and stub an assignment and several submissions to it, under different review rubrics&lt;br /&gt;
* Manually calculate reputation scores based on the paper &amp;quot;Pluggable reputation systems for peer review: A web-service approach&amp;quot;&lt;br /&gt;
* Validate that the reputation scores generated by the reputation system under different review rubrics match the manual computations using Hamer's and Lauw's algorithms&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
*reputation_web_service_controller_spec.rb&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Setup Testing Objects ===&lt;br /&gt;
To test the reputation system, it is crucial to create sample reviews from which reputation scores can be obtained. During the kickoff meeting, our team defined four necessary steps to follow for testing; the corresponding objects are created and configured as discussed below.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Each object below contributes to the success of the subsequent reputation tests. Some fields in each object can be empty or have default values, and some attributes are not relevant to the tests. When implementing the tests, the test scripts need to generate or set fixed values for the relevant fields.&lt;br /&gt;
&lt;br /&gt;
==== Assignment ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[assignments]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
submitter_count = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews = 3;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviewers = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews_allowed = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
rounds_of_reviews = 2;&amp;lt;br&amp;gt;&lt;br /&gt;
reputation_algorithm = lauw/hamer;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: Two assignment objects were created. Assignment_1 used Lauw's algorithm, whereas assignment_2 used Hamer's algorithm.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @assignment_1 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'lauw', id: 1)&lt;br /&gt;
  @assignment_2 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'hamer', id: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questionnaires(Rubrics) ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[assignment_questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
min_question_score = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
max_question_score = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
type = ReviewQuestionnaire;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: We will define the assignments with a ReviewQuestionnaire-type rubric. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @questionnaire_1 = create(:questionnaire, min_question_score: 0, max_question_score: 5, type: 'ReviewQuestionnaire', id: 1)&lt;br /&gt;
  # assignment_questionnaire_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means assignment #i's round-#j review questionnaire.&lt;br /&gt;
  @assignment_questionnaire_1_1 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_1_2 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
  @assignment_questionnaire_2_1 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_2_2 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questions under Questionnaires ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questions]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # question_i_j means question #j in questionnaire #i.&lt;br /&gt;
  @question_1_1 = create(:question, questionnaire_id: @questionnaire_1.id, id: 1)&lt;br /&gt;
  @question_1_2 = create(:question, questionnaire_id: @questionnaire_1.id, id: 2)&lt;br /&gt;
  @question_1_3 = create(:question, questionnaire_id: @questionnaire_1.id, id: 3)&lt;br /&gt;
  @question_1_4 = create(:question, questionnaire_id: @questionnaire_1.id, id: 4)&lt;br /&gt;
  @question_1_5 = create(:question, questionnaire_id: @questionnaire_1.id, id: 5)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Reviewers and Reviewees ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[participants]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[teams]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Reviewers (Participant):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewer_1 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_2 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_3 = create(:participant, can_review: 1)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Reviewees (Teams):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewee_1 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_2 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_3 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Responses ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[response_maps]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[responses]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
reviewed_object_id = &amp;lt;target_assignment&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewer_id = &amp;lt;target_reviewer&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewee_id = &amp;lt;target_reviewee&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
Note: The response map is set up to establish the relationship between the reviewer and the reviewee of an assignment.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Response_maps:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_map_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_map_1_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_2_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_3_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Responses:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_1_1 = create(:response, is_submitted: true, map_id: @response_map_1_1.id)&lt;br /&gt;
  @response_1_2 = create(:response, is_submitted: true, map_id: @response_map_1_2.id)&lt;br /&gt;
  @response_1_3 = create(:response, is_submitted: true, map_id: @response_map_1_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_2_1 = create(:response, is_submitted: true, map_id: @response_map_2_1.id)&lt;br /&gt;
  @response_2_2 = create(:response, is_submitted: true, map_id: @response_map_2_2.id)&lt;br /&gt;
  @response_2_3 = create(:response, is_submitted: true, map_id: @response_map_2_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_3_1 = create(:response, is_submitted: true, map_id: @response_map_3_1.id)&lt;br /&gt;
  @response_3_2 = create(:response, is_submitted: true, map_id: @response_map_3_2.id)&lt;br /&gt;
  @response_3_3 = create(:response, is_submitted: true, map_id: @response_map_3_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Relevant Methods ===&lt;br /&gt;
==== db_query ====&lt;br /&gt;
This is the standard database query method; calling it returns peer-review grades for a given assignment id.&lt;br /&gt;
We test this method in two respects: &lt;br /&gt;
1. whether the grade returned is correct under the specified algorithm;&lt;br /&gt;
2. whether the query itself is correct.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test db_query' do&lt;br /&gt;
    it 'return average score' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [1, 2, 3, 4, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 2)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 4)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      result = ReputationWebServiceController.new.db_query(1, 1, false)&lt;br /&gt;
      # expect to see a data array returned, generated from the given scores&lt;br /&gt;
      expect(result).to eq([[2, 1, 60.0]])&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== json_generator ====&lt;br /&gt;
This method generates a hash representation of the reviews. We test it by calling the method, converting the result to JSON, and checking the output for correctness.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test json_generator' do&lt;br /&gt;
    it 'test 3 reviewer for one reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_2's review for reviewee_1: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_3's review for reviewee_1: [1, 1, 1, 1, 1]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0, &amp;quot;stu3&amp;quot;=&amp;gt;60.0, &amp;quot;stu4&amp;quot;=&amp;gt;20.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    it 'test same reviewer for different reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_1's review for reviewee_2: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0}, &amp;quot;submission2&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;60.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== send_post_request ==== &lt;br /&gt;
This method sends a POST request to peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms to calculate the reputation result, displays the result in the corresponding UI, and updates the given reviewer's reputation. We test this method against the algorithms in the paper: first the resulting reputation value, and second the updated value in the database.&lt;br /&gt;
&lt;br /&gt;
Note that this method is not functioning due to the missing public &amp;amp; private key files for RSA encryption. The algorithm involving &amp;quot;expert grade&amp;quot; was also not implemented. &lt;br /&gt;
Thus, this method could not be properly tested. However, we have provided a template for creating tests in the future.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test send_post_request' do&lt;br /&gt;
    it 'failed because of no public key file' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # choose the hamer algorithm without an expert grade (instructor's given grade)&lt;br /&gt;
      params = {assignment_id: 1, round_num: 1, algorithm: 'hamer', checkbox: {expert_grade: &amp;quot;empty&amp;quot;}}&lt;br /&gt;
      session = {user: build(:instructor, id: 1)}&lt;br /&gt;
&lt;br /&gt;
      expect(true).to eq(true)&lt;br /&gt;
&lt;br /&gt;
      # comment out because send_post_request method request public key file while this file is missing&lt;br /&gt;
      # so at this time send_post_request is not functioning normally&lt;br /&gt;
      # if it functions correctly, it will update the reviewer's reputation score according to the selected reputation algorithm.&lt;br /&gt;
&lt;br /&gt;
      # get :send_post_request, params, session&lt;br /&gt;
      # expect(response).to redirect_to '/reputation_web_service/client'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
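Were the key files present, the outgoing request could still be exercised in isolation. The sketch below is illustrative only, not existing Expertiza code: it builds (but does not send) a JSON POST to the service endpoint named above, omitting the RSA/AES encryption step that the real send_post_request performs; the body shape follows the json_generator output.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  require 'net/http'&lt;br /&gt;
  require 'json'&lt;br /&gt;
  require 'uri'&lt;br /&gt;
&lt;br /&gt;
  # Illustrative sketch: build the POST request without sending it.&lt;br /&gt;
  uri = URI('https://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms')&lt;br /&gt;
  body = { 'submission1' =&amp;gt; { 'stu2' =&amp;gt; 100.0, 'stu3' =&amp;gt; 60.0 } }&lt;br /&gt;
&lt;br /&gt;
  req = Net::HTTP::Post.new(uri, 'Content-Type' =&amp;gt; 'application/json')&lt;br /&gt;
  req.body = body.to_json&lt;br /&gt;
  # In the real method the body would be AES-encrypted (and the AES key&lt;br /&gt;
  # RSA-encrypted) before sending:&lt;br /&gt;
  # res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;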
&lt;br /&gt;
== Results ==&lt;br /&gt;
=== Testing Results ===&lt;br /&gt;
&lt;br /&gt;
All current examples pass. There are '''5''' examples in the &amp;lt;code&amp;gt;reputation_web_service_controller_spec.rb&amp;lt;/code&amp;gt; file, with no failures.&lt;br /&gt;
&lt;br /&gt;
[[File:E2168-test-passed.png|600px]]&lt;br /&gt;
&lt;br /&gt;
There are '''10''' methods in total in the &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; controller:&lt;br /&gt;
&lt;br /&gt;
# action_allowed?&lt;br /&gt;
# db_query&lt;br /&gt;
# db_query_with_quiz_score&lt;br /&gt;
# json_generator&lt;br /&gt;
# client&lt;br /&gt;
# send_post_request&lt;br /&gt;
# rsa_public_key1&lt;br /&gt;
# rsa_private_key2&lt;br /&gt;
# aes_encrypt&lt;br /&gt;
# aes_decrypt&lt;br /&gt;
&lt;br /&gt;
Our test covers '''7''' of them: &lt;br /&gt;
&lt;br /&gt;
# action_allowed?&lt;br /&gt;
# db_query&lt;br /&gt;
# json_generator&lt;br /&gt;
# client&lt;br /&gt;
# send_post_request&lt;br /&gt;
# aes_encrypt&lt;br /&gt;
# aes_decrypt&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;rsa_public_key1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;rsa_private_key2&amp;lt;/code&amp;gt; methods require the public key file&lt;br /&gt;
'''''public1.pem''''' and the private key file '''''private1.pem'''''. However, these two files are missing, and therefore we could not test these two methods.&lt;br /&gt;
&lt;br /&gt;
=== Coverage ===&lt;br /&gt;
The test coverage increases from '''0%''' to '''50.31%'''.&lt;br /&gt;
&lt;br /&gt;
[[File:Test-coverage.png|1080px]]&lt;br /&gt;
&lt;br /&gt;
No tests existed for &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; prior to our work, so the previous test coverage was '''0%'''.&lt;br /&gt;
&lt;br /&gt;
From the coverage report generated by the &amp;lt;code&amp;gt;SimpleCov&amp;lt;/code&amp;gt; gem, '''80''' lines are covered by our tests. The remaining '''72''' lines of code are related to the public key files and to deprecated functions for gathering data for a paper [https://doi.org/10.1109/FIE.2015.7344292] published in 2015.&lt;br /&gt;
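For reference, SimpleCov collects these numbers when it is started at the very top of the spec helper, before the application code is loaded. A minimal configuration might look like the following (the filter is our own choice, not a project requirement):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # spec/rails_helper.rb -- must run before the app code is required&lt;br /&gt;
  require 'simplecov'&lt;br /&gt;
  SimpleCov.start 'rails' do&lt;br /&gt;
    add_filter '/spec/' # do not count the tests themselves&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;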
&lt;br /&gt;
The &amp;lt;code&amp;gt;case&amp;lt;/code&amp;gt; statement under the first &amp;lt;code&amp;gt;if&amp;lt;/code&amp;gt; statement in the &amp;lt;code&amp;gt;send_post_request&amp;lt;/code&amp;gt; method of &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; should be removed. The Fall 2020 team [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2064._Refactor_reputation_web_service_controller.rb#Issues_to_be_fixed] also mentioned these issues.&lt;br /&gt;
&lt;br /&gt;
If the redundant method &amp;lt;code&amp;gt;send_post_request&amp;lt;/code&amp;gt; in the controller &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; is commented out, the test coverage reaches '''52.63%'''.&lt;br /&gt;
&lt;br /&gt;
[[File:Coverage-more.png|1080px]]&lt;br /&gt;
&lt;br /&gt;
=== Relevant Links ===&lt;br /&gt;
Github Repo : [https://github.com/HenryChen34/expertiza/tree/beta]&lt;br /&gt;
&lt;br /&gt;
Pull request: [https://github.com/expertiza/expertiza/pull/2128]&lt;br /&gt;
&lt;br /&gt;
== Future Tasks ==&lt;br /&gt;
As our team could not obtain the public/private key pair needed to access the reputation web service, we were only able to get to the step just before sending the JSON to the reputation-algorithm web service. Therefore, future steps are required to test the reputation system. &lt;br /&gt;
&lt;br /&gt;
# The current setup is based on assignment_1, which uses Lauw's algorithm to calculate reputation scores. Future testing teams need to set up a similar object set for assignment_2, which uses Hamer's algorithm. &lt;br /&gt;
# In send_post_request, there are references to specific assignments, such as 724, 735, and 756. They were put in to gather data for a paper published in 2015; they are no longer relevant and should be removed.&lt;br /&gt;
# The current test cases cover only round-1 reputation scoring, even though assignment_1 and assignment_2 are configured for two rounds of review. Because the reputation web service was inaccessible, creating round-2 objects would be meaningless without round-1 reputation scores. Future test cases should therefore stub the fulfillment of assignment_questionnaire_1_2 and assignment_questionnaire_2_2 (the second-round questionnaires), assuming the reputation web service is available by then. &lt;br /&gt;
# db_query violates the DRY principle, as it repeatedly calculates sums for the assignment. Such sum calculation should be handled in assignment.rb.&lt;br /&gt;
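The last point could be addressed by computing review sums in one place. The plain-Ruby sketch below illustrates the idea only; the class and method names are ours and intentionally simplified (a real fix would live in app/models/assignment.rb). With a 5-point rubric it reproduces the 60.0 seen in the db_query example above for the scores [1, 2, 3, 4, 5].&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # Hypothetical DRY extraction: one home for the repeated sum logic.&lt;br /&gt;
  class ReviewScoring&lt;br /&gt;
    MAX_QUESTION_SCORE = 5&lt;br /&gt;
&lt;br /&gt;
    # Sum of all answer scores in a single review.&lt;br /&gt;
    def self.review_sum(answers)&lt;br /&gt;
      answers.sum&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    # Review score as a percentage of the maximum possible.&lt;br /&gt;
    def self.review_percentage(answers)&lt;br /&gt;
      100.0 * review_sum(answers) / (answers.size * MAX_QUESTION_SCORE)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  ReviewScoring.review_percentage([1, 2, 3, 4, 5]) # =&amp;gt; 60.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;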
&lt;br /&gt;
== Collaborators ==&lt;br /&gt;
Jinku Cui (jcui23)&lt;br /&gt;
&lt;br /&gt;
Henry Chen (hchen34)&lt;br /&gt;
&lt;br /&gt;
Dong Li (dli35)&lt;br /&gt;
&lt;br /&gt;
Zijun Lu (zlu5)&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://doi.org/10.1109/FIE.2015.7344292 Pluggable reputation systems for peer review: A web-service approach]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:E2168-test-passed.png&amp;diff=142406</id>
		<title>File:E2168-test-passed.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:E2168-test-passed.png&amp;diff=142406"/>
		<updated>2021-11-30T09:37:29Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: Rspec running result for the Fall 2021 - E2168. Testing - Reputations&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Rspec running result for the Fall 2021 - E2168. Testing - Reputations&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142405</id>
		<title>CSC/ECE 517 Fall 2021 - E2168. Testing - Reputations</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142405"/>
		<updated>2021-11-30T09:34:44Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* Coverage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;q&amp;gt;Online peer-review systems are now in common use in higher education. They free the instructor and course staff from having to provide personally all the feedback that students receive on their work. However, if we want to assure that all students receive competent feedback, or even use peer-assigned grades, we need a way to judge which peer reviewers are most credible. The solution is to use a reputation system.&amp;lt;/q&amp;gt; &amp;lt;br&amp;gt; The reputation system is meant to provide objective value to student assigned peer review scores. Students select from a list of tasks to be performed and then prepare their work and submit it to a peer-review system. The work is then reviewed by other students who offer comments/graded feedback to help the submitters improve their work.&lt;br /&gt;
During the peer-review period it is important to determine which reviews are more accurate and of higher quality. Reputation is one way to achieve this goal; it is a quantitative measure for judging which peer reviewers are more reliable.&lt;br /&gt;
Peer reviewers can use Expertiza to score an author. If Expertiza shows a confidence rating for grades based upon the reviewer's reputation, then authors can more easily determine the legitimacy of the peer-assigned score. In addition, the teaching staff can examine the quality of each peer review based on reputation values and, potentially, crowd-source a significant portion of the grading function.&lt;br /&gt;
Currently the reputation system is implemented in Expertiza through a web service, but no tests are written for it. Thus our goal is to set up assignments and reviews that would produce specific reputation scores, and to test that the correct reputations are in fact being produced.&lt;br /&gt;
&lt;br /&gt;
=== System Design ===&lt;br /&gt;
The diagram below is referenced from project E1625 and gives an overall description of the reputation system.&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Diagram E1625.png|1000px]]&amp;lt;br&amp;gt;&lt;br /&gt;
There are two algorithms intended for use in calculation of the reputation values for participants. &lt;br /&gt;
&lt;br /&gt;
There is a [https://expertiza.ncsu.edu/reputation_web_service/client web-service] (the link accessible only to the instructors) available which serves a JSON response containing the reputation value based on the seed provided in the form of the last known reputation value which we store in the ''participants'' table. An instructor can specify which algorithm to use for a particular assignment to calculate the confidence rating. &lt;br /&gt;
&lt;br /&gt;
As the [https://docs.google.com/viewer?url=https%3A%2F%2Fwww.researchgate.net%2Fprofile%2FYang_Song36%2Fpublication%2F289528736_Pluggable_Reputation_Systems_for_Peer_Review_a_Web-Service_Approach%2Flinks%2F568ec8d008ae78cc05160aed.pdf%3FinViewer%3D0%26pdfJsDownload%3D0%26origin%3Dpublication_detail paper] on reputation systems observes, “the Hamer-peer algorithm has the lowest maximum absolute bias and the Lauw-peer algorithm has the lowest overall bias. This indicates, from the instructor’s perspective, if there are further assignments of this kind, expert grading may not be necessary.”&lt;br /&gt;
:;Reputation range of Hamer’s algorithm:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.5 and &amp;lt;= 1&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 1 and &amp;lt;= 1.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 1.5 and &amp;lt;= 2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The main difference between the Hamer-peer and the Lauw-peer algorithm is that the Lauw-peer algorithm keeps track of the reviewer's leniency (“bias”), which can be either positive or negative. A positive leniency indicates the reviewer tends to give higher scores than average.  This project determines reputation by subtracting the absolute value of the leniency from 1. Additionally, the range for Hamer’s algorithm is (0,∞) while for Lauw’s algorithm it is [0,1]. &lt;br /&gt;
:;Reputation range of Lauw’s algorithm:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.2 and &amp;lt;= 0.4&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 0.4 and &amp;lt;= 0.6&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 0.6 and &amp;lt;= 0.8&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 0.8&lt;br /&gt;
&lt;br /&gt;
The instructor can choose to show results from Hamer’s algorithm or Lauw’s algorithm. The default algorithm should be Lauw’s algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
* Double and stub an assignment and several submissions to it, under different review rubrics&lt;br /&gt;
* Manually calculate reputation scores based on the paper &amp;quot;Pluggable reputation systems for peer review: A web-service approach&amp;quot;&lt;br /&gt;
* Validate that the reputation scores generated by the reputation system under different review rubrics match the manual computations using Hamer's and Lauw's algorithms&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
*reputation_web_service_controller_spec&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Setup Testing Objects ===&lt;br /&gt;
To test the reputation system, it is crucial to create sample reviews from which reputation scores can be obtained. During the kickoff meeting, our team defined four necessary setup steps for testing; the corresponding objects are created and configured as discussed below.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Each of these objects contributes to the reputation tests that follow. Some fields in each object can be empty or keep default values, and some attributes are not relevant to the test; the test scripts only need to generate or set fixed values for the relevant fields.&lt;br /&gt;
&lt;br /&gt;
==== Assignment ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[assignments]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
submitter_count = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviewers = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews_allowed = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
rounds_of_reviews = 2;&amp;lt;br&amp;gt;&lt;br /&gt;
reputation_algorithm = lauw/hamer;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: Two assignment objects were created. assignment_1 uses Lauw's algorithm, whereas assignment_2 uses Hamer's algorithm.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @assignment_1 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'lauw', id: 1)&lt;br /&gt;
  @assignment_2 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'hamer', id: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questionnaires(Rubrics) ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[assignment_questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
min_question_score = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
max_question_score = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
type = ReviewQuestionnaire;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: We will define the assignments with a ReviewQuestionnaire-type rubric. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @questionnaire_1 = create(:questionnaire, min_question_score: 0, max_question_score: 5, type: 'ReviewQuestionnaire', id: 1)&lt;br /&gt;
  # assignment_questionnaire_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means the questionnaire used in round #j of assignment #i.&lt;br /&gt;
  @assignment_questionnaire_1_1 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_1_2 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
  @assignment_questionnaire_2_1 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_2_2 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questions under Questionnaires ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questions]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # question_i_j means question #j in questionnaire #i.&lt;br /&gt;
  @question_1_1 = create(:question, questionnaire_id: @questionnaire_1.id, id: 1)&lt;br /&gt;
  @question_1_2 = create(:question, questionnaire_id: @questionnaire_1.id, id: 2)&lt;br /&gt;
  @question_1_3 = create(:question, questionnaire_id: @questionnaire_1.id, id: 3)&lt;br /&gt;
  @question_1_4 = create(:question, questionnaire_id: @questionnaire_1.id, id: 4)&lt;br /&gt;
  @question_1_5 = create(:question, questionnaire_id: @questionnaire_1.id, id: 5)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Reviewers and Reviewees ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[participants]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[teams]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Reviewers (Participant):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewer_1 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_2 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_3 = create(:participant, can_review: 1)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Reviewees (Teams):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewee_1 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_2 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_3 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Responses ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[response_maps]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[responses]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
reviewed_object_id = &amp;lt;target_assignment&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewer_id = &amp;lt;target_reviewer&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewee_id = &amp;lt;target_reviewee&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
Note: The response maps are set up to record the reviewer/reviewee relationships for an assignment.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Response_maps:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_map_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_map_1_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_2_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_3_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Responses:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_1_1 = create(:response, is_submitted: true, map_id: @response_map_1_1.id)&lt;br /&gt;
  @response_1_2 = create(:response, is_submitted: true, map_id: @response_map_1_2.id)&lt;br /&gt;
  @response_1_3 = create(:response, is_submitted: true, map_id: @response_map_1_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_2_1 = create(:response, is_submitted: true, map_id: @response_map_2_1.id)&lt;br /&gt;
  @response_2_2 = create(:response, is_submitted: true, map_id: @response_map_2_2.id)&lt;br /&gt;
  @response_2_3 = create(:response, is_submitted: true, map_id: @response_map_2_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_3_1 = create(:response, is_submitted: true, map_id: @response_map_3_1.id)&lt;br /&gt;
  @response_3_2 = create(:response, is_submitted: true, map_id: @response_map_3_2.id)&lt;br /&gt;
  @response_3_3 = create(:response, is_submitted: true, map_id: @response_map_3_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Relevant Methods ===&lt;br /&gt;
==== db_query ====&lt;br /&gt;
This is the standard database query method; calling it returns the peer-review grades for a given assignment id.&lt;br /&gt;
We will test this method in two respects: &lt;br /&gt;
1. whether the grades returned are correct for the specified algorithm;&lt;br /&gt;
2. whether the query itself is correct.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test db_query' do&lt;br /&gt;
    it 'return average score' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [1, 2, 3, 4, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 2)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 4)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      result = ReputationWebServiceController.new.db_query(1, 1, false)&lt;br /&gt;
      #expect to see a data array return generated by the score given.&lt;br /&gt;
      expect(result).to eq([[2, 1, 60.0]])&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
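The expected 60.0 above can be re-derived by hand: five answers of [1, 2, 3, 4, 5] against a rubric with max_question_score 5 give 15 out of 25 possible points, i.e. 60%. A minimal sketch of that arithmetic (illustrative only, not the controller's actual query):&lt;br /&gt;

```ruby
# Re-computing the expected grade in the test above: the percentage is the
# sum of the answers over the maximum attainable score.
answers = [1, 2, 3, 4, 5]
max_question_score = 5
grade = answers.sum * 100.0 / (answers.size * max_question_score)
puts grade # 60.0
```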
&lt;br /&gt;
==== json_generator ====&lt;br /&gt;
This method generates a hash representation of the reviews. We test it by calling it, converting the result to JSON, and checking the output for correctness.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test json_generator' do&lt;br /&gt;
    it 'test 3 reviewer for one reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_2's review for reviewee_1: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_3's review for reviewee_1: [1, 1, 1, 1, 1]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0, &amp;quot;stu3&amp;quot;=&amp;gt;60.0, &amp;quot;stu4&amp;quot;=&amp;gt;20.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    it 'test same reviewer for different reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_1's review for reviewee_2: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0}, &amp;quot;submission2&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;60.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
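The hash shape expected from json_generator can be sketched independently of the controller. Below, the raw answer lists are hardcoded to match the first example above (the &amp;quot;submission1&amp;quot;/&amp;quot;stu2&amp;quot; names follow the expectations; the builder itself is illustrative, not Expertiza's implementation):&lt;br /&gt;

```ruby
# Build {submission => {student => percentage}} from raw answer lists,
# mirroring the expected json_generator output in the first test above.
raw_answers = {
  'submission1' => {
    'stu2' => [5, 5, 5, 5, 5],
    'stu3' => [3, 3, 3, 3, 3],
    'stu4' => [1, 1, 1, 1, 1]
  }
}
max_question_score = 5
result = raw_answers.transform_values do |reviews|
  reviews.transform_values { |a| a.sum * 100.0 / (a.size * max_question_score) }
end
puts result # {"submission1"=>{"stu2"=>100.0, "stu3"=>60.0, "stu4"=>20.0}}
```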
&lt;br /&gt;
==== send_post_request ==== &lt;br /&gt;
This method sends a POST request to peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms to calculate the reputation results, shows them in the corresponding UI, and updates each reviewer's reputation. We would test this method against the algorithms in the paper: first the returned reputation values, then the updated values in the database.&lt;br /&gt;
&lt;br /&gt;
Note that this method is not functioning due to the missing public &amp;amp; private key files for RSA encryption. The algorithm involving the &amp;quot;expert grade&amp;quot; was also not implemented. &lt;br /&gt;
Thus, this method could not be properly tested. However, we have provided a template for creating such tests in the future.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test send_post_request' do&lt;br /&gt;
    it 'failed because of no public key file' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # choose the hammer algorithm without an expert grade (instructor-given grade)&lt;br /&gt;
      params = {assignment_id: 1, round_num: 1, algorithm: 'hammer', checkbox: {expert_grade: &amp;quot;empty&amp;quot;}}&lt;br /&gt;
      session = {user: build(:instructor, id: 1)}&lt;br /&gt;
&lt;br /&gt;
      expect(true).to eq(true)&lt;br /&gt;
&lt;br /&gt;
      # comment out because send_post_request method request public key file while this file is missing&lt;br /&gt;
      # so at this time send_post_request is not functioning normally&lt;br /&gt;
      # if it functions correctly, it will update the reviewer's reputation score according to the selected reputation algorithm.&lt;br /&gt;
&lt;br /&gt;
      # get :send_post_request, params, session&lt;br /&gt;
      # expect(response).to redirect_to '/reputation_web_service/client'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
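Although the web service cannot be reached, the reputation values it would return can still be sanity-checked offline. The following is a simplified, self-contained sketch of one Hamer-style iteration, loosely following the scheme described in the paper (consensus grade as a weighted mean; reputation inversely proportional to a reviewer's mean squared deviation). It is an illustration under those stated assumptions, not the web service's actual implementation:&lt;br /&gt;

```ruby
# One simplified Hamer-style iteration (illustrative assumptions, see above).
scores = {
  'stu2' => { 'sub1' => 100.0, 'sub2' => 60.0 },
  'stu3' => { 'sub1' => 90.0,  'sub2' => 50.0 },
  'stu4' => { 'sub1' => 50.0,  'sub2' => 90.0 } # outlier reviewer
}
weights = Hash.new(1.0) # start with equal weights

# Consensus grade per submission: weighted mean of its reviewers' scores.
submissions = scores.values.flat_map(&:keys).uniq
consensus = submissions.to_h do |sub|
  graders = scores.select { |_, marks| marks.key?(sub) }
  total_w = graders.keys.sum { |r| weights[r] }
  [sub, graders.sum { |r, marks| weights[r] * marks[sub] } / total_w]
end

# Mean squared deviation of each reviewer from the consensus.
deviation = scores.transform_values do |marks|
  marks.sum { |sub, v| (v - consensus[sub])**2 } / marks.size
end

# Reputation: average deviation over this reviewer's deviation, so values
# above 1 mean "closer to consensus than average" (range (0, ∞)).
avg_dev = deviation.values.sum / deviation.size
reputation = deviation.transform_values { |d| avg_dev / d }
puts reputation
```

The outlier reviewer (stu4) ends up with a reputation below 1, while reviewers near the consensus score above 1.&lt;br /&gt;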
&lt;br /&gt;
== Results ==&lt;br /&gt;
=== Testing Results ===&lt;br /&gt;
&lt;br /&gt;
All of our tests pass.&lt;br /&gt;
&lt;br /&gt;
There are '''10''' methods in total in the &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; controller:&lt;br /&gt;
&lt;br /&gt;
# action_allowed?&lt;br /&gt;
# db_query&lt;br /&gt;
# db_query_with_quiz_score&lt;br /&gt;
# json_generator&lt;br /&gt;
# client&lt;br /&gt;
# send_post_request&lt;br /&gt;
# rsa_public_key1&lt;br /&gt;
# rsa_private_key2&lt;br /&gt;
# aes_encrypt&lt;br /&gt;
# aes_decrypt&lt;br /&gt;
&lt;br /&gt;
Our tests cover '''7''' of them: &lt;br /&gt;
&lt;br /&gt;
# action_allowed?&lt;br /&gt;
# db_query&lt;br /&gt;
# json_generator&lt;br /&gt;
# client&lt;br /&gt;
# send_post_request&lt;br /&gt;
# aes_encrypt&lt;br /&gt;
# aes_decrypt&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;rsa_public_key1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;rsa_private_key2&amp;lt;/code&amp;gt; methods require the public key file&lt;br /&gt;
'''''public1.pem''''' and the private key file '''''private1.pem'''''. However, these two files are missing, so we could not test these two methods.&lt;br /&gt;
&lt;br /&gt;
=== Coverage ===&lt;br /&gt;
Test coverage increased from '''0%''' to '''50.31%'''.&lt;br /&gt;
&lt;br /&gt;
[[File:Test-coverage.png|1080px]]&lt;br /&gt;
&lt;br /&gt;
No tests existed for &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; prior to our work, so the previous test coverage was '''0%'''.&lt;br /&gt;
&lt;br /&gt;
From the coverage report generated by the &amp;lt;code&amp;gt;SimpleCov&amp;lt;/code&amp;gt; gem, our tests cover '''80''' lines. The remaining '''72''' lines of code relate to the missing public key file and to deprecated functions used to gather data for a paper [https://doi.org/10.1109/FIE.2015.7344292] published in 2015.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;case&amp;lt;/code&amp;gt; statement under the first &amp;lt;code&amp;gt;if&amp;lt;/code&amp;gt; statement in the &amp;lt;code&amp;gt;send_post_request&amp;lt;/code&amp;gt; method of &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; should be removed. The Fall 2020 team [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2064._Refactor_reputation_web_service_controller.rb#Issues_to_be_fixed] also mentioned these issues.&lt;br /&gt;
&lt;br /&gt;
If the redundant &amp;lt;code&amp;gt;send_post_request&amp;lt;/code&amp;gt; method in the controller &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; is commented out, the test coverage reaches '''52.63%'''.&lt;br /&gt;
&lt;br /&gt;
[[File:Coverage-more.png|1080px]]&lt;br /&gt;
&lt;br /&gt;
=== Relevant Links ===&lt;br /&gt;
Github Repo : [https://github.com/HenryChen34/expertiza/tree/beta]&lt;br /&gt;
&lt;br /&gt;
Pull request: [https://github.com/expertiza/expertiza/pull/2128]&lt;br /&gt;
&lt;br /&gt;
== Future Tasks ==&lt;br /&gt;
As our team could not obtain the public/private key pair needed to access the reputation web service, we were only able to reach the step just before sending the JSON to the reputation-algorithm web service. Therefore, the following future steps are required to fully test the reputation system. &lt;br /&gt;
&lt;br /&gt;
# The current setup is based on assignment_1, which uses Lauw's algorithm to calculate reputation scores. A future testing team needs to set up a similar object set for assignment_2, which uses Hamer's algorithm. &lt;br /&gt;
# In send_post_request, there are references to specific assignments, such as 724, 735, and 756. They were added to gather data for a paper published in 2015; they are no longer relevant and should be removed (or replaced by an explanatory comment, as the professor suggested in our meeting).&lt;br /&gt;
# Current test cases cover only round-1 reputation scoring, even though assignment_1 and assignment_2 are both configured for two rounds of review. Because the reputation web service is inaccessible, creating round-2 objects is pointless without round-1 reputation scores. Once the web service becomes available, future test cases should stub the completion of assignment_questionnaire_1_2 and assignment_questionnaire_2_2 (the second-round questionnaires).  &lt;br /&gt;
# The db_query method violates the DRY principle, as it repeatedly calculates score sums for the assignment. Such sum calculations should be handled in assignment.rb.&lt;br /&gt;
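To illustrate the last point, the repeated sum could live in a single model method. The class and method names below are hypothetical sketches, not existing Expertiza code:&lt;br /&gt;

```ruby
# Hypothetical sketch: compute each response's score sum once in the model,
# so db_query (and any other caller) no longer repeats the loop.
class AssignmentScores
  def initialize(answers_by_response)
    @answers_by_response = answers_by_response # {response_id => [answers]}
  end

  def score_sums
    @answers_by_response.transform_values(&:sum)
  end
end

scores = AssignmentScores.new('response_1_1' => [1, 2, 3, 4, 5])
puts scores.score_sums # {"response_1_1"=>15}
```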
&lt;br /&gt;
== Collaborators ==&lt;br /&gt;
Jinku Cui (jcui23)&lt;br /&gt;
&lt;br /&gt;
Henry Chen (hchen34)&lt;br /&gt;
&lt;br /&gt;
Dong Li (dli35)&lt;br /&gt;
&lt;br /&gt;
Zijun Lu (zlu5)&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://doi.org/10.1109/FIE.2015.7344292 Pluggable reputation systems for peer review: A web-service approach]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Test-coverage.png&amp;diff=142404</id>
		<title>File:Test-coverage.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Test-coverage.png&amp;diff=142404"/>
		<updated>2021-11-30T09:30:42Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: Test coverage without commenting redundant code for  CSC/ECE 517 Fall 2021 - E2168. Testing - Reputations&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Test coverage without commenting redundant code for  CSC/ECE 517 Fall 2021 - E2168. Testing - Reputations&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Coverage-more.png&amp;diff=142402</id>
		<title>File:Coverage-more.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Coverage-more.png&amp;diff=142402"/>
		<updated>2021-11-30T09:27:25Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: Highest Test coverage report for CSC/ECE 517 Fall 2021 - E2168. Testing - Reputations.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Highest Test coverage report for CSC/ECE 517 Fall 2021 - E2168. Testing - Reputations.&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142401</id>
		<title>CSC/ECE 517 Fall 2021 - E2168. Testing - Reputations</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142401"/>
		<updated>2021-11-30T09:25:52Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* Coverage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;q&amp;gt;Online peer-review systems are now in common use in higher education. They free the instructor and course staff from having to provide personally all the feedback that students receive on their work. However, if we want to assure that all students receive competent feedback, or even use peer-assigned grades, we need a way to judge which peer reviewers are most credible. The solution is to use a reputation system.&amp;lt;/q&amp;gt; &amp;lt;br&amp;gt; The reputation system is meant to assign objective value to student-assigned peer-review scores. Students select from a list of tasks to be performed, prepare their work, and submit it to a peer-review system. The work is then reviewed by other students, who offer comments and graded feedback to help the submitters improve their work.&lt;br /&gt;
During the peer-review period it is important to determine which reviews are more accurate and of higher quality. Reputation is one way to achieve this goal: it is a quantitative measure of which peer reviewers are more reliable.&lt;br /&gt;
Peer reviewers can use Expertiza to score an author. If Expertiza shows a confidence rating for grades based upon the reviewer's reputation, then authors can more easily judge the legitimacy of the peer-assigned score. In addition, the teaching staff can examine the quality of each peer review based on reputation values and, potentially, crowd-source a significant portion of the grading function.&lt;br /&gt;
Currently the reputation system is implemented in Expertiza through a web service, but no tests were written for it. Thus, our goal is to set up assignments and reviews that produce specific reputation scores, and to test that the correct reputations are in fact being produced.&lt;br /&gt;
&lt;br /&gt;
=== System Design ===&lt;br /&gt;
The diagram below is referenced from project E1625 and gives an overall description of the reputation system.&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Diagram E1625.png|1000px]]&amp;lt;br&amp;gt;&lt;br /&gt;
There are two algorithms intended for use in calculating the reputation values of participants. &lt;br /&gt;
&lt;br /&gt;
There is a [https://expertiza.ncsu.edu/reputation_web_service/client web-service] available (the link is accessible only to instructors) which serves a JSON response containing the reputation value, based on a seed provided in the form of the last known reputation value stored in the ''participants'' table. An instructor can specify which algorithm to use for a particular assignment to calculate the confidence rating. &lt;br /&gt;
&lt;br /&gt;
As the [https://docs.google.com/viewer?url=https%3A%2F%2Fwww.researchgate.net%2Fprofile%2FYang_Song36%2Fpublication%2F289528736_Pluggable_Reputation_Systems_for_Peer_Review_a_Web-Service_Approach%2Flinks%2F568ec8d008ae78cc05160aed.pdf%3FinViewer%3D0%26pdfJsDownload%3D0%26origin%3Dpublication_detail paper] on the reputation system observes, “the Hamer-peer algorithm has the lowest maximum absolute bias and the Lauw-peer algorithm has the lowest overall bias. This indicates, from the instructor’s perspective, if there are further assignments of this kind, expert grading may not be necessary.”&lt;br /&gt;
:;Reputation ranges for Hamer’s algorithm:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value &amp;lt; 0.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.5 and &amp;lt;= 1&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 1 and &amp;lt;= 1.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 1.5 and &amp;lt;= 2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The main difference between the Hamer-peer and Lauw-peer algorithms is that the Lauw-peer algorithm also tracks the reviewer's leniency (“bias”), which can be either positive or negative; a positive leniency indicates that the reviewer tends to give higher scores than average. This project derives reputation by subtracting the absolute value of the leniency from 1. Additionally, the range of Hamer’s algorithm is (0,∞), while that of Lauw’s algorithm is [0,1]. &lt;br /&gt;
:;Reputation ranges for Lauw’s algorithm:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.2 and &amp;lt;= 0.4&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 0.4 and &amp;lt;= 0.6&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 0.6 and &amp;lt;= 0.8&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 0.8&lt;br /&gt;
&lt;br /&gt;
The instructor can choose to show results from Hamer’s algorithm or Lauw’s algorithm. The default algorithm should be Lauw’s algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
* Create doubles and stubs for an assignment and several submissions to it, under different review rubrics&lt;br /&gt;
* Manually calculate reputation scores based on the paper &amp;quot;Pluggable reputation systems for peer review: A web-service approach&amp;quot;&lt;br /&gt;
* Validate that the reputation scores generated by reputation management under different review rubrics match the manual computations using Hamer's and Lauw's algorithms.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
*reputation_web_service_controller_spec&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Setup Testing Objects ===&lt;br /&gt;
To test the reputation system, it is crucial to create sample reviews from which reputation scores can be obtained. During the kickoff meeting, our team defined four necessary setup steps for testing; the corresponding objects are created and configured as discussed below.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Each of these objects contributes to the reputation tests that follow. Some fields in each object can be empty or keep default values, and some attributes are not relevant to the test; the test scripts only need to generate or set fixed values for the relevant fields.&lt;br /&gt;
&lt;br /&gt;
==== Assignment ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[assignments]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
submitter_count = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviewers = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews_allowed = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
rounds_of_reviews = 2;&amp;lt;br&amp;gt;&lt;br /&gt;
reputation_algorithm = lauw/hamer;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: Two assignment objects were created. assignment_1 uses Lauw's algorithm, whereas assignment_2 uses Hamer's algorithm.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @assignment_1 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'lauw', id: 1)&lt;br /&gt;
  @assignment_2 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'hamer', id: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questionnaires(Rubrics) ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[assignment_questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
min_question_score = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
max_question_score = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
type = ReviewQuestionnaire;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: We will define the assignment with a ReviewQuestionnaire-type rubric. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @questionnaire_1 = create(:questionnaire, min_question_score: 0, max_question_score: 5, type: 'ReviewQuestionnaire', id: 1)&lt;br /&gt;
  # assignment_questionnaire_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means assignment #i's #j-th round of review.&lt;br /&gt;
  @assignment_questionnaire_1_1 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_1_2 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
  @assignment_questionnaire_2_1 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_2_2 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questions under Questionnaires ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questions]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # question_i_j means question #j in questionnaire #i.&lt;br /&gt;
  @question_1_1 = create(:question, questionnaire_id: @questionnaire_1.id, id: 1)&lt;br /&gt;
  @question_1_2 = create(:question, questionnaire_id: @questionnaire_1.id, id: 2)&lt;br /&gt;
  @question_1_3 = create(:question, questionnaire_id: @questionnaire_1.id, id: 3)&lt;br /&gt;
  @question_1_4 = create(:question, questionnaire_id: @questionnaire_1.id, id: 4)&lt;br /&gt;
  @question_1_5 = create(:question, questionnaire_id: @questionnaire_1.id, id: 5)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Reviewers and Reviewees ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[participants]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[teams]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Reviewers (Participant):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewer_1 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_2 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_3 = create(:participant, can_review: 1)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Reviewees (Teams):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewee_1 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_2 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_3 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Responses ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[response_maps]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[responses]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
reviewed_object_id = &amp;lt;target_assignment&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewer_id = &amp;lt;target_reviewer&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewee_id = &amp;lt;target_reviewee&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
Note: The response map is set up to determine the relationship between the reviewer and reviewee of an assignment.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Response_maps:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_map_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_map_1_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_2_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_3_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Responses:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_1_1 = create(:response, is_submitted: true, map_id: @response_map_1_1.id)&lt;br /&gt;
  @response_1_2 = create(:response, is_submitted: true, map_id: @response_map_1_2.id)&lt;br /&gt;
  @response_1_3 = create(:response, is_submitted: true, map_id: @response_map_1_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_2_1 = create(:response, is_submitted: true, map_id: @response_map_2_1.id)&lt;br /&gt;
  @response_2_2 = create(:response, is_submitted: true, map_id: @response_map_2_2.id)&lt;br /&gt;
  @response_2_3 = create(:response, is_submitted: true, map_id: @response_map_2_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_3_1 = create(:response, is_submitted: true, map_id: @response_map_3_1.id)&lt;br /&gt;
  @response_3_2 = create(:response, is_submitted: true, map_id: @response_map_3_2.id)&lt;br /&gt;
  @response_3_3 = create(:response, is_submitted: true, map_id: @response_map_3_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Relevant Methods ===&lt;br /&gt;
==== db_query ====&lt;br /&gt;
This is the standard database query method; calling it returns the peer review grades for a given assignment id.&lt;br /&gt;
We test this method in two respects: &lt;br /&gt;
1. Whether the returned grade is correct for the specified algorithm.&lt;br /&gt;
2. Whether the query itself is correct.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test db_query' do&lt;br /&gt;
    it 'return average score' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [1, 2, 3, 4, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 2)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 4)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      result = ReputationWebServiceController.new.db_query(1, 1, false)&lt;br /&gt;
      #expect to see a data array return generated by the score given.&lt;br /&gt;
      expect(result).to eq([[2, 1, 60.0]])&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
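The expected value 60.0 in the assertion above can be checked by hand: the five answers [1, 2, 3, 4, 5] sum to 15 out of a possible 25 points (5 questions at a max_question_score of 5). A minimal sketch of that arithmetic (the helper name `average_percentage` is ours, not a controller method):

```ruby
# Reproduce the 60.0 expected by the db_query test: sum the answer
# scores and scale against the rubric maximum (questions x max score).
def average_percentage(answers, max_question_score)
  total = answers.sum.to_f
  max_total = answers.length * max_question_score
  (total / max_total * 100).round(2)
end

puts average_percentage([1, 2, 3, 4, 5], 5) # => 60.0
```

Any other answer set can be checked against `db_query` the same way, as long as the rubric's maximum score is known.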
&lt;br /&gt;
==== json_generator ====&lt;br /&gt;
This method generates a hash representation of the reviews. We test it by calling the method, converting the result to JSON, and comparing it against the expected output.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test json_generator' do&lt;br /&gt;
    it 'test 3 reviewer for one reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_2's review for reviewee_1: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_3's review for reviewee_1: [1, 1, 1, 1, 1]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0, &amp;quot;stu3&amp;quot;=&amp;gt;60.0, &amp;quot;stu4&amp;quot;=&amp;gt;20.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    it 'test same reviewer for different reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_1's review for reviewee_2: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0}, &amp;quot;submission2&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;60.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
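The nested hashes asserted above follow one shape: submission label => { student label => average score }. That grouping can be sketched from flat rows as below; the row data comes from the first test case, while the variable names are illustrative and not part of json_generator itself:

```ruby
# Group flat (submission, student, score) rows into the nested hash
# shape that json_generator is expected to return.
rows = [
  ['submission1', 'stu2', 100.0],
  ['submission1', 'stu3', 60.0],
  ['submission1', 'stu4', 20.0]
]

result = rows.each_with_object({}) do |(submission, student, score), acc|
  (acc[submission] ||= {})[student] = score
end

puts result.inspect
# => {"submission1"=>{"stu2"=>100.0, "stu3"=>60.0, "stu4"=>20.0}}
```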
&lt;br /&gt;
==== send_post_request ==== &lt;br /&gt;
This method sends a POST request to peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms to calculate the reputation result, shows the result in the corresponding UI, and updates the given reviewer's reputation. We would test this method against the algorithms in the paper: first the resulting reputation value, then the updated value in the database.&lt;br /&gt;
&lt;br /&gt;
Note that this method is not functioning due to the missing public &amp;amp; private key files for RSA encryption. The algorithm involving &amp;quot;expert grade&amp;quot; was also not implemented. &lt;br /&gt;
Thus, this method could not be properly tested. However, we have provided a test template for future use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test send_post_request' do&lt;br /&gt;
    it 'failed because of no public key file' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # choose Hamer's algorithm without expert grade (instructor-given grade)&lt;br /&gt;
      params = {assignment_id: 1, round_num: 1, algorithm: 'hammer', checkbox: {expert_grade: &amp;quot;empty&amp;quot;}}&lt;br /&gt;
      session = {user: build(:instructor, id: 1)}&lt;br /&gt;
&lt;br /&gt;
      expect(true).to eq(true)&lt;br /&gt;
&lt;br /&gt;
      # comment out because send_post_request method request public key file while this file is missing&lt;br /&gt;
      # so at this time send_post_request is not functioning normally&lt;br /&gt;
      # if it functions correctly, it will update the reviewer's reputation score according to the selected reputation algorithm.&lt;br /&gt;
&lt;br /&gt;
      # get :send_post_request, params, session&lt;br /&gt;
      # expect(response).to redirect_to '/reputation_web_service/client'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
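Even without the key files, a future test could at least assert that the outgoing request is well-formed before it is sent. A sketch using Net::HTTP against the URL quoted in the description above; the JSON body here is an illustrative placeholder, not the controller's actual encrypted payload:

```ruby
require 'net/http'
require 'json'
require 'uri'

# Build (but do not send) the kind of POST request that
# send_post_request issues to the reputation web service, so its
# shape can be asserted even while the RSA key files are missing.
uri = URI('http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms')
request = Net::HTTP::Post.new(uri)
request['Content-Type'] = 'application/json'
request.body = { 'submission1' => { 'stu2' => 100.0 } }.to_json

puts request.method          # the request verb, "POST"
puts request['Content-Type'] # "application/json"
```

Because the request object is never sent, this can run in a spec without network access or the missing keys.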
&lt;br /&gt;
== Results ==&lt;br /&gt;
=== Testing Results ===&lt;br /&gt;
&lt;br /&gt;
All of our tests pass.&lt;br /&gt;
&lt;br /&gt;
There are a total of '''10''' methods in the &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; controller:&lt;br /&gt;
&lt;br /&gt;
# action_allowed?&lt;br /&gt;
# db_query&lt;br /&gt;
# db_query_with_quiz_score&lt;br /&gt;
# json_generator&lt;br /&gt;
# client&lt;br /&gt;
# send_post_request&lt;br /&gt;
# rsa_public_key1&lt;br /&gt;
# rsa_private_key2&lt;br /&gt;
# aes_encrypt&lt;br /&gt;
# aes_decrypt&lt;br /&gt;
&lt;br /&gt;
Our tests cover '''7''' of them: &lt;br /&gt;
&lt;br /&gt;
# action_allowed?&lt;br /&gt;
# db_query&lt;br /&gt;
# json_generator&lt;br /&gt;
# client&lt;br /&gt;
# send_post_request&lt;br /&gt;
# aes_encrypt&lt;br /&gt;
# aes_decrypt&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;rsa_public_key1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;rsa_private_key2&amp;lt;/code&amp;gt; methods require the public key file &lt;br /&gt;
'''''public1.pem''''' and the private key file '''''private1.pem''''' respectively. However, these two files are missing, so we could not test these two methods.&lt;br /&gt;
&lt;br /&gt;
=== Coverage ===&lt;br /&gt;
The test coverage increased from '''0%''' to '''52.63%'''.&lt;br /&gt;
&lt;br /&gt;
No tests existed for &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; prior to our work, so the previous testing coverage was '''0%'''.&lt;br /&gt;
&lt;br /&gt;
From the coverage report generated by the &amp;lt;code&amp;gt;Simplecov&amp;lt;/code&amp;gt; gem, our tests cover '''127''' lines. The remaining '''87''' lines are related to the missing public key file and to deprecated functions for gathering data for a paper [https://doi.org/10.1109/FIE.2015.7344292] published in 2015.&lt;br /&gt;
&lt;br /&gt;
The method &amp;lt;code&amp;gt;send_post_request&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; should be removed. The team in Fall 2020 [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2064._Refactor_reputation_web_service_controller.rb#Issues_to_be_fixed] also mentioned these issues.&lt;br /&gt;
&lt;br /&gt;
If the redundant method &amp;lt;code&amp;gt;send_post_request&amp;lt;/code&amp;gt; in the controller &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; is commented out, the test coverage can reach '''87.22%'''.&lt;br /&gt;
&lt;br /&gt;
=== Relevant Links ===&lt;br /&gt;
Github Repo : [https://github.com/HenryChen34/expertiza/tree/beta]&lt;br /&gt;
&lt;br /&gt;
Pull request: [https://github.com/expertiza/expertiza/pull/2128]&lt;br /&gt;
&lt;br /&gt;
== Future Tasks ==&lt;br /&gt;
As our team could not obtain the public/private key pair needed to access the reputation web service, we were only able to get to the step prior to sending the JSON to the reputation web service. Therefore, the following steps remain to fully test the reputation system. &lt;br /&gt;
&lt;br /&gt;
# The current setup is based on assignment_1, which uses Lauw's algorithm to calculate the reputation score. Future testing teams need to set up a similar object set for assignment_2, which uses Hamer's algorithm. &lt;br /&gt;
# In send_post_request, there are references to specific assignments, such as 724, 735, and 756. They were put in to gather data for a paper published in 2015. They are no longer relevant and should be removed.&lt;br /&gt;
# The current test cases cover only round 1 reputation scoring, even though assignment_1 and assignment_2 are both set up as two-round review assignments. Since the reputation web service is inaccessible, creating round 2 objects is pointless without round 1 reputation scores. Future test cases should therefore stub the behavior of fulfilling assignment_questionnaire_1_2 and assignment_questionnaire_2_2 (the second-round questionnaires), assuming the reputation web service is available by then.  &lt;br /&gt;
# The db_query method violates the DRY principle, as it repeatedly calculates the score sum for the assignment. Such sum calculation should be handled in assignment.rb.&lt;br /&gt;
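The DRY fix in the last item could look roughly like this; the `Assignment` below is a plain stand-in (the real model is ActiveRecord) and `total_score` is a hypothetical method name, shown only to illustrate where the repeated sum would live:

```ruby
# Sketch of moving the sum calculation that db_query repeats inline
# into the Assignment model, as the last future task suggests.
# This Assignment is a stand-in class, not the ActiveRecord model.
class Assignment
  def initialize(review_scores)
    @review_scores = review_scores
  end

  # db_query would call this instead of recomputing the sum itself.
  def total_score
    @review_scores.sum
  end
end

assignment = Assignment.new([60.0, 80.0, 100.0])
puts assignment.total_score # => 240.0
```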
&lt;br /&gt;
== Collaborators ==&lt;br /&gt;
Jinku Cui (jcui23)&lt;br /&gt;
&lt;br /&gt;
Henry Chen (hchen34)&lt;br /&gt;
&lt;br /&gt;
Dong Li (dli35)&lt;br /&gt;
&lt;br /&gt;
Zijun Lu (zlu5)&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://doi.org/10.1109/FIE.2015.7344292 Pluggable reputation systems for peer review: A web-service approach]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142400</id>
		<title>CSC/ECE 517 Fall 2021 - E2168. Testing - Reputations</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142400"/>
		<updated>2021-11-30T09:24:16Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* Testing Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;q&amp;gt;Online peer-review systems are now in common use in higher education. They free the instructor and course staff from having to provide personally all the feedback that students receive on their work. However, if we want to assure that all students receive competent feedback, or even use peer-assigned grades, we need a way to judge which peer reviewers are most credible. The solution is to use a reputation system.&amp;lt;/q&amp;gt; &amp;lt;br&amp;gt; The reputation system is meant to provide objective value to student assigned peer review scores. Students select from a list of tasks to be performed and then prepare their work and submit it to a peer-review system. The work is then reviewed by other students who offer comments/graded feedback to help the submitters improve their work.&lt;br /&gt;
During the peer review period it is important to determine which reviews are more accurate and of higher quality. Reputation is one way to achieve this goal; it is a quantitative measure of which peer reviewers are more reliable.&lt;br /&gt;
Peer reviewers can use Expertiza to score an author. If Expertiza shows confidence ratings for grades based upon the reviewers' reputations, then authors can more easily determine the legitimacy of the peer-assigned score. In addition, the teaching staff can examine the quality of each peer review based on reputation values and, potentially, crowd-source a significant portion of the grading function.&lt;br /&gt;
Currently the reputation system is implemented in Expertiza as a web service, but no tests are written for it. Thus our goal is to set up assignments and reviews that would produce specific reputation scores, and to test that the correct reputations are in fact being produced.&lt;br /&gt;
&lt;br /&gt;
=== System Design ===&lt;br /&gt;
The diagram below is referenced from project E1625 and gives an overall description of the reputation system.&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Diagram E1625.png|1000px]]&amp;lt;br&amp;gt;&lt;br /&gt;
There are two algorithms intended for use in calculation of the reputation values for participants. &lt;br /&gt;
&lt;br /&gt;
There is a [https://expertiza.ncsu.edu/reputation_web_service/client web-service] (the link is accessible only to instructors) available which serves a JSON response containing the reputation value, based on a seed provided in the form of the last known reputation value, which we store in the ''participants'' table. An instructor can specify which algorithm to use for a particular assignment to calculate the confidence rating. &lt;br /&gt;
&lt;br /&gt;
As the [https://docs.google.com/viewer?url=https%3A%2F%2Fwww.researchgate.net%2Fprofile%2FYang_Song36%2Fpublication%2F289528736_Pluggable_Reputation_Systems_for_Peer_Review_a_Web-Service_Approach%2Flinks%2F568ec8d008ae78cc05160aed.pdf%3FinViewer%3D0%26pdfJsDownload%3D0%26origin%3Dpublication_detail paper] on reputation systems observes, “the Hamer-peer algorithm has the lowest maximum absolute bias and the Lauw-peer algorithm has the lowest overall bias. This indicates, from the instructor’s perspective, if there are further assignments of this kind, expert grading may not be necessary.”&lt;br /&gt;
:;Reputation range of Hamer’s algorithm:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.5 and &amp;lt;= 1&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 1 and &amp;lt;= 1.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 1.5 and &amp;lt;= 2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The main difference between the Hamer-peer and the Lauw-peer algorithm is that the Lauw-peer algorithm keeps track of the reviewer's leniency (“bias”), which can be either positive or negative. A positive leniency indicates the reviewer tends to give higher scores than average.  This project determines reputation by subtracting the absolute value of the leniency from 1. Additionally, the range for Hamer’s algorithm is (0,∞) while for Lauw’s algorithm it is [0,1]. &lt;br /&gt;
:;Reputation range of Lauw’s algorithm:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.2 and &amp;lt;= 0.4&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 0.4 and &amp;lt;= 0.6&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 0.6 and &amp;lt;= 0.8&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 0.8&lt;br /&gt;
&lt;br /&gt;
The instructor can choose to show results from Hamer’s algorithm or Lauw’s algorithm. The default algorithm should be Lauw’s algorithm.&lt;br /&gt;
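The two range tables above translate directly into threshold checks. A small sketch (the function names are ours; the thresholds are copied from the lists, and `lauw_reputation` encodes the "one minus absolute leniency" rule described earlier):

```ruby
# Map a Hamer reputation value to its display color, per the table above.
def hamer_color(value)
  if    value < 0.5  then 'red'
  elsif value <= 1.0 then 'yellow'
  elsif value <= 1.5 then 'orange'
  elsif value <= 2.0 then 'light green'
  else                    'green'
  end
end

# Map a Lauw reputation value to its display color, per the table above.
def lauw_color(value)
  if    value < 0.2  then 'red'
  elsif value <= 0.4 then 'yellow'
  elsif value <= 0.6 then 'orange'
  elsif value <= 0.8 then 'light green'
  else                    'green'
  end
end

# Lauw reputation is one minus the absolute value of the leniency.
def lauw_reputation(leniency)
  1 - leniency.abs
end

puts hamer_color(1.2)                  # => "orange"
puts lauw_color(lauw_reputation(-0.3)) # 0.7 => "light green"
```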
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
* Double and stub an assignment, a few submissions to the assignment, under different review rubrics&lt;br /&gt;
* Manually calculate reputation scores based on paper &amp;quot;Pluggable reputation systems for peer review: A web-service approach&amp;quot;&lt;br /&gt;
* Validate that the reputation scores generated by the reputation web service under different review rubrics match a manual computation using Hamer's and Lauw's algorithms.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
*reputation_web_service_controller_spec&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Setup Testing Objects ===&lt;br /&gt;
To test the reputation system, we must first create sample reviews from which reputation scores can be obtained. During the kickoff meeting, our team defined four necessary setup steps; the corresponding objects are created and configured as discussed below.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Each of these objects is required by the reputation tests that follow. Some fields can be left empty or take default values, and some attributes are irrelevant to the test; the test scripts only need to set fixed values for the fields listed below.&lt;br /&gt;
&lt;br /&gt;
==== Assignment ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[assignments]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
submitter_count = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviewers = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews_allowed = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
rounds_of_reviews = 2;&amp;lt;br&amp;gt;&lt;br /&gt;
reputation_algorithm = lauw/hamer;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: Two assignment objects were created. Assignment_1 used Lauw's algorithm, whereas assignment_2 used Hamer's algorithm.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @assignment_1 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'lauw', id: 1)&lt;br /&gt;
  @assignment_2 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'hamer', id: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questionnaires(Rubrics) ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[assignment_questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
min_question_score = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
max_question_score = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
type = ReviewQuestionnaire;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: We will define the assignment with a ReviewQuestionnaire-type rubric. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @questionnaire_1 = create(:questionnaire, min_question_score: 0, max_question_score: 5, type: 'ReviewQuestionnaire', id: 1)&lt;br /&gt;
  # assignment_questionnaire_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means assignment #i's #j-th round of review.&lt;br /&gt;
  @assignment_questionnaire_1_1 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_1_2 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
  @assignment_questionnaire_2_1 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_2_2 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questions under Questionnaires ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questions]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # question_i_j means question #j in questionnaire #i.&lt;br /&gt;
  @question_1_1 = create(:question, questionnaire_id: @questionnaire_1.id, id: 1)&lt;br /&gt;
  @question_1_2 = create(:question, questionnaire_id: @questionnaire_1.id, id: 2)&lt;br /&gt;
  @question_1_3 = create(:question, questionnaire_id: @questionnaire_1.id, id: 3)&lt;br /&gt;
  @question_1_4 = create(:question, questionnaire_id: @questionnaire_1.id, id: 4)&lt;br /&gt;
  @question_1_5 = create(:question, questionnaire_id: @questionnaire_1.id, id: 5)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Reviewers and Reviewees ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[participants]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[teams]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Reviewers (Participant):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewer_1 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_2 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_3 = create(:participant, can_review: 1)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Reviewees (Teams):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewee_1 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_2 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_3 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Responses ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[response_maps]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[responses]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
reviewed_object_id = &amp;lt;target_assignment&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewer_id = &amp;lt;target_reviewer&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewee_id = &amp;lt;target_reviewee&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
Note: The response map is set up to record the relationship between the reviewer and the reviewee of an assignment.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Response_maps:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_map_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_map_1_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_2_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_3_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Responses:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_1_1 = create(:response, is_submitted: true, map_id: @response_map_1_1.id)&lt;br /&gt;
  @response_1_2 = create(:response, is_submitted: true, map_id: @response_map_1_2.id)&lt;br /&gt;
  @response_1_3 = create(:response, is_submitted: true, map_id: @response_map_1_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_2_1 = create(:response, is_submitted: true, map_id: @response_map_2_1.id)&lt;br /&gt;
  @response_2_2 = create(:response, is_submitted: true, map_id: @response_map_2_2.id)&lt;br /&gt;
  @response_2_3 = create(:response, is_submitted: true, map_id: @response_map_2_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_3_1 = create(:response, is_submitted: true, map_id: @response_map_3_1.id)&lt;br /&gt;
  @response_3_2 = create(:response, is_submitted: true, map_id: @response_map_3_2.id)&lt;br /&gt;
  @response_3_3 = create(:response, is_submitted: true, map_id: @response_map_3_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Relevant Methods ===&lt;br /&gt;
==== db_query ====&lt;br /&gt;
This is the standard database-query method; calling it returns the peer-review grades for a given assignment id.&lt;br /&gt;
We will test this method in two respects: &lt;br /&gt;
1. whether the returned grade is correct according to the specified algorithm;&lt;br /&gt;
2. whether the query itself is constructed correctly.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test db_query' do&lt;br /&gt;
    it 'return average score' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [1, 2, 3, 4, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 2)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 4)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      result = ReputationWebServiceController.new.db_query(1, 1, false)&lt;br /&gt;
      # expect to see a data array generated from the given scores&lt;br /&gt;
      expect(result).to eq([[2, 1, 60.0]])&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
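The expected value of '''60.0''' can be checked by hand: the five answers [1, 2, 3, 4, 5] sum to 15 out of a maximum of 5 × 5 = 25 points, i.e. 60% of the rubric. A minimal, self-contained sketch of this grade computation (plain Ruby; the method name and signature are illustrative, not Expertiza's actual API):&lt;br /&gt;

```ruby
# Sketch of the peer-review grade computation the db_query test assumes:
# grade = (sum of answer scores / maximum possible score) * 100.
# The method name and signature are illustrative, not Expertiza's API.
def review_grade(answers, max_question_score)
  total = answers.sum
  maximum = answers.length * max_question_score
  total.to_f / maximum * 100
end

puts review_grade([1, 2, 3, 4, 5], 5) # 60.0
```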
&lt;br /&gt;
==== json_generator ====&lt;br /&gt;
This method generates the reviews in hash format. We test it by calling the method, converting the result to JSON, and comparing the output against the expected values.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test json_generator' do&lt;br /&gt;
    it 'test 3 reviewer for one reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_2's review for reviewee_1: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_3's review for reviewee_1: [1, 1, 1, 1, 1]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0, &amp;quot;stu3&amp;quot;=&amp;gt;60.0, &amp;quot;stu4&amp;quot;=&amp;gt;20.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    it 'test same reviewer for different reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_1's review for reviewee_2: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0}, &amp;quot;submission2&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;60.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
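The expected hashes above follow directly from the same percentage rule: each submission maps to a hash from reviewer to grade, so [5, 5, 5, 5, 5] yields 100.0, [3, 3, 3, 3, 3] yields 60.0, and [1, 1, 1, 1, 1] yields 20.0. A self-contained sketch of building such a hash (names and input shape are illustrative, not json_generator's real internals):&lt;br /&gt;

```ruby
# Build the submission => { reviewer => grade } hash the json_generator
# tests expect. Names and input shape are illustrative only.
def grades_hash(reviews, max_question_score)
  reviews.each_with_object({}) do |(submission, by_reviewer), out|
    out[submission] = by_reviewer.transform_values do |answers|
      answers.sum.to_f / (answers.length * max_question_score) * 100
    end
  end
end

reviews = { 'submission1' => { 'stu2' => [5, 5, 5, 5, 5],
                               'stu3' => [3, 3, 3, 3, 3],
                               'stu4' => [1, 1, 1, 1, 1] } }
p grades_hash(reviews, 5)
# {"submission1"=>{"stu2"=>100.0, "stu3"=>60.0, "stu4"=>20.0}}
```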
&lt;br /&gt;
==== send_post_request ==== &lt;br /&gt;
This method sends a POST request to peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms to calculate the reputation result, shows the result in the corresponding UI, and updates the given reviewer's reputation. We will test this method against the algorithm in the paper: first the resulting reputation value, then the updated value in the database.&lt;br /&gt;
&lt;br /&gt;
Note that this method is not functioning due to the missing public &amp;amp; private key files for RSA encryption. The algorithm involving an &amp;quot;expert grade&amp;quot; was also not implemented. &lt;br /&gt;
Thus, this method could not be properly tested. However, we have provided a template for creating these tests in the future.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test send_post_request' do&lt;br /&gt;
    it 'failed because of no public key file' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # choose the hamer algorithm without an expert grade (instructor's given grade)&lt;br /&gt;
      params = {assignment_id: 1, round_num: 1, algorithm: 'hammer', checkbox: {expert_grade: &amp;quot;empty&amp;quot;}}&lt;br /&gt;
      session = {user: build(:instructor, id: 1)}&lt;br /&gt;
&lt;br /&gt;
      expect(true).to eq(true)&lt;br /&gt;
&lt;br /&gt;
      # commented out because the send_post_request method requires a public key file that is missing,&lt;br /&gt;
      # so send_post_request is not currently functioning&lt;br /&gt;
      # if it functions correctly, it will update the reviewer's reputation score according to the selected reputation algorithm.&lt;br /&gt;
&lt;br /&gt;
      # get :send_post_request, params, session&lt;br /&gt;
      # expect(response).to redirect_to '/reputation_web_service/client'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
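For future reference, the shape of such a request can be prototyped without the key files by building (not sending) a JSON POST with Ruby's standard net/http library. The endpoint comes from the text above; the URL scheme and the payload shape are assumptions for illustration:&lt;br /&gt;

```ruby
require 'net/http'
require 'json'
require 'uri'

# Build, but do not send, the kind of POST request send_post_request issues.
# The https scheme and the payload shape are assumptions for illustration.
uri = URI('https://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms')
req = Net::HTTP::Post.new(uri)
req['Content-Type'] = 'application/json'
req.body = { 'submission1' => { 'stu2' => 100.0 } }.to_json

puts req.method # "POST"
```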
&lt;br /&gt;
== Results ==&lt;br /&gt;
=== Testing Results ===&lt;br /&gt;
&lt;br /&gt;
All of our tests pass.&lt;br /&gt;
&lt;br /&gt;
There are '''10''' methods in total in the &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; controller:&lt;br /&gt;
&lt;br /&gt;
# action_allowed?&lt;br /&gt;
# db_query&lt;br /&gt;
# db_query_with_quiz_score&lt;br /&gt;
# json_generator&lt;br /&gt;
# client&lt;br /&gt;
# send_post_request&lt;br /&gt;
# rsa_public_key1&lt;br /&gt;
# rsa_private_key2&lt;br /&gt;
# aes_encrypt&lt;br /&gt;
# aes_decrypt&lt;br /&gt;
&lt;br /&gt;
Our tests cover '''7''' of them: &lt;br /&gt;
&lt;br /&gt;
# action_allowed?&lt;br /&gt;
# db_query&lt;br /&gt;
# json_generator&lt;br /&gt;
# client&lt;br /&gt;
# send_post_request&lt;br /&gt;
# aes_encrypt&lt;br /&gt;
# aes_decrypt&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;rsa_public_key1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;rsa_private_key2&amp;lt;/code&amp;gt; methods require the public key file&lt;br /&gt;
'''''public1.pem''''' and the private key file '''''private1.pem'''''. However, these two files are missing, so we could not test those two methods.&lt;br /&gt;
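By contrast, the &amp;lt;code&amp;gt;aes_encrypt&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;aes_decrypt&amp;lt;/code&amp;gt; methods could be exercised, since a symmetric round trip needs no key files. A minimal round-trip sketch using Ruby's standard openssl library (the cipher mode and key handling here are illustrative and not necessarily the controller's choices):&lt;br /&gt;

```ruby
require 'openssl'

# AES-256-CBC round trip: encrypt a payload, then decrypt it with the
# same key and IV. Cipher choice is illustrative, not the controller's.
cipher = OpenSSL::Cipher.new('aes-256-cbc')
cipher.encrypt
key = cipher.random_key
iv  = cipher.random_iv
ciphertext = cipher.update('reputation payload') + cipher.final

decipher = OpenSSL::Cipher.new('aes-256-cbc')
decipher.decrypt
decipher.key = key
decipher.iv  = iv
plaintext = decipher.update(ciphertext) + decipher.final

puts plaintext # "reputation payload"
```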
&lt;br /&gt;
=== Coverage ===&lt;br /&gt;
The test coverage increased from '''0%''' to '''52.77%'''.&lt;br /&gt;
&lt;br /&gt;
No tests existed for &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; prior to our work, so the previous test coverage was '''0%'''.&lt;br /&gt;
&lt;br /&gt;
From the coverage report generated by the &amp;lt;code&amp;gt;SimpleCov&amp;lt;/code&amp;gt; gem, '''127''' lines are covered by our tests. The remaining '''87''' lines are related to the missing public key file and to deprecated functions for gathering data for a paper [https://doi.org/10.1109/FIE.2015.7344292] published in 2015.&lt;br /&gt;
&lt;br /&gt;
The method &amp;lt;code&amp;gt;send_post_request&amp;lt;/code&amp;gt; inside &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; should be removed. The Fall 2020 team [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2064._Refactor_reputation_web_service_controller.rb#Issues_to_be_fixed] also mentioned this issue.&lt;br /&gt;
&lt;br /&gt;
If the redundant method &amp;lt;code&amp;gt;send_post_request&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; is commented out, the test coverage rises to '''87.22%'''.&lt;br /&gt;
&lt;br /&gt;
=== Relevant Links ===&lt;br /&gt;
Github Repo : [https://github.com/HenryChen34/expertiza/tree/beta]&lt;br /&gt;
&lt;br /&gt;
Pull request: [https://github.com/expertiza/expertiza/pull/2128]&lt;br /&gt;
&lt;br /&gt;
== Future Tasks ==&lt;br /&gt;
As our team could not obtain the public/private key pair needed to access the reputation web service, we were only able to reach the step just before sending the JSON to the reputation-algorithm web service. Therefore, future work is required to test the reputation system. &lt;br /&gt;
&lt;br /&gt;
# The current setup is based on assignment_1, which uses Lauw's algorithm to calculate the reputation score. A future testing team needs to set up a similar object set for assignment_2, which uses Hamer's algorithm. &lt;br /&gt;
# In send_post_request, there are references to specific assignments, such as 724, 735, and 756. They were added to gather data for a paper published in 2015; they are no longer relevant and should be removed.&lt;br /&gt;
# The current test cases cover only round-1 reputation scoring, even though assignment_1 and assignment_2 are configured for two rounds of review. Because the reputation web service is inaccessible, creating round-2 objects is pointless in the absence of round-1 reputation scores. Future test cases should therefore stub the behavior of filling out assignment_questionnaire_1_2 and assignment_questionnaire_2_2 (the second-round questionnaires), assuming the reputation web service is available by then.&lt;br /&gt;
# db_query violates the DRY principle, as it repeatedly calculates the score sum for an assignment. This calculation should be handled in assignment.rb.&lt;br /&gt;
&lt;br /&gt;
== Collaborators ==&lt;br /&gt;
Jinku Cui (jcui23)&lt;br /&gt;
&lt;br /&gt;
Henry Chen (hchen34)&lt;br /&gt;
&lt;br /&gt;
Dong Li (dli35)&lt;br /&gt;
&lt;br /&gt;
Zijun Lu (zlu5)&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://doi.org/10.1109/FIE.2015.7344292 Pluggable reputation systems for peer review: A web-service approach]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142399</id>
		<title>CSC/ECE 517 Fall 2021 - E2168. Testing - Reputations</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142399"/>
		<updated>2021-11-30T09:15:38Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* Testing Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;q&amp;gt;Online peer-review systems are now in common use in higher education. They free the instructor and course staff from having to provide personally all the feedback that students receive on their work. However, if we want to assure that all students receive competent feedback, or even use peer-assigned grades, we need a way to judge which peer reviewers are most credible. The solution is to use a reputation system.&amp;lt;/q&amp;gt; &amp;lt;br&amp;gt; The reputation system is meant to provide objective value to student assigned peer review scores. Students select from a list of tasks to be performed and then prepare their work and submit it to a peer-review system. The work is then reviewed by other students who offer comments/graded feedback to help the submitters improve their work.&lt;br /&gt;
During the peer review period it is important to determine which reviews are more accurate and of higher quality. Reputation is one way to achieve this goal; it is a quantitative measure of which peer reviewers are more reliable.&lt;br /&gt;
Peer reviewers can use Expertiza to score an author. If Expertiza shows a confidence rating for grades based upon the reviewer's reputation, then authors can more easily determine the legitimacy of the peer-assigned score. In addition, the teaching staff can examine the quality of each peer review based on reputation values and, potentially, crowd-source a significant portion of the grading function.&lt;br /&gt;
Currently the reputation system is implemented in Expertiza through a web service, but no tests have been written for it. Thus our goal is to set up assignments and reviews that should produce specific reputation scores, and to verify that the correct reputations are in fact produced.&lt;br /&gt;
&lt;br /&gt;
=== System Design ===&lt;br /&gt;
The diagram below is referenced from project E1625 and gives an overall description of the reputation system.&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Diagram E1625.png|1000px]]&amp;lt;br&amp;gt;&lt;br /&gt;
Two algorithms are intended for use in calculating the reputation values of participants. &lt;br /&gt;
&lt;br /&gt;
There is a [https://expertiza.ncsu.edu/reputation_web_service/client web-service] (the link accessible only to the instructors) available which serves a JSON response containing the reputation value based on the seed provided in the form of the last known reputation value which we store in the ''participants'' table. An instructor can specify which algorithm to use for a particular assignment to calculate the confidence rating. &lt;br /&gt;
&lt;br /&gt;
As the [https://docs.google.com/viewer?url=https%3A%2F%2Fwww.researchgate.net%2Fprofile%2FYang_Song36%2Fpublication%2F289528736_Pluggable_Reputation_Systems_for_Peer_Review_a_Web-Service_Approach%2Flinks%2F568ec8d008ae78cc05160aed.pdf%3FinViewer%3D0%26pdfJsDownload%3D0%26origin%3Dpublication_detail paper] on reputation systems observes, “the Hamer-peer algorithm has the lowest maximum absolute bias and the Lauw-peer algorithm has the lowest overall bias. This indicates, from the instructor’s perspective, that if there are further assignments of this kind, expert grading may not be necessary.”&lt;br /&gt;
:;The reputation ranges for Hamer’s algorithm are:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.5 and &amp;lt;= 1&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 1 and &amp;lt;= 1.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 1.5 and &amp;lt;= 2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The main difference between the Hamer-peer and the Lauw-peer algorithm is that the Lauw-peer algorithm keeps track of the reviewer's leniency (“bias”), which can be either positive or negative. A positive leniency indicates the reviewer tends to give higher scores than average.  This project determines reputation by subtracting the absolute value of the leniency from 1. Additionally, the range for Hamer’s algorithm is (0,∞) while for Lauw’s algorithm it is [0,1]. &lt;br /&gt;
:;The reputation ranges for Lauw’s algorithm are:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.2 and &amp;lt;= 0.4&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 0.4 and &amp;lt;= 0.6&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 0.6 and &amp;lt;= 0.8&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 0.8&lt;br /&gt;
&lt;br /&gt;
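The leniency-based rule and the color bands above can be sketched in a few lines (plain Ruby; the band boundaries are taken directly from the list above, and the method names are illustrative):&lt;br /&gt;

```ruby
# Lauw reputation = 1 - |leniency|; the value is then bucketed into the
# color bands listed above. Method names are illustrative.
def lauw_reputation(leniency)
  1 - leniency.abs
end

def lauw_color(value)
  if    value < 0.2  then 'red'
  elsif value <= 0.4 then 'yellow'
  elsif value <= 0.6 then 'orange'
  elsif value <= 0.8 then 'light green'
  else                    'green'
  end
end

puts lauw_color(lauw_reputation(0.25)) # reputation 0.75 -> "light green"
```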
The instructor can choose to show results from Hamer’s algorithm or Lauw’s algorithm. The default algorithm should be Lauw’s algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
* Double and stub an assignment and a few submissions to it, under different review rubrics&lt;br /&gt;
* Manually calculate reputation scores based on the paper &amp;quot;Pluggable reputation systems for peer review: A web-service approach&amp;quot;&lt;br /&gt;
* Validate the reputation scores generated by the reputation system against manual computation for the different review rubrics, using both Hamer's and Lauw's algorithms.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
*reputation_web_service_controller_spec&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Setup Testing Objects ===&lt;br /&gt;
To implement tests on reputation, it is crucial to create sample reviews from which reputation scores can be obtained. During the kickoff meeting, our team defined four necessary steps to follow for testing; the corresponding objects to be created are discussed below.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Creating each of these objects properly is essential to the success of the subsequent tests on reputations. Some fields in each object can be empty or take default values, and some attributes are not relevant to the tests. When implementing the tests, the test scripts need to generate or set fixed values for the relevant fields.&lt;br /&gt;
&lt;br /&gt;
==== Assignment ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[assignments]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
submitter_count = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviewers = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews_allowed = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
rounds_of_reviews = 2;&amp;lt;br&amp;gt;&lt;br /&gt;
reputation_algorithm = lauw/hamer;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: Two assignment objects were created. Assignment_1 uses Lauw's algorithm, whereas assignment_2 uses Hamer's algorithm.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @assignment_1 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'lauw', id: 1)&lt;br /&gt;
  @assignment_2 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'hamer', id: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questionnaires(Rubrics) ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[assignment_questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
min_question_score = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
max_question_score = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
type = ReviewQuestionnaire;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: We will define the assignment with a ReviewQuestionnaire-type rubric. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @questionnaire_1 = create(:questionnaire, min_question_score: 0, max_question_score: 5, type: 'ReviewQuestionnaire', id: 1)&lt;br /&gt;
  # assignment_questionnaire_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means assignment #i's #j-th round of review.&lt;br /&gt;
  @assignment_questionnaire_1_1 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_1_2 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
  @assignment_questionnaire_2_1 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_2_2 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questions under Questionnaires ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questions]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # question_i_j means question #j in questionnaire #i.&lt;br /&gt;
  @question_1_1 = create(:question, questionnaire_id: @questionnaire_1.id, id: 1)&lt;br /&gt;
  @question_1_2 = create(:question, questionnaire_id: @questionnaire_1.id, id: 2)&lt;br /&gt;
  @question_1_3 = create(:question, questionnaire_id: @questionnaire_1.id, id: 3)&lt;br /&gt;
  @question_1_4 = create(:question, questionnaire_id: @questionnaire_1.id, id: 4)&lt;br /&gt;
  @question_1_5 = create(:question, questionnaire_id: @questionnaire_1.id, id: 5)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Reviewers and Reviewees ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[participants]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[teams]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Reviewers (Participant):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewer_1 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_2 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_3 = create(:participant, can_review: 1)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Reviewees (Teams):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewee_1 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_2 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_3 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Responses ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[response_maps]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[responses]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
reviewed_object_id = &amp;lt;target_assignment&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewer_id = &amp;lt;target_reviewer&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewee_id = &amp;lt;target_reviewee&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
Note: The response map is set up to record the relationship between the reviewer and the reviewee of an assignment.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Response_maps:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_map_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_map_1_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_2_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_3_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Responses:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_1_1 = create(:response, is_submitted: true, map_id: @response_map_1_1.id)&lt;br /&gt;
  @response_1_2 = create(:response, is_submitted: true, map_id: @response_map_1_2.id)&lt;br /&gt;
  @response_1_3 = create(:response, is_submitted: true, map_id: @response_map_1_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_2_1 = create(:response, is_submitted: true, map_id: @response_map_2_1.id)&lt;br /&gt;
  @response_2_2 = create(:response, is_submitted: true, map_id: @response_map_2_2.id)&lt;br /&gt;
  @response_2_3 = create(:response, is_submitted: true, map_id: @response_map_2_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_3_1 = create(:response, is_submitted: true, map_id: @response_map_3_1.id)&lt;br /&gt;
  @response_3_2 = create(:response, is_submitted: true, map_id: @response_map_3_2.id)&lt;br /&gt;
  @response_3_3 = create(:response, is_submitted: true, map_id: @response_map_3_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Relevant Methods ===&lt;br /&gt;
==== db_query ====&lt;br /&gt;
This is the standard database query method; calling it returns peer-review grades for a given assignment id.&lt;br /&gt;
We test this method in two respects: &lt;br /&gt;
1. Whether the returned grades are correct according to the specified algorithm.&lt;br /&gt;
2. Whether the query itself is correct.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test db_query' do&lt;br /&gt;
    it 'return average score' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [1, 2, 3, 4, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 2)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 4)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      result = ReputationWebServiceController.new.db_query(1, 1, false)&lt;br /&gt;
      #expect to see a data array return generated by the score given.&lt;br /&gt;
      expect(result).to eq([[2, 1, 60.0]])&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
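For reference, the expected 60.0 in the assertion above can be checked by hand: the five answers [1, 2, 3, 4, 5] sum to 15 out of a maximum of 25 (five questions with max_question_score 5), i.e. 60%. A minimal standalone Ruby sketch of this arithmetic (the helper name is ours, not the controller's actual code):&lt;br /&gt;

```ruby
# Compute a peer-review grade as a percentage of the maximum possible score.
# Illustrative standalone helper mirroring the arithmetic behind the expected
# [[2, 1, 60.0]] row above; not the controller's actual implementation.
def peer_review_grade(answers, max_question_score)
  return 0.0 if answers.empty?
  total = answers.sum.to_f
  max_total = answers.size * max_question_score
  (total / max_total * 100).round(2)
end

puts peer_review_grade([1, 2, 3, 4, 5], 5)  # => 60.0
```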
&lt;br /&gt;
==== json_generator ====&lt;br /&gt;
This method generates a hash representation of the reviews. We test it by calling the method, converting the result to JSON, and printing it to verify its correctness.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test json_generator' do&lt;br /&gt;
    it 'test 3 reviewers for one reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_2's review for reviewee_1: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_3's review for reviewee_1: [1, 1, 1, 1, 1]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0, &amp;quot;stu3&amp;quot;=&amp;gt;60.0, &amp;quot;stu4&amp;quot;=&amp;gt;20.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    it 'test same reviewer for different reviewees' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_1's review for reviewee_2: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0}, &amp;quot;submission2&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;60.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
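The nested hash asserted above groups per-submission reviewer scores. The following standalone sketch shows how such a hash can be built from flat (reviewee, reviewer, score) rows; the helper name and row format are illustrative assumptions, not the actual json_generator implementation:&lt;br /&gt;

```ruby
# Build the nested {"submissionN" => {"stuM" => score}} shape asserted above
# from flat rows of [reviewee_number, reviewer_number, score].
# Illustrative helper only; not Expertiza's actual json_generator.
def build_review_hash(rows)
  rows.each_with_object({}) do |(reviewee, reviewer, score), hash|
    submission = "submission#{reviewee}"
    hash[submission] ||= {}
    hash[submission]["stu#{reviewer}"] = score
  end
end

build_review_hash([[1, 2, 100.0], [1, 3, 60.0], [1, 4, 20.0]])
# => {"submission1"=>{"stu2"=>100.0, "stu3"=>60.0, "stu4"=>20.0}}
```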
&lt;br /&gt;
==== send_post_request ==== &lt;br /&gt;
This method sends a POST request to peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms to calculate the reputation result, shows the result in the corresponding UI, and updates the given reviewer's reputation. We test this method against the algorithm in the paper: first the resulting reputation value, then the updated value in the database.&lt;br /&gt;
&lt;br /&gt;
Note that this method is not functioning due to the missing public &amp;amp; private key files for RSA encryption. The algorithm involving &amp;quot;expert grade&amp;quot; was also not implemented. &lt;br /&gt;
Thus, this method could not be properly tested. However, we have provided a template for creating tests in the future.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test send_post_request' do&lt;br /&gt;
    it 'failed because of no public key file' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # choose the Hamer algorithm without an expert grade (instructor's given grade)&lt;br /&gt;
      params = {assignment_id: 1, round_num: 1, algorithm: 'hammer', checkbox: {expert_grade: &amp;quot;empty&amp;quot;}}&lt;br /&gt;
      session = {user: build(:instructor, id: 1)}&lt;br /&gt;
&lt;br /&gt;
      expect(true).to eq(true)&lt;br /&gt;
&lt;br /&gt;
      # comment out because send_post_request method request public key file while this file is missing&lt;br /&gt;
      # so at this time send_post_request is not functioning normally&lt;br /&gt;
      # if it functions correctly, it will update the reviewer's reputation score according to the selected reputation algorithm.&lt;br /&gt;
&lt;br /&gt;
      # get :send_post_request, params, session&lt;br /&gt;
      # expect(response).to redirect_to '/reputation_web_service/client'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
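Since the endpoint and key files are unavailable, one way a future test could exercise the posting logic without touching the network is to inject a fake HTTP client and assert on the recorded request. A standalone sketch of that pattern; every name here is hypothetical, not the controller's actual API:&lt;br /&gt;

```ruby
# A recording fake that stands in for an HTTP client, so posting logic can be
# exercised without the network or the missing RSA key files.
# All names here are hypothetical illustrations, not Expertiza's API.
class FakeHttpClient
  attr_reader :last_url, :last_body

  def post(url, body)
    @last_url = url
    @last_body = body
    :ok # pretend the service accepted the request
  end
end

# Hypothetical posting routine that accepts an injected client.
def post_reputation_request(client, payload)
  client.post('https://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms', payload)
end

client = FakeHttpClient.new
post_reputation_request(client, { assignment_id: 1, algorithm: 'hamer' })
```

A test can then assert on `client.last_url` and `client.last_body` instead of a live response.&lt;br /&gt;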
&lt;br /&gt;
== Results ==&lt;br /&gt;
=== Testing Results ===&lt;br /&gt;
&lt;br /&gt;
All of our tests pass.&lt;br /&gt;
&lt;br /&gt;
There are '''10''' methods in total in the &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; controller:&lt;br /&gt;
&lt;br /&gt;
# action_allowed?&lt;br /&gt;
# db_query&lt;br /&gt;
# db_query_with_quiz_score&lt;br /&gt;
# json_generator&lt;br /&gt;
# client&lt;br /&gt;
# send_post_request&lt;br /&gt;
# rsa_public_key1&lt;br /&gt;
# rsa_private_key2&lt;br /&gt;
# aes_encrypt&lt;br /&gt;
# aes_decrypt&lt;br /&gt;
&lt;br /&gt;
Our tests cover '''7''' of them: &lt;br /&gt;
&lt;br /&gt;
# action_allowed?&lt;br /&gt;
# db_query&lt;br /&gt;
# json_generator&lt;br /&gt;
# client&lt;br /&gt;
# send_post_request&lt;br /&gt;
# aes_encrypt&lt;br /&gt;
# aes_decrypt&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;rsa_public_key1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;rsa_private_key2&amp;lt;/code&amp;gt; methods require the public key file&lt;br /&gt;
'''''public1.pem''''' and the private key file '''''private1.pem'''''; however, these files are missing. We consulted the author, Yang Song, about the missing files but received no response.&lt;br /&gt;
&lt;br /&gt;
=== Coverage ===&lt;br /&gt;
Test coverage increased from '''0%''' to '''52.77%'''.&lt;br /&gt;
&lt;br /&gt;
No tests existed for &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; prior to our work, so the previous test coverage was '''0%'''.&lt;br /&gt;
&lt;br /&gt;
According to the coverage report generated by the &amp;lt;code&amp;gt;Simplecov&amp;lt;/code&amp;gt; gem, our tests cover '''127''' lines. The remaining '''87''' lines are related to the missing public key file and to deprecated functions used to gather data for a paper [https://doi.org/10.1109/FIE.2015.7344292] published in 2015.&lt;br /&gt;
&lt;br /&gt;
The method &amp;lt;code&amp;gt;send_post_request&amp;lt;/code&amp;gt; inside &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; should be removed. The Fall 2020 team [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2064._Refactor_reputation_web_service_controller.rb#Issues_to_be_fixed] also noted these issues.&lt;br /&gt;
&lt;br /&gt;
If the redundant method &amp;lt;code&amp;gt;send_post_request&amp;lt;/code&amp;gt; in the controller &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; is commented out, test coverage reaches '''87.22%'''.&lt;br /&gt;
&lt;br /&gt;
=== Relevant Links ===&lt;br /&gt;
Github Repo : [https://github.com/HenryChen34/expertiza/tree/beta]&lt;br /&gt;
&lt;br /&gt;
Pull request: [https://github.com/expertiza/expertiza/pull/2128]&lt;br /&gt;
&lt;br /&gt;
== Future Tasks ==&lt;br /&gt;
As our team could not obtain the public/private key pair needed to access the reputation web service, we were only able to get to the step just before sending the JSON to the reputation-algorithm web service. Therefore, future work is required to test the reputation system. &lt;br /&gt;
&lt;br /&gt;
# The current setup is based on assignment_1, which uses Lauw's algorithm to calculate the reputation score. A future testing team will need to set up a similar object set for assignment_2, which uses Hamer's algorithm. &lt;br /&gt;
# In send_post_request, there are references to specific assignments, such as 724, 735, and 756. They were put in to gather data for a paper published in 2015; they are no longer relevant and should be removed. (This should be replaced by the comment the professor mentioned in the meeting.)&lt;br /&gt;
# The current test cases cover only round-1 reputation scoring, even though assignment_1 and assignment_2 are set up for two rounds of review. Since the reputation web service is inaccessible, creating round-2 objects is pointless without round-1 reputation scores. Therefore, future test cases should stub the behavior of completing assignment_questionnaire_1_2 and assignment_questionnaire_2_2 (the second-round questionnaires), assuming the reputation web service becomes available. &lt;br /&gt;
# The db_query method violates the DRY principle, since it repeatedly calculates the assignment's score sum. Such sum calculations should be handled in assignment.rb.&lt;br /&gt;
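As a sketch of the refactoring suggested in the last item, the repeated sum could live on the assignment object. This is an illustrative plain-Ruby stand-in under our own naming, not the actual Expertiza Assignment model:&lt;br /&gt;

```ruby
# Illustrative stand-in showing the DRY refactoring: the score sum that
# db_query currently recomputes inline gets a single home on the assignment.
# Plain Ruby with hypothetical names, not Expertiza's ActiveRecord model.
class Assignment
  def initialize(review_scores)
    @review_scores = review_scores
  end

  # Single place to compute the sum, instead of repeating it in db_query.
  def total_review_score
    @review_scores.sum
  end
end

assignment = Assignment.new([60.0, 80.0, 100.0])
assignment.total_review_score # => 240.0
```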
&lt;br /&gt;
== Collaborators ==&lt;br /&gt;
Jinku Cui (jcui23)&lt;br /&gt;
&lt;br /&gt;
Henry Chen (hchen34)&lt;br /&gt;
&lt;br /&gt;
Dong Li (dli35)&lt;br /&gt;
&lt;br /&gt;
Zijun Lu (zlu5)&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://doi.org/10.1109/FIE.2015.7344292 Pluggable reputation systems for peer review: A web-service approach]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142398</id>
		<title>CSC/ECE 517 Fall 2021 - E2168. Testing - Reputations</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142398"/>
		<updated>2021-11-30T09:15:24Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* Testing Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;q&amp;gt;Online peer-review systems are now in common use in higher education. They free the instructor and course staff from having to provide personally all the feedback that students receive on their work. However, if we want to assure that all students receive competent feedback, or even use peer-assigned grades, we need a way to judge which peer reviewers are most credible. The solution is to use a reputation system.&amp;lt;/q&amp;gt; &amp;lt;br&amp;gt; The reputation system is meant to provide objective value to student assigned peer review scores. Students select from a list of tasks to be performed and then prepare their work and submit it to a peer-review system. The work is then reviewed by other students who offer comments/graded feedback to help the submitters improve their work.&lt;br /&gt;
During the peer-review period it is important to determine which reviews are more accurate and of higher quality. Reputation is one way to achieve this goal; it is a quantitative measure for judging which peer reviewers are more reliable.&lt;br /&gt;
Peer reviewers can use Expertiza to score an author. If Expertiza shows confidence ratings for grades based upon the reviewers' reputations, then authors can more easily determine the legitimacy of the peer-assigned score. In addition, the teaching staff can examine the quality of each peer review based on reputation values and, potentially, crowd-source a significant portion of the grading function.&lt;br /&gt;
Currently the reputation system is implemented in Expertiza through a web service, but no tests have been written for it. Thus, our goal is to set up assignments and reviews that would produce specific reputation scores, and to test that the correct reputations are in fact being produced.&lt;br /&gt;
&lt;br /&gt;
=== System Design ===&lt;br /&gt;
The diagram below, referenced from project E1625, gives an overall description of the reputation system.&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Diagram E1625.png|1000px]]&amp;lt;br&amp;gt;&lt;br /&gt;
There are two algorithms intended for calculating the reputation values of participants. &lt;br /&gt;
&lt;br /&gt;
There is a [https://expertiza.ncsu.edu/reputation_web_service/client web-service] (the link is accessible only to instructors) that serves a JSON response containing the reputation value, seeded with the last known reputation value stored in the ''participants'' table. An instructor can specify which algorithm to use for a particular assignment to calculate the confidence rating. &lt;br /&gt;
&lt;br /&gt;
As the [https://docs.google.com/viewer?url=https%3A%2F%2Fwww.researchgate.net%2Fprofile%2FYang_Song36%2Fpublication%2F289528736_Pluggable_Reputation_Systems_for_Peer_Review_a_Web-Service_Approach%2Flinks%2F568ec8d008ae78cc05160aed.pdf%3FinViewer%3D0%26pdfJsDownload%3D0%26origin%3Dpublication_detail paper] on reputation systems observes, “the Hamer-peer algorithm has the lowest maximum absolute bias and the Lauw-peer algorithm has the lowest overall bias. This indicates, from the instructor’s perspective, if there are further assignments of this kind, expert grading may not be necessary.”&lt;br /&gt;
:;Reputation range of Hamer’s algorithm:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.5 and &amp;lt;= 1&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 1 and &amp;lt;= 1.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 1.5 and &amp;lt;= 2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The main difference between the Hamer-peer and the Lauw-peer algorithm is that the Lauw-peer algorithm keeps track of the reviewer's leniency (“bias”), which can be either positive or negative. A positive leniency indicates the reviewer tends to give higher scores than average.  This project determines reputation by subtracting the absolute value of the leniency from 1. Additionally, the range for Hamer’s algorithm is (0,∞) while for Lauw’s algorithm it is [0,1]. &lt;br /&gt;
:;Reputation range of Lauw’s algorithm is&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.2 and &amp;lt;= 0.4&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 0.4 and &amp;lt;= 0.6&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 0.6 and &amp;lt;= 0.8&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 0.8&lt;br /&gt;
&lt;br /&gt;
The instructor can choose to show results from Hamer’s algorithm or Lauw’s algorithm. The default algorithm should be Lauw’s algorithm.&lt;br /&gt;
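The Lauw reputation step described above (1 minus the absolute leniency) and the color bands listed above can be sketched in plain Ruby; the method names are our own illustrations, the actual mapping lives in Expertiza's code:&lt;br /&gt;

```ruby
# Lauw reputation as described above: 1 minus the absolute leniency ("bias").
# Method names are illustrative, not Expertiza's.
def lauw_reputation(leniency)
  1.0 - leniency.abs
end

# Map a Lauw reputation value onto the color bands listed above.
def lauw_color(value)
  if value < 0.2
    'red'
  elsif value <= 0.4
    'yellow'
  elsif value <= 0.6
    'orange'
  elsif value <= 0.8
    'light green'
  else
    'green'
  end
end

lauw_color(lauw_reputation(0.1))  # => "green"  (reputation 0.9)
lauw_color(lauw_reputation(-0.5)) # => "orange" (reputation 0.5)
```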
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
* Double and stub an assignment and a few submissions to it, under different review rubrics&lt;br /&gt;
* Manually calculate reputation scores based on the paper &amp;quot;Pluggable reputation systems for peer review: A web-service approach&amp;quot;&lt;br /&gt;
* Validate that the reputation scores generated by the reputation system under different review rubrics match manual computations using Hamer's and Lauw's algorithms.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
*reputation_web_service_controller_spec&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Setup Testing Objects ===&lt;br /&gt;
In order to test reputation, it is crucial to create sample reviews from which reputation scores can be obtained. During the kickoff meeting, our team defined four necessary steps to follow for testing; the appropriate objects to create and configure are discussed below.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Each of these objects contributes to the success of the subsequent reputation tests. Some fields in each object can be empty or take default values, and some attributes are not relevant to the tests. When implementing the tests, the test scripts need to generate or set fixed values for the corresponding fields.&lt;br /&gt;
&lt;br /&gt;
==== Assignment ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[assignments]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
submitter_count = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviewers = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews_allowed = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
rounds_of_reviews = 2;&amp;lt;br&amp;gt;&lt;br /&gt;
reputation_algorithm = lauw/hamer;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: Two assignment objects were created. Assignment_1 uses Lauw's algorithm, whereas assignment_2 uses Hamer's algorithm.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @assignment_1 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'lauw', id: 1)&lt;br /&gt;
  @assignment_2 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'hamer', id: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questionnaires(Rubrics) ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[assignment_questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
min_question_score = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
max_question_score = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
type = ReviewQuestionnaire;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: We will define the assignment with a ReviewQuestionnaire-type rubric. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @questionnaire_1 = create(:questionnaire, min_question_score: 0, max_question_score: 5, type: 'ReviewQuestionnaire', id: 1)&lt;br /&gt;
  # assignment_questionnaire_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means the questionnaire for assignment #i's round #j of review.&lt;br /&gt;
  @assignment_questionnaire_1_1 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_1_2 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
  @assignment_questionnaire_2_1 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_2_2 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questions under Questionnaires ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questions]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # question_i_j means question #j in questionnaire #i.&lt;br /&gt;
  @question_1_1 = create(:question, questionnaire_id: @questionnaire_1.id, id: 1)&lt;br /&gt;
  @question_1_2 = create(:question, questionnaire_id: @questionnaire_1.id, id: 2)&lt;br /&gt;
  @question_1_3 = create(:question, questionnaire_id: @questionnaire_1.id, id: 3)&lt;br /&gt;
  @question_1_4 = create(:question, questionnaire_id: @questionnaire_1.id, id: 4)&lt;br /&gt;
  @question_1_5 = create(:question, questionnaire_id: @questionnaire_1.id, id: 5)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Reviewers and Reviewees ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[participants]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[teams]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Reviewers (Participant):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewer_1 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_2 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_3 = create(:participant, can_review: 1)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Reviewees (Teams):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewee_1 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_2 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_3 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Responses ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[response_maps]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[responses]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
reviewed_object_id = &amp;lt;target_assignment&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewer_id = &amp;lt;target_reviewer&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewee_id = &amp;lt;target_reviewee&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
Note: The response map is set up to establish the relationship between a reviewer and a reviewee of an assignment.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Response_maps:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_map_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_map_1_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_2_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_3_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Responses:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_1_1 = create(:response, is_submitted: true, map_id: @response_map_1_1.id)&lt;br /&gt;
  @response_1_2 = create(:response, is_submitted: true, map_id: @response_map_1_2.id)&lt;br /&gt;
  @response_1_3 = create(:response, is_submitted: true, map_id: @response_map_1_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_2_1 = create(:response, is_submitted: true, map_id: @response_map_2_1.id)&lt;br /&gt;
  @response_2_2 = create(:response, is_submitted: true, map_id: @response_map_2_2.id)&lt;br /&gt;
  @response_2_3 = create(:response, is_submitted: true, map_id: @response_map_2_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_3_1 = create(:response, is_submitted: true, map_id: @response_map_3_1.id)&lt;br /&gt;
  @response_3_2 = create(:response, is_submitted: true, map_id: @response_map_3_2.id)&lt;br /&gt;
  @response_3_3 = create(:response, is_submitted: true, map_id: @response_map_3_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Relevant Methods ===&lt;br /&gt;
==== db_query ====&lt;br /&gt;
This is the standard database query method; calling it returns peer-review grades for a given assignment id.&lt;br /&gt;
We test this method in two respects: &lt;br /&gt;
1. Whether the returned grades are correct according to the specified algorithm.&lt;br /&gt;
2. Whether the query itself is correct.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test db_query' do&lt;br /&gt;
    it 'return average score' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [1, 2, 3, 4, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 2)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 4)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      result = ReputationWebServiceController.new.db_query(1, 1, false)&lt;br /&gt;
      #expect to see a data array return generated by the score given.&lt;br /&gt;
      expect(result).to eq([[2, 1, 60.0]])&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== json_generator ====&lt;br /&gt;
This method generates a hash representation of the reviews. We test it by calling the method, converting the result to JSON, and printing it to verify its correctness.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test json_generator' do&lt;br /&gt;
    it 'test 3 reviewers for one reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_2's review for reviewee_1: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_3's review for reviewee_1: [1, 1, 1, 1, 1]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0, &amp;quot;stu3&amp;quot;=&amp;gt;60.0, &amp;quot;stu4&amp;quot;=&amp;gt;20.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    it 'test same reviewer for different reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_1's review for reviewee_2: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0}, &amp;quot;submission2&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;60.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
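The nested hashes expected above are what the controller would serialize to JSON before posting to the reputation web service. A small sketch of that last step (the serialization call is our assumption about how the hash would be encoded, not code lifted from the controller):&lt;br /&gt;

```ruby
require 'json'

# Shape json_generator is expected to return: one entry per reviewee
# ("submission"), mapping each reviewer ("stu") to that review's percentage score.
reviews = { 'submission1' => { 'stu2' => 100.0, 'stu3' => 60.0, 'stu4' => 20.0 } }

payload = JSON.generate(reviews)
puts payload
# Round-tripping confirms the structure survives serialization.
raise 'round trip failed' unless JSON.parse(payload) == reviews
```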
&lt;br /&gt;
==== send_post_request ==== &lt;br /&gt;
This method sends a POST request to peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms to calculate the reputation result, shows the result in the corresponding UI, and updates the given reviewer's reputation. We will test this method against the algorithms in the paper: first the resulting reputation value, then the updated value in the database.&lt;br /&gt;
&lt;br /&gt;
Note that this method is not functioning because the public &amp;amp; private key files for RSA encryption are missing. The algorithm involving &amp;quot;expert grade&amp;quot; was also never implemented. &lt;br /&gt;
Thus, this method could not be properly tested. However, we have provided a template for creating tests in the future.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test send_post_request' do&lt;br /&gt;
    it 'failed because of no public key file' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # choose hamer algorithm without expert grade (instructor's given grade)&lt;br /&gt;
      params = {assignment_id: 1, round_num: 1, algorithm: 'hammer', checkbox: {expert_grade: &amp;quot;empty&amp;quot;}}&lt;br /&gt;
      session = {user: build(:instructor, id: 1)}&lt;br /&gt;
&lt;br /&gt;
      expect(true).to eq(true)&lt;br /&gt;
&lt;br /&gt;
      # comment out because send_post_request method request public key file while this file is missing&lt;br /&gt;
      # so at this time send_post_request is not functioning normally&lt;br /&gt;
      # if it functions correctly, it will update the reviewer's reputation score according to the selected reputation algorithm.&lt;br /&gt;
&lt;br /&gt;
      # get :send_post_request, params, session&lt;br /&gt;
      # expect(response).to redirect_to '/reputation_web_service/client'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
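Until the key files are restored, one way future teams could still exercise this logic is to hide the HTTP call behind an injectable client and substitute a canned response in specs. This is only a sketch of the idea; send_reputation_request and FakeClient are hypothetical names, not methods that exist in the controller:&lt;br /&gt;

```ruby
require 'json'

# Hypothetical wrapper: builds the request body and delegates the POST to an
# injected client, so specs never hit peerlogic.csc.ncsu.edu.
def send_reputation_request(reviews, algorithm, client)
  body = JSON.generate('algorithm' => algorithm, 'reviews' => reviews)
  client.post('/reputation/calculations/reputation_algorithms', body)
end

# Canned stand-in for the real web service, used only in tests.
class FakeClient
  attr_reader :last_path, :last_body

  def post(path, body)
    @last_path = path
    @last_body = body
    '{"hamer":{"stu2":1.2}}' # canned reputation result
  end
end

client   = FakeClient.new
response = send_reputation_request({ 'submission1' => { 'stu2' => 100.0 } }, 'hamer', client)
puts response
```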
&lt;br /&gt;
== Results ==&lt;br /&gt;
=== Testing Results ===&lt;br /&gt;
&lt;br /&gt;
All of our tests pass.&lt;br /&gt;
&lt;br /&gt;
There are a total of '''10''' methods in the &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; controller:&lt;br /&gt;
&lt;br /&gt;
# action_allowed?&lt;br /&gt;
# db_query&lt;br /&gt;
# db_query_with_quiz_score&lt;br /&gt;
# json_generator&lt;br /&gt;
# client&lt;br /&gt;
# send_post_request&lt;br /&gt;
# rsa_public_key1&lt;br /&gt;
# rsa_private_key2&lt;br /&gt;
# aes_encrypt&lt;br /&gt;
# aes_decrypt&lt;br /&gt;
&lt;br /&gt;
Our tests cover '''7''' of them: &lt;br /&gt;
&lt;br /&gt;
# action_allowed?&lt;br /&gt;
# db_query&lt;br /&gt;
# json_generator&lt;br /&gt;
# client&lt;br /&gt;
# send_post_request&lt;br /&gt;
# aes_encrypt&lt;br /&gt;
# aes_decrypt&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;rsa_public_key1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;rsa_private_key2&amp;lt;/code&amp;gt; methods require the public key file &lt;br /&gt;
'''''public1.pem''''' and the private key file '''''private1.pem'''''. However, both files are missing. We contacted the author, Yang Song, about the missing files but received no response.&lt;br /&gt;
&lt;br /&gt;
=== Coverage ===&lt;br /&gt;
The test coverage increased from '''0%''' to '''52.77%'''.&lt;br /&gt;
&lt;br /&gt;
No tests existed for &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; prior to our work, so the previous test coverage was '''0%'''.&lt;br /&gt;
&lt;br /&gt;
From the coverage report generated by the &amp;lt;code&amp;gt;Simplecov&amp;lt;/code&amp;gt; gem, '''127''' lines are covered by our tests. The remaining '''87''' lines of code are related to the missing public key file and to deprecated functions used to gather data for a paper [https://doi.org/10.1109/FIE.2015.7344292] published in 2015.&lt;br /&gt;
&lt;br /&gt;
The method &amp;lt;code&amp;gt;send_post_request&amp;lt;/code&amp;gt; inside &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; should be removed. The Fall 2020 team [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2064._Refactor_reputation_web_service_controller.rb#Issues_to_be_fixed] also mentioned these issues.&lt;br /&gt;
&lt;br /&gt;
If the redundant method &amp;lt;code&amp;gt;send_post_request&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; is commented out, the test coverage reaches '''87.22%'''.&lt;br /&gt;
&lt;br /&gt;
=== Relevant Links ===&lt;br /&gt;
Github Repo : [https://github.com/HenryChen34/expertiza/tree/beta]&lt;br /&gt;
&lt;br /&gt;
Pull request: [https://github.com/expertiza/expertiza/pull/2128]&lt;br /&gt;
&lt;br /&gt;
== Future Tasks ==&lt;br /&gt;
As our team could not obtain the public/private key pair needed to access the reputation web service, we were only able to reach the step just before sending the JSON to the web service of reputation algorithms. Therefore, future steps are required to test the reputation system. &lt;br /&gt;
&lt;br /&gt;
# The current setup is based on assignment_1, which uses Lauw's algorithm to calculate reputation scores. Future testing teams need to set up a similar object set for assignment_2, which uses Hamer's algorithm. &lt;br /&gt;
# In send_post_request, there are references to specific assignments, such as 724, 735, and 756. They were added to gather data for a paper published in 2015, are no longer relevant, and should be removed. (Need to be replaced by the comment the professor mentioned in the meeting.)&lt;br /&gt;
# The current test cases cover only round 1 reputation scoring, even though assignment_1 and assignment_2 are both set up as two-round review assignments. Because the reputation web service is inaccessible, creating round 2 objects is pointless without round 1 reputation scores. Future test cases should therefore stub the behavior of fulfilling assignment_questionnaire_1_2 and assignment_questionnaire_2_2 (the second-round questionnaires), assuming the reputation web service is available by then.  &lt;br /&gt;
# The db_query method violates the DRY principle, as it repeatedly calculates the assignment's score sum. Such sum calculation should be handled in assignment.rb.&lt;br /&gt;
&lt;br /&gt;
== Collaborators ==&lt;br /&gt;
Jinku Cui (jcui23)&lt;br /&gt;
&lt;br /&gt;
Henry Chen (hchen34)&lt;br /&gt;
&lt;br /&gt;
Dong Li (dli35)&lt;br /&gt;
&lt;br /&gt;
Zijun Lu (zlu5)&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://doi.org/10.1109/FIE.2015.7344292 Pluggable reputation systems for peer review: A web-service approach]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142380</id>
		<title>CSC/ECE 517 Fall 2021 - E2168. Testing - Reputations</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142380"/>
		<updated>2021-11-30T06:04:50Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* Testing Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;q&amp;gt;Online peer-review systems are now in common use in higher education. They free the instructor and course staff from having to provide personally all the feedback that students receive on their work. However, if we want to assure that all students receive competent feedback, or even use peer-assigned grades, we need a way to judge which peer reviewers are most credible. The solution is to use a reputation system.&amp;lt;/q&amp;gt; &amp;lt;br&amp;gt; The reputation system is meant to provide objective value to student assigned peer review scores. Students select from a list of tasks to be performed and then prepare their work and submit it to a peer-review system. The work is then reviewed by other students who offer comments/graded feedback to help the submitters improve their work.&lt;br /&gt;
During the peer-review period it is important to determine which reviews are more accurate and of higher quality. Reputation is one way to achieve this goal; it is a quantitative measure of which peer reviewers are more reliable.&lt;br /&gt;
Peer reviewers can use Expertiza to score an author. If Expertiza shows a confidence rating for grades based upon the reviewer's reputation, authors can more easily judge the legitimacy of the peer-assigned score. In addition, the teaching staff can examine the quality of each peer review based on reputation values and, potentially, crowd-source a significant portion of the grading function.&lt;br /&gt;
The reputation system is currently implemented in Expertiza through a web service, but no tests exist for it. Thus our goal is to set up assignments and reviews that would produce specific reputation scores, and to test that the correct reputations are in fact being produced.&lt;br /&gt;
&lt;br /&gt;
=== System Design ===&lt;br /&gt;
The diagram below is referenced from project E1625 and gives an overall description of the reputation system.&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Diagram E1625.png|1000px]]&amp;lt;br&amp;gt;&lt;br /&gt;
There are two algorithms intended for use in calculation of the reputation values for participants. &lt;br /&gt;
&lt;br /&gt;
There is a [https://expertiza.ncsu.edu/reputation_web_service/client web-service] (the link is accessible only to instructors) available which serves a JSON response containing the reputation value, seeded with the last known reputation value which we store in the ''participants'' table. An instructor can specify which algorithm to use for a particular assignment to calculate the confidence rating. &lt;br /&gt;
&lt;br /&gt;
As the [https://docs.google.com/viewer?url=https%3A%2F%2Fwww.researchgate.net%2Fprofile%2FYang_Song36%2Fpublication%2F289528736_Pluggable_Reputation_Systems_for_Peer_Review_a_Web-Service_Approach%2Flinks%2F568ec8d008ae78cc05160aed.pdf%3FinViewer%3D0%26pdfJsDownload%3D0%26origin%3Dpublication_detail paper] on reputation systems observes, “the Hamer-peer algorithm has the lowest maximum absolute bias and the Lauw-peer algorithm has the lowest overall bias. This indicates, from the instructor’s perspective, that if there are further assignments of this kind, expert grading may not be necessary.”&lt;br /&gt;
:;The reputation ranges for Hamer’s algorithm are:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.5 and &amp;lt;= 1&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 1 and &amp;lt;= 1.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 1.5 and &amp;lt;= 2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
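The color bands above can be captured in a small helper, which future specs could use to assert that a computed Hamer reputation falls in the expected band (the function name is ours, not part of Expertiza):&lt;br /&gt;

```ruby
# Map a Hamer reputation value (range (0, infinity)) to the display color above.
def hamer_color(value)
  if    value < 0.5  then 'red'
  elsif value <= 1.0 then 'yellow'
  elsif value <= 1.5 then 'orange'
  elsif value <= 2.0 then 'light green'
  else                    'green'
  end
end

puts hamer_color(1.2) # "orange": 1.2 is > 1 and <= 1.5
```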
The main difference between the Hamer-peer and the Lauw-peer algorithm is that the Lauw-peer algorithm keeps track of the reviewer's leniency (“bias”), which can be either positive or negative. A positive leniency indicates the reviewer tends to give higher scores than average.  This project determines reputation by subtracting the absolute value of the leniency from 1. Additionally, the range for Hamer’s algorithm is (0,∞) while for Lauw’s algorithm it is [0,1]. &lt;br /&gt;
:;The reputation ranges for Lauw’s algorithm are:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.2 and &amp;lt;= 0.4&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 0.4 and &amp;lt;= 0.6&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 0.6 and &amp;lt;= 0.8&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 0.8&lt;br /&gt;
&lt;br /&gt;
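The leniency-to-reputation step described above, together with the Lauw color bands, can be sketched as follows (the helper names are ours, not part of Expertiza):&lt;br /&gt;

```ruby
# Reputation under this project's Lauw handling: 1 minus |leniency|, so a
# strongly biased reviewer (in either direction) ends up with a low reputation.
def lauw_reputation(leniency)
  1.0 - leniency.abs
end

# Map a Lauw reputation value (range [0, 1]) to the display color above.
def lauw_color(value)
  if    value < 0.2  then 'red'
  elsif value <= 0.4 then 'yellow'
  elsif value <= 0.6 then 'orange'
  elsif value <= 0.8 then 'light green'
  else                    'green'
  end
end

puts lauw_color(lauw_reputation(-0.1)) # leniency -0.1 -> reputation 0.9 -> "green"
```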
The instructor can choose to show results from Hamer’s algorithm or Lauw’s algorithm. The default algorithm should be Lauw’s algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
* Double and stub an assignment and a few submissions to it, under different review rubrics&lt;br /&gt;
* Manually calculate reputation scores based on the paper &amp;quot;Pluggable reputation systems for peer review: A web-service approach&amp;quot;&lt;br /&gt;
* Validate the reputation scores produced by the reputation system for different review rubrics against manual computations using Hamer's and Lauw's algorithms&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
*reputation_web_service_controller_spec&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Setup Testing Objects ===&lt;br /&gt;
In order to test reputations, it is crucial to create sample reviews from which reputation scores can be obtained. During the kickoff meeting, our team defined four necessary steps to follow for testing. The appropriate objects can be created and configured as discussed below.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The creation of each object contributes to the success of the subsequent reputation tests. Some fields in each object can be empty or have default values, and some attributes are not relevant to the tests. When implementing the tests, the test scripts need to generate or set fixed values for the corresponding fields.&lt;br /&gt;
&lt;br /&gt;
==== Assignment ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[assignments]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
submitter_count = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviewers = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews_allowed = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
rounds_of_reviews = 2;&amp;lt;br&amp;gt;&lt;br /&gt;
reputation_algorithm = lauw/hamer;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: Two assignment objects were created. Assignment_1 uses Lauw's algorithm, whereas assignment_2 uses Hamer's algorithm.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @assignment_1 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'lauw', id: 1)&lt;br /&gt;
  @assignment_2 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'hamer', id: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questionnaires(Rubrics) ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[assignment_questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
min_question_score = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
max_question_score = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
type = ReviewQuestionnaire;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: We define the assignments with ReviewQuestionnaire-type rubrics. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @questionnaire_1 = create(:questionnaire, min_question_score: 0, max_question_score: 5, type: 'ReviewQuestionnaire', id: 1)&lt;br /&gt;
  # assignment_questionnaire_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means assignment #i's #j-th round of review.&lt;br /&gt;
  @assignment_questionnaire_1_1 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_1_2 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
  @assignment_questionnaire_2_1 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_2_2 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questions under Questionnaires ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questions]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # question_i_j means question #j in questionnaire #i.&lt;br /&gt;
  @question_1_1 = create(:question, questionnaire_id: @questionnaire_1.id, id: 1)&lt;br /&gt;
  @question_1_2 = create(:question, questionnaire_id: @questionnaire_1.id, id: 2)&lt;br /&gt;
  @question_1_3 = create(:question, questionnaire_id: @questionnaire_1.id, id: 3)&lt;br /&gt;
  @question_1_4 = create(:question, questionnaire_id: @questionnaire_1.id, id: 4)&lt;br /&gt;
  @question_1_5 = create(:question, questionnaire_id: @questionnaire_1.id, id: 5)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Reviewers and Reviewees ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[participants]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[teams]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Reviewers (Participant):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewer_1 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_2 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_3 = create(:participant, can_review: 1)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Reviewees (Teams):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewee_1 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_2 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_3 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Responses ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[response_maps]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[responses]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
reviewed_object_id = &amp;lt;target_assignment&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewer_id = &amp;lt;target_reviewer&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewee_id = &amp;lt;target_reviewee&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
Note: The response maps are set up to determine the relationship between the reviewer and reviewee of an assignment.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Response_maps:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_map_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_map_1_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_2_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_3_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Responses:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_1_1 = create(:response, is_submitted: true, map_id: @response_map_1_1.id)&lt;br /&gt;
  @response_1_2 = create(:response, is_submitted: true, map_id: @response_map_1_2.id)&lt;br /&gt;
  @response_1_3 = create(:response, is_submitted: true, map_id: @response_map_1_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_2_1 = create(:response, is_submitted: true, map_id: @response_map_2_1.id)&lt;br /&gt;
  @response_2_2 = create(:response, is_submitted: true, map_id: @response_map_2_2.id)&lt;br /&gt;
  @response_2_3 = create(:response, is_submitted: true, map_id: @response_map_2_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_3_1 = create(:response, is_submitted: true, map_id: @response_map_3_1.id)&lt;br /&gt;
  @response_3_2 = create(:response, is_submitted: true, map_id: @response_map_3_2.id)&lt;br /&gt;
  @response_3_3 = create(:response, is_submitted: true, map_id: @response_map_3_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Relevant Methods ===&lt;br /&gt;
==== db_query ====&lt;br /&gt;
This method queries the database and returns the peer-review grades for a given assignment id.&lt;br /&gt;
We will test this method in two respects: &lt;br /&gt;
1. Test whether the returned grades are correct under the specified algorithm.&lt;br /&gt;
2. Test the correctness of the query itself.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test db_query' do&lt;br /&gt;
    it 'return average score' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [1, 2, 3, 4, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 2)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 4)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      result = ReputationWebServiceController.new.db_query(1, 1, false)&lt;br /&gt;
      # expect a data array generated from the scores given&lt;br /&gt;
      expect(result).to eq([[2, 1, 60.0]])&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
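The expected value above can be checked by hand: with five questions scored 0 to 5, the answers [1, 2, 3, 4, 5] sum to 15 out of a possible 25, i.e. 60.0%. A minimal sketch of that arithmetic (the helper name is ours, not part of Expertiza):&lt;br /&gt;

```ruby
# Percentage score a single review should contribute, assuming every
# question shares the same max score (5, matching @questionnaire_1 above).
def review_percentage(answers, max_question_score: 5)
  total    = answers.sum
  possible = answers.size * max_question_score
  total * 100.0 / possible
end

puts review_percentage([1, 2, 3, 4, 5]) # 15 / 25 * 100 = 60.0
```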
&lt;br /&gt;
==== json_generator ====&lt;br /&gt;
This method generates a hash representation of the reviews. We test it by calling the method, converting the result to JSON, and comparing the output against the expected values.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test json_generator' do&lt;br /&gt;
    it 'test 3 reviewer for one reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_2's review for reviewee_1: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_3's review for reviewee_1: [1, 1, 1, 1, 1]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0, &amp;quot;stu3&amp;quot;=&amp;gt;60.0, &amp;quot;stu4&amp;quot;=&amp;gt;20.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    it 'test same reviewer for different reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_1's review for reviewee_2: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0}, &amp;quot;submission2&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;60.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
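&lt;br /&gt;
The shape of the expected hash (one entry per submission, mapping each reviewer's fixture user to a percentage grade) can be reproduced by hand. The builder and its row format below are our own illustration, not controller code:&lt;br /&gt;

```ruby
require 'json'

# Build the nested {"submission#" => {"stu#" => grade}} hash that the
# json_generator tests expect, from [reviewee_id, reviewer_id, grade] rows.
def build_review_hash(rows)
  rows.each_with_object({}) do |(reviewee_id, reviewer_id, grade), acc|
    (acc["submission#{reviewee_id}"] ||= {})["stu#{reviewer_id}"] = grade
  end
end

# One reviewer (stu2) reviewing two submissions, as in the second test above.
rows = [[1, 2, 100.0], [2, 2, 60.0]]
puts build_review_hash(rows).to_json
# {"submission1":{"stu2":100.0},"submission2":{"stu2":60.0}}
```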
&lt;br /&gt;
==== send_post_request ==== &lt;br /&gt;
This method sends a POST request to peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms to calculate the reputation result, shows the result in the corresponding UI, and updates the given reviewer's reputation. We will test this method against the algorithms in the paper: first the returned reputation value, then the updated value in the database.&lt;br /&gt;
&lt;br /&gt;
Note that this method is not functioning because the public and private key files for RSA encryption are missing. The algorithm involving &amp;quot;expert grade&amp;quot; was also never implemented. &lt;br /&gt;
Thus this method could not be properly tested; however, we have provided a template for creating tests in the future.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test send_post_request' do&lt;br /&gt;
    it 'failed because of no public key file' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # choose hammer algorithm without expert grade (instructor's given grade)&lt;br /&gt;
      params = {assignment_id: 1, round_num: 1, algorithm: 'hammer', checkbox: {expert_grade: &amp;quot;empty&amp;quot;}}&lt;br /&gt;
      session = {user: build(:instructor, id: 1)}&lt;br /&gt;
&lt;br /&gt;
      expect(true).to eq(true)&lt;br /&gt;
&lt;br /&gt;
      # commented out because send_post_request requires the public key file, which is missing,&lt;br /&gt;
      # so send_post_request is not currently functioning.&lt;br /&gt;
      # if it functioned correctly, it would update the reviewer's reputation score according to the selected reputation algorithm.&lt;br /&gt;
&lt;br /&gt;
      # get :send_post_request, params, session&lt;br /&gt;
      # expect(response).to redirect_to '/reputation_web_service/client'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
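&lt;br /&gt;
If send_post_request worked, the value written back for Lauw's algorithm would follow the formula in the System Design section: reputation = 1 − |leniency|, within Lauw's [0, 1] range. Below is a hypothetical helper restating that formula; it is our own sketch, not the web service's implementation:&lt;br /&gt;

```ruby
# Lauw-style reputation as described in the design notes:
# reputation = 1 - |leniency|, clamped to Lauw's [0, 1] range.
def lauw_reputation(leniency)
  value = 1.0 - leniency.abs
  value.clamp(0.0, 1.0)
end

puts lauw_reputation(0.25)   # 0.75
puts lauw_reputation(-1.4)   # 0.0
```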
&lt;br /&gt;
== Results ==&lt;br /&gt;
=== Testing Results ===&lt;br /&gt;
&lt;br /&gt;
All of our tests pass.&lt;br /&gt;
&lt;br /&gt;
There are a total of '''10''' methods in the &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; controller:&lt;br /&gt;
&lt;br /&gt;
# action_allowed?&lt;br /&gt;
# db_query&lt;br /&gt;
# db_query_with_quiz_score&lt;br /&gt;
# json_generator&lt;br /&gt;
# client&lt;br /&gt;
# send_post_request&lt;br /&gt;
# rsa_public_key1&lt;br /&gt;
# rsa_private_key2&lt;br /&gt;
# aes_encrypt&lt;br /&gt;
# aes_decrypt&lt;br /&gt;
&lt;br /&gt;
Our tests cover '''3''' of them:&lt;br /&gt;
&lt;br /&gt;
# db_query&lt;br /&gt;
# json_generator&lt;br /&gt;
# send_post_request&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;rsa_public_key1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;rsa_private_key2&amp;lt;/code&amp;gt; methods require the public key file&lt;br /&gt;
'''''public1.pem''''' and the private key file '''''private1.pem''''', both of which are missing. We consulted the author Yang Song about the missing files but received no response, so we cannot proceed with testing the two related methods, &amp;lt;code&amp;gt;aes_encrypt&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;aes_decrypt&amp;lt;/code&amp;gt;. The '''reputation''' algorithm generates reputation scores for students in peer-review assignments; the method &amp;lt;code&amp;gt;db_query_with_quiz_score&amp;lt;/code&amp;gt; is used instead when the assignment type is &amp;lt;code&amp;gt;'quiz scores'&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Coverage ===&lt;br /&gt;
The test coverage increases from '''0%''' to '''52.77%'''.&lt;br /&gt;
&lt;br /&gt;
No tests existed for &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; prior to our work, so the previous test coverage was '''0%'''.&lt;br /&gt;
&lt;br /&gt;
According to the coverage report generated by the &amp;lt;code&amp;gt;SimpleCov&amp;lt;/code&amp;gt; gem, our tests cover '''127''' lines. The remaining '''87''' lines are related to the missing public key file and to deprecated functions for gathering data for a paper [https://doi.org/10.1109/FIE.2015.7344292] published in 2015.&lt;br /&gt;
&lt;br /&gt;
The method &amp;lt;code&amp;gt;send_post_request&amp;lt;/code&amp;gt; inside &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; should be removed. The team in Fall 2020 [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2064._Refactor_reputation_web_service_controller.rb#Issues_to_be_fixed] also mentioned these issues.&lt;br /&gt;
&lt;br /&gt;
If the redundant method &amp;lt;code&amp;gt;send_post_request&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; is commented out, the test coverage reaches '''87.22%'''.&lt;br /&gt;
&lt;br /&gt;
=== Relevant Links ===&lt;br /&gt;
Github Repo : [https://github.com/HenryChen34/expertiza/tree/beta]&lt;br /&gt;
&lt;br /&gt;
Pull request: [https://github.com/expertiza/expertiza/pull/2128]&lt;br /&gt;
&lt;br /&gt;
== Future Tasks ==&lt;br /&gt;
Because our team could not obtain the public/private key pair needed to access the reputation web service, we only got as far as the step just before sending the JSON to the reputation-algorithm web service. The following steps remain for future testing of the reputation system.&lt;br /&gt;
&lt;br /&gt;
# The current setup is based on assignment_1, which uses Lauw's algorithm to calculate the reputation score. A future testing team needs to set up a similar object set for assignment_2, which uses Hamer's algorithm.&lt;br /&gt;
# In send_post_request, there are references to specific assignments, such as 724, 735, and 756. They were put in to gather data for a paper published in 2015; they are no longer relevant and should be removed.&lt;br /&gt;
# The current test cases cover only round-1 reputation scoring, even though assignment_1 and assignment_2 are set up with two rounds of review. Because the reputation web service is inaccessible, creating round-2 objects is pointless without round-1 reputation scores. Future test cases should therefore stub the behavior of fulfilling assignment_questionnaire_1_2 and assignment_questionnaire_2_2 (the second-round questionnaires), assuming the reputation web service is available by then.&lt;br /&gt;
# The db_query method violates the DRY principle, as it repeatedly calculates sums over the assignment's scores; such sum calculations should be handled in assignment.rb.&lt;br /&gt;
&lt;br /&gt;
== Collaborators ==&lt;br /&gt;
Jinku Cui (jcui23)&lt;br /&gt;
&lt;br /&gt;
Henry Chen (hchen34)&lt;br /&gt;
&lt;br /&gt;
Dong Li (dli35)&lt;br /&gt;
&lt;br /&gt;
Zijun Lu (zlu5)&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://doi.org/10.1109/FIE.2015.7344292 Pluggable reputation systems for peer review: A web-service approach]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142379</id>
		<title>CSC/ECE 517 Fall 2021 - E2168. Testing - Reputations</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142379"/>
		<updated>2021-11-30T05:29:00Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* Coverage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;q&amp;gt;Online peer-review systems are now in common use in higher education. They free the instructor and course staff from having to provide personally all the feedback that students receive on their work. However, if we want to assure that all students receive competent feedback, or even use peer-assigned grades, we need a way to judge which peer reviewers are most credible. The solution is to use a reputation system.&amp;lt;/q&amp;gt; &amp;lt;br&amp;gt; The reputation system is meant to provide objective value to student assigned peer review scores. Students select from a list of tasks to be performed and then prepare their work and submit it to a peer-review system. The work is then reviewed by other students who offer comments/graded feedback to help the submitters improve their work.&lt;br /&gt;
During the peer-review period it is important to determine which reviews are more accurate and of higher quality. Reputation is one way to achieve this goal; it is a quantitative measure for judging which peer reviewers are more reliable.&lt;br /&gt;
Peer reviewers can use Expertiza to score an author's work. If Expertiza shows a confidence rating for grades based upon the reviewer's reputation, then authors can more easily determine the legitimacy of the peer-assigned score. In addition, the teaching staff can examine the quality of each peer review based on reputation values and, potentially, crowd-source a significant portion of the grading function.&lt;br /&gt;
Currently the reputation system is implemented in Expertiza through a web service, but no tests have been written for it. Our goal is therefore to set up assignments and reviews that would produce specific reputation scores, and to test that the correct reputations are in fact being produced.&lt;br /&gt;
&lt;br /&gt;
=== System Design ===&lt;br /&gt;
The diagram below, referenced from project E1625, gives an overall description of the reputation system.&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Diagram E1625.png|1000px]]&amp;lt;br&amp;gt;&lt;br /&gt;
There are two algorithms intended for use in calculation of the reputation values for participants. &lt;br /&gt;
&lt;br /&gt;
There is a [https://expertiza.ncsu.edu/reputation_web_service/client web service] (the link is accessible only to instructors) which serves a JSON response containing the reputation value, seeded with the last known reputation value that we store in the ''participants'' table. An instructor can specify which algorithm to use for a particular assignment to calculate the confidence rating.&lt;br /&gt;
&lt;br /&gt;
As the [https://docs.google.com/viewer?url=https%3A%2F%2Fwww.researchgate.net%2Fprofile%2FYang_Song36%2Fpublication%2F289528736_Pluggable_Reputation_Systems_for_Peer_Review_a_Web-Service_Approach%2Flinks%2F568ec8d008ae78cc05160aed.pdf%3FinViewer%3D0%26pdfJsDownload%3D0%26origin%3Dpublication_detail paper] on the reputation system observes, “the Hamer-peer algorithm has the lowest maximum absolute bias and the Lauw-peer algorithm has the lowest overall bias. This indicates, from the instructor’s perspective, if there are further assignments of this kind, expert grading may not be necessary.”&lt;br /&gt;
:;Reputation range of Hamer’s algorithm:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value &amp;lt; 0.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.5 and &amp;lt;= 1&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 1 and &amp;lt;= 1.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 1.5 and &amp;lt;= 2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The main difference between the Hamer-peer and the Lauw-peer algorithm is that the Lauw-peer algorithm keeps track of the reviewer's leniency (“bias”), which can be either positive or negative. A positive leniency indicates the reviewer tends to give higher scores than average.  This project determines reputation by subtracting the absolute value of the leniency from 1. Additionally, the range for Hamer’s algorithm is (0,∞) while for Lauw’s algorithm it is [0,1]. &lt;br /&gt;
:;Reputation range of Lauw’s algorithm:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.2 and &amp;lt;= 0.4&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 0.4 and &amp;lt;= 0.6&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 0.6 and &amp;lt;= 0.8&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 0.8&lt;br /&gt;
&lt;br /&gt;
The instructor can choose to show results from Hamer’s algorithm or Lauw’s algorithm. The default algorithm should be Lauw’s algorithm.&lt;br /&gt;
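&lt;br /&gt;
The boundaries in the two range tables are easy to misread, so here is a small Ruby sketch that maps a reputation value to its display color band. The helper names are our own illustration of the tables above, not Expertiza code:&lt;br /&gt;

```ruby
# Map a reputation value to the color band described in the tables above.
# Lower bounds are inclusive where the table says ">=", exclusive where it says ">".
def lauw_color(value)
  return 'green'       if value > 0.8
  return 'light green' if value > 0.6
  return 'orange'      if value > 0.4
  return 'yellow'      if value >= 0.2
  'red'
end

def hamer_color(value)
  return 'green'       if value > 2.0
  return 'light green' if value > 1.5
  return 'orange'      if value > 1.0
  return 'yellow'      if value >= 0.5
  'red'
end

puts lauw_color(0.75)  # light green
puts hamer_color(2.3)  # green
```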
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
* Double and stub an assignment and a few submissions to it, under different review rubrics&lt;br /&gt;
* Manually calculate reputation scores based on the paper &amp;quot;Pluggable reputation systems for peer review: A web-service approach&amp;quot;&lt;br /&gt;
* Validate the reputation scores generated by the reputation system against manual computation using Hamer's and Lauw's algorithms, under different review rubrics.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
*reputation_web_service_controller_spec&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Setup Testing Objects ===&lt;br /&gt;
To implement testing on reputation, it is crucial to create sample reviews from which reputation scores can be obtained. During the kickoff meeting, our team defined four necessary steps to follow for the purpose of testing; the appropriate objects to create are discussed below.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Each of these objects contributes to the success of the reputation tests that follow. Some fields in each object can be empty or keep their default values, and some attributes are not relevant to the test. When implementing the tests, the test scripts need to generate or set fixed values for the relevant fields.&lt;br /&gt;
&lt;br /&gt;
==== Assignment ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[assignments]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
submitter_count = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviewers = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews_allowed = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
rounds_of_reviews = 2;&amp;lt;br&amp;gt;&lt;br /&gt;
reputation_algorithm = lauw/hamer;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: Two assignment objects were created. Assignment_1 used Lauw's algorithm, whereas assignment_2 used Hamer's algorithm.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @assignment_1 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'lauw', id: 1)&lt;br /&gt;
  @assignment_2 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'hamer', id: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questionnaires(Rubrics) ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[assignment_questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
min_question_score = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
max_question_score = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
type = ReviewQuestionnaire;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: We will define the assignment with ReviewQuestionnaire type rubric. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @questionnaire_1 = create(:questionnaire, min_question_score: 0, max_question_score: 5, type: 'ReviewQuestionnaire', id: 1)&lt;br /&gt;
  # assignment_questionnaire_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means assignment #i's #j-th round of review.&lt;br /&gt;
  @assignment_questionnaire_1_1 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_1_2 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
  @assignment_questionnaire_2_1 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_2_2 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questions under Questionnaires ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questions]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # question_i_j means question #j in questionnaire #i.&lt;br /&gt;
  @question_1_1 = create(:question, questionnaire_id: @questionnaire_1.id, id: 1)&lt;br /&gt;
  @question_1_2 = create(:question, questionnaire_id: @questionnaire_1.id, id: 2)&lt;br /&gt;
  @question_1_3 = create(:question, questionnaire_id: @questionnaire_1.id, id: 3)&lt;br /&gt;
  @question_1_4 = create(:question, questionnaire_id: @questionnaire_1.id, id: 4)&lt;br /&gt;
  @question_1_5 = create(:question, questionnaire_id: @questionnaire_1.id, id: 5)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Reviewers and Reviewees ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[participants]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[teams]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Reviewers (Participant):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewer_1 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_2 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_3 = create(:participant, can_review: 1)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Reviewees (Teams):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewee_1 = create(:assignment_team, assignment: @assignment_1)&lt;br /&gt;
  @reviewee_2 = create(:assignment_team, assignment: @assignment_1)&lt;br /&gt;
  @reviewee_3 = create(:assignment_team, assignment: @assignment_1)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Responses ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[response_maps]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[responses]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
reviewed_object_id = &amp;lt;target_assignment&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewer_id = &amp;lt;target_reviewer&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewee_id = &amp;lt;target_reviewee&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
Note: The response map is set up to determine the relationship between the reviewer and reviewee of an assignment.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Response_maps:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_map_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_map_1_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_2_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_3_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Responses:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_1_1 = create(:response, is_submitted: true, map_id: @response_map_1_1.id)&lt;br /&gt;
  @response_1_2 = create(:response, is_submitted: true, map_id: @response_map_1_2.id)&lt;br /&gt;
  @response_1_3 = create(:response, is_submitted: true, map_id: @response_map_1_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_2_1 = create(:response, is_submitted: true, map_id: @response_map_2_1.id)&lt;br /&gt;
  @response_2_2 = create(:response, is_submitted: true, map_id: @response_map_2_2.id)&lt;br /&gt;
  @response_2_3 = create(:response, is_submitted: true, map_id: @response_map_2_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_3_1 = create(:response, is_submitted: true, map_id: @response_map_3_1.id)&lt;br /&gt;
  @response_3_2 = create(:response, is_submitted: true, map_id: @response_map_3_2.id)&lt;br /&gt;
  @response_3_3 = create(:response, is_submitted: true, map_id: @response_map_3_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Relevant Methods ===&lt;br /&gt;
==== db_query ====&lt;br /&gt;
This is the normal db query method; calling it returns peer-review grades for the given assignment id.&lt;br /&gt;
We will test this method in two respects:&lt;br /&gt;
1. whether the grade returned is correct for the specified algorithm;&lt;br /&gt;
2. whether the query itself is correct.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test db_query' do&lt;br /&gt;
    it 'return average score' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [1, 2, 3, 4, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 2)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 4)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      result = ReputationWebServiceController.new.db_query(1, 1, false)&lt;br /&gt;
      # expect a data array generated from the scores given&lt;br /&gt;
      expect(result).to eq([[2, 1, 60.0]])&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
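&lt;br /&gt;
The expected grade of 60.0 follows from the rubric arithmetic: five answers of 1, 2, 3, 4, 5 sum to 15 out of a maximum of 5 × 5 = 25, i.e. 60.0%. A minimal sketch of that calculation (the method name is ours, not the controller's):&lt;br /&gt;

```ruby
# Percentage grade for one review: the sum of the answers over the
# maximum attainable score (number of questions times max_question_score).
def review_percentage(answers, max_question_score)
  total = answers.sum.to_f
  maximum = answers.length * max_question_score
  total / maximum * 100
end

puts review_percentage([1, 2, 3, 4, 5], 5)  # 60.0
```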
&lt;br /&gt;
==== json_generator ====&lt;br /&gt;
This method generates a hash representation of the reviews. We test it by calling it, converting the result to JSON, and checking the result for correctness.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test json_generator' do&lt;br /&gt;
    it 'test 3 reviewer for one reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_2's review for reviewee_1: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_3's review for reviewee_1: [1, 1, 1, 1, 1]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0, &amp;quot;stu3&amp;quot;=&amp;gt;60.0, &amp;quot;stu4&amp;quot;=&amp;gt;20.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    it 'test same reviewer for different reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_1's review for reviewee_2: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0}, &amp;quot;submission2&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;60.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
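&lt;br /&gt;
The nested hash the first test expects (submission to reviewer to percentage grade) can be reproduced by hand from the three answer vectors: all 5s give 100.0, all 3s give 60.0, and all 1s give 20.0. A sketch of building that structure and serializing it to JSON follows; the builder and its row format are our own illustration, with stu2 to stu4 being the fixture users:&lt;br /&gt;

```ruby
require 'json'

# Build the nested {"submission#" => {"stu#" => grade}} hash from
# [reviewee_id, reviewer_id, grade] rows.
def build_review_hash(rows)
  rows.each_with_object({}) do |(reviewee_id, reviewer_id, grade), acc|
    (acc["submission#{reviewee_id}"] ||= {})["stu#{reviewer_id}"] = grade
  end
end

# Three reviewers (stu2..stu4) reviewing one submission, as in the first test.
rows = [[1, 2, 100.0], [1, 3, 60.0], [1, 4, 20.0]]
puts build_review_hash(rows).to_json
# {"submission1":{"stu2":100.0,"stu3":60.0,"stu4":20.0}}
```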
&lt;br /&gt;
==== send_post_request ==== &lt;br /&gt;
This method sends a POST request to peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms to calculate the reputation result, shows the result in the corresponding UI, and updates the given reviewer's reputation. We will test this method against the algorithms in the paper: first the returned reputation value, then the updated value in the database.&lt;br /&gt;
&lt;br /&gt;
Note that this method is not functioning because the public and private key files for RSA encryption are missing. The algorithm involving &amp;quot;expert grade&amp;quot; was also never implemented. &lt;br /&gt;
Thus this method could not be properly tested; however, we have provided a template for creating tests in the future.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test send_post_request' do&lt;br /&gt;
    it 'failed because of no public key file' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # choose hammer algorithm without expert grade (instructor's given grade)&lt;br /&gt;
      params = {assignment_id: 1, round_num: 1, algorithm: 'hammer', checkbox: {expert_grade: &amp;quot;empty&amp;quot;}}&lt;br /&gt;
      session = {user: build(:instructor, id: 1)}&lt;br /&gt;
&lt;br /&gt;
      expect(true).to eq(true)&lt;br /&gt;
&lt;br /&gt;
      # commented out because send_post_request requires the public key file, which is missing,&lt;br /&gt;
      # so send_post_request is not currently functioning.&lt;br /&gt;
      # if it functioned correctly, it would update the reviewer's reputation score according to the selected reputation algorithm.&lt;br /&gt;
&lt;br /&gt;
      # get :send_post_request, params, session&lt;br /&gt;
      # expect(response).to redirect_to '/reputation_web_service/client'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
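For reference, the shape of such a request can be sketched in plain Ruby. This is a hedged illustration, not the controller code: the URL comes from the description above, the body mirrors the json_generator output, and the request is built but deliberately not sent, since the RSA key files required by the real service are missing.&lt;br /&gt;

```ruby
require 'net/http'
require 'json'
require 'uri'

# Hypothetical sketch: build (but do not send) a POST request to the
# reputation web service named in the text above.
uri = URI('https://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms')
request = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json')
# Body shape mirrors the json_generator output: submission => reviewer => grade.
request.body = JSON.generate('submission1' => { 'stu2' => 100.0 })
```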
&lt;br /&gt;
== Results ==&lt;br /&gt;
=== Testing Results ===&lt;br /&gt;
&lt;br /&gt;
=== Coverage ===&lt;br /&gt;
The test coverage increases from '''0%''' to '''52.77%'''.&lt;br /&gt;
&lt;br /&gt;
No tests existed for &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; prior to our work; therefore, the previous test coverage was '''0%'''.&lt;br /&gt;
&lt;br /&gt;
From the coverage report generated by the &amp;lt;code&amp;gt;Simplecov&amp;lt;/code&amp;gt; gem, '''127''' lines are covered by our tests. The remaining '''87''' lines of code are related to the public key file and to deprecated functions used to gather data for a paper [https://doi.org/10.1109/FIE.2015.7344292] published in 2015.&lt;br /&gt;
&lt;br /&gt;
The method &amp;lt;code&amp;gt;send_post_request&amp;lt;/code&amp;gt; inside &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; should be removed. The Fall 2020 team [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2020_-_E2064._Refactor_reputation_web_service_controller.rb#Issues_to_be_fixed] also mentioned these issues.&lt;br /&gt;
&lt;br /&gt;
If the redundant method &amp;lt;code&amp;gt;send_post_request&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; is commented out, test coverage reaches '''87.22%'''.&lt;br /&gt;
&lt;br /&gt;
=== Relevant Links ===&lt;br /&gt;
Github Repo : [https://github.com/HenryChen34/expertiza/tree/beta]&lt;br /&gt;
&lt;br /&gt;
Pull request: [https://github.com/expertiza/expertiza/pull/2128]&lt;br /&gt;
&lt;br /&gt;
== Future Tasks ==&lt;br /&gt;
As our team could not obtain the public/private key pair needed to access the reputation web service, we were only able to get to the step just before sending the JSON to the web service. Therefore, future work is required to fully test the reputation system.&lt;br /&gt;
&lt;br /&gt;
# The current setup is based on assignment_1, which uses Lauw's algorithm to calculate reputation scores. A future testing team needs to set up a similar object set for assignment_2, which uses Hamer's algorithm.&lt;br /&gt;
# In send_post_request, there are references to specific assignments, such as 724, 735, and 756. They were added to gather data for a paper published in 2015; they are no longer relevant and should be removed. (This should be replaced by the comment the professor mentioned in the meeting.)&lt;br /&gt;
# Current test cases cover only round-1 reputation scoring, even though assignment_1 and assignment_2 are configured as two-round review assignments. Because the reputation web service is inaccessible, creating round-2 objects is pointless in the absence of round-1 reputation scores. Therefore, future test cases need to stub the behavior of fulfilling assignment_questionnaire_1_2 and assignment_questionnaire_2_2 (the second-round questionnaires), assuming the reputation web service is available by then.&lt;br /&gt;
# The db_query method violates the DRY principle, as it repeatedly calculates sums for the assignment. Such sum calculations should be handled in assignment.rb.&lt;br /&gt;
&lt;br /&gt;
== Collaborators ==&lt;br /&gt;
Jinku Cui (jcui23)&lt;br /&gt;
&lt;br /&gt;
Henry Chen (hchen34)&lt;br /&gt;
&lt;br /&gt;
Dong Li (dli35)&lt;br /&gt;
&lt;br /&gt;
Zijun Lu (zlu5)&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://doi.org/10.1109/FIE.2015.7344292 Pluggable reputation systems for peer review: A web-service approach]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142354</id>
		<title>CSC/ECE 517 Fall 2021 - E2168. Testing - Reputations</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142354"/>
		<updated>2021-11-30T04:57:19Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* Coverage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;q&amp;gt;Online peer-review systems are now in common use in higher education. They free the instructor and course staff from having to provide personally all the feedback that students receive on their work. However, if we want to assure that all students receive competent feedback, or even use peer-assigned grades, we need a way to judge which peer reviewers are most credible. The solution is to use a reputation system.&amp;lt;/q&amp;gt; &amp;lt;br&amp;gt; The reputation system is meant to provide an objective value for student-assigned peer-review scores. Students select from a list of tasks to be performed and then prepare their work and submit it to a peer-review system. The work is then reviewed by other students who offer comments/graded feedback to help the submitters improve their work.&lt;br /&gt;
During the peer review period it is important to determine which reviews are more accurate and show higher quality. Reputation is one way to achieve this goal; it is a quantization measurement to judge which peer reviewers are more reliable.&lt;br /&gt;
Peer reviewers can use Expertiza to score an author. If Expertiza shows a confidence rating for grades based upon the reviewer's reputation, then authors can more easily determine the legitimacy of the peer-assigned score. In addition, the teaching staff can examine the quality of each peer review based on reputation values and, potentially, crowd-source a significant portion of the grading function.&lt;br /&gt;
Currently the reputation system is implemented in Expertiza through a web service, but no tests have been written for it. Thus our goal is to set up assignments and reviews that would produce specific reputation scores, and to test that the correct reputations are in fact being produced.&lt;br /&gt;
&lt;br /&gt;
=== System Design ===&lt;br /&gt;
The diagram below is referenced from project E1625 and gives an overall description of the reputation system.&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Diagram E1625.png|1000px]]&amp;lt;br&amp;gt;&lt;br /&gt;
There are two algorithms intended for use in calculation of the reputation values for participants. &lt;br /&gt;
&lt;br /&gt;
There is a [https://expertiza.ncsu.edu/reputation_web_service/client web service] (the link is accessible only to instructors) which serves a JSON response containing the reputation value, seeded with the last known reputation value stored in the ''participants'' table. An instructor can specify which algorithm to use for a particular assignment to calculate the confidence rating.&lt;br /&gt;
&lt;br /&gt;
As the [https://docs.google.com/viewer?url=https%3A%2F%2Fwww.researchgate.net%2Fprofile%2FYang_Song36%2Fpublication%2F289528736_Pluggable_Reputation_Systems_for_Peer_Review_a_Web-Service_Approach%2Flinks%2F568ec8d008ae78cc05160aed.pdf%3FinViewer%3D0%26pdfJsDownload%3D0%26origin%3Dpublication_detail paper] on reputation systems observes, “the Hamer-peer algorithm has the lowest maximum absolute bias and the Lauw-peer algorithm has the lowest overall bias. This indicates, from the instructor’s perspective, if there are further assignments of this kind, expert grading may not be necessary.”&lt;br /&gt;
:;Reputation range of Hamer’s algorithm:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.5 and &amp;lt;= 1&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 1 and &amp;lt;= 1.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 1.5 and &amp;lt;= 2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The main difference between the Hamer-peer and the Lauw-peer algorithm is that the Lauw-peer algorithm keeps track of the reviewer's leniency (“bias”), which can be either positive or negative. A positive leniency indicates the reviewer tends to give higher scores than average.  This project determines reputation by subtracting the absolute value of the leniency from 1. Additionally, the range for Hamer’s algorithm is (0,∞) while for Lauw’s algorithm it is [0,1]. &lt;br /&gt;
:;Reputation range of Lauw’s algorithm:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.2 and &amp;lt;= 0.4&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 0.4 and &amp;lt;= 0.6&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 0.6 and &amp;lt;= 0.8&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 0.8&lt;br /&gt;
&lt;br /&gt;
The instructor can choose to show results from Hamer’s algorithm or Lauw’s algorithm. The default algorithm should be Lauw’s algorithm.&lt;br /&gt;
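The band boundaries above can be summarized in a small plain-Ruby sketch. This is an illustration only, not Expertiza code: the method names and symbols are hypothetical, and the thresholds are taken directly from the two ranges listed above.&lt;br /&gt;

```ruby
# Map a reputation value to the color band described above.
# Hamer's range is (0, infinity); Lauw's range is [0, 1].
def reputation_color(algorithm, value)
  bounds = algorithm == :hamer ? [0.5, 1.0, 1.5, 2.0] : [0.2, 0.4, 0.6, 0.8]
  return 'red'         if value < bounds[0]
  return 'yellow'      if value <= bounds[1]
  return 'orange'      if value <= bounds[2]
  return 'light green' if value <= bounds[3]
  'green'
end

# Lauw's algorithm tracks the reviewer's leniency (bias); this project
# derives the reputation as 1 minus the absolute value of the leniency.
def lauw_reputation(leniency)
  1.0 - leniency.abs
end
```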
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
* Double and stub an assignment, a few submissions to the assignment, under different review rubrics&lt;br /&gt;
* Manually calculate reputation scores based on paper &amp;quot;Pluggable reputation systems for peer review: A web-service approach&amp;quot;&lt;br /&gt;
* Validate correct reputation scores based on different review rubrics generated by reputation management VS manual computation of reputation score using the Hamer's and Lauw's algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
*reputation_web_service_controller_spec&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Setup Testing Objects ===&lt;br /&gt;
In order to implement testing on reputation, it is crucial to create sample reviews so that we can obtain reputation scores. During the kickoff meeting, our team defined four necessary steps to follow for testing. Appropriate objects can be created and configured as discussed below.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The creation of each object contributes to the success of the subsequent reputation tests. Some fields in each object can be empty or have default values, and some attributes are not relevant to the tests. When implementing the tests, the test scripts need to generate or set fixed values for the corresponding fields.&lt;br /&gt;
&lt;br /&gt;
==== Assignment ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[assignments]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
submitter_count = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviewers = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews_allowed = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
rounds_of_reviews = 2;&amp;lt;br&amp;gt;&lt;br /&gt;
reputation_algorithm = lauw/hamer;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: Two assignment objects were created. Assignment_1 used Lauw's algorithm, whereas assignment_2 used Hamer's algorithm.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @assignment_1 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'lauw', id: 1)&lt;br /&gt;
  @assignment_2 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'hamer', id: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questionnaires(Rubrics) ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[assignment_questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
min_question_score = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
max_question_score = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
type = ReviewQuestionnaire;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: We will define the assignment with ReviewQuestionnaire type rubric. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @questionnaire_1 = create(:questionnaire, min_question_score: 0, max_question_score: 5, type: 'ReviewQuestionnaire', id: 1)&lt;br /&gt;
  # assignment_questionnaire_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means assignment #i's round #j review questionnaire.&lt;br /&gt;
  @assignment_questionnaire_1_1 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_1_2 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
  @assignment_questionnaire_2_1 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_2_2 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questions under Questionnaires ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questions]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # question_i_j means question #j in questionnaire #i.&lt;br /&gt;
  @question_1_1 = create(:question, questionnaire_id: @questionnaire_1.id, id: 1)&lt;br /&gt;
  @question_1_2 = create(:question, questionnaire_id: @questionnaire_1.id, id: 2)&lt;br /&gt;
  @question_1_3 = create(:question, questionnaire_id: @questionnaire_1.id, id: 3)&lt;br /&gt;
  @question_1_4 = create(:question, questionnaire_id: @questionnaire_1.id, id: 4)&lt;br /&gt;
  @question_1_5 = create(:question, questionnaire_id: @questionnaire_1.id, id: 5)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Reviewers and Reviewees ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[participants]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[teams]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Reviewers (Participant):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewer_1 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_2 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_3 = create(:participant, can_review: 1)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Reviewees (Teams):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewee_1 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_2 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_3 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Responses ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[response_maps]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[responses]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
reviewed_object_id = &amp;lt;target_assignment&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewer_id = &amp;lt;target_reviewer&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewee_id = &amp;lt;target_reviewee&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
Note: The response map is set up to establish the relationship between the reviewer and reviewee of an assignment.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Response_maps:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_map_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_map_1_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_2_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_3_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Responses:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_1_1 = create(:response, is_submitted: true, map_id: @response_map_1_1.id)&lt;br /&gt;
  @response_1_2 = create(:response, is_submitted: true, map_id: @response_map_1_2.id)&lt;br /&gt;
  @response_1_3 = create(:response, is_submitted: true, map_id: @response_map_1_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_2_1 = create(:response, is_submitted: true, map_id: @response_map_2_1.id)&lt;br /&gt;
  @response_2_2 = create(:response, is_submitted: true, map_id: @response_map_2_2.id)&lt;br /&gt;
  @response_2_3 = create(:response, is_submitted: true, map_id: @response_map_2_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_3_1 = create(:response, is_submitted: true, map_id: @response_map_3_1.id)&lt;br /&gt;
  @response_3_2 = create(:response, is_submitted: true, map_id: @response_map_3_2.id)&lt;br /&gt;
  @response_3_3 = create(:response, is_submitted: true, map_id: @response_map_3_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Relevant Methods ===&lt;br /&gt;
==== db_query ====&lt;br /&gt;
This is the normal DB query method; calling it returns peer-review grades for the given assignment id.&lt;br /&gt;
We will test this method in two aspects:&lt;br /&gt;
1. whether the grade returned is correct based on the specified algorithm;&lt;br /&gt;
2. the correctness of the query itself.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test db_query' do&lt;br /&gt;
    it 'return average score' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [1, 2, 3, 4, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 2)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 4)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      result = ReputationWebServiceController.new.db_query(1, 1, false)&lt;br /&gt;
      #expect to see a data array return generated by the score given.&lt;br /&gt;
      expect(result).to eq([[2, 1, 60.0]])&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
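The expected value 60.0 above can be reproduced by hand. The following is a hedged sketch of the arithmetic only (the real db_query runs a database query): five answers on a 0–5 rubric are summed and scaled to a percentage of the maximum possible score.&lt;br /&gt;

```ruby
# Reproduce the 60.0 grade from the test above by hand.
answers = [1, 2, 3, 4, 5]  # reviewer_1's answers to the five questions
max_question_score = 5     # from the questionnaire setup
grade = answers.sum * 100.0 / (answers.size * max_question_score)
puts grade                 # 15 * 100.0 / 25 => 60.0
```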
&lt;br /&gt;
==== json_generator ====&lt;br /&gt;
This method generates a hash representation of the reviews. We test it by calling the method, converting the result to JSON, and checking it against the expected output.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test json_generator' do&lt;br /&gt;
    it 'test 3 reviewer for one reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_2's review for reviewee_1: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_3's review for reviewee_1: [1, 1, 1, 1, 1]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0, &amp;quot;stu3&amp;quot;=&amp;gt;60.0, &amp;quot;stu4&amp;quot;=&amp;gt;20.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    it 'test same reviewer for different reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_1's review for reviewee_2: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0}, &amp;quot;submission2&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;60.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
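The expected hash shape can be illustrated in plain Ruby. This is a hedged sketch (the real json_generator queries the database); the scores are hard-coded from the first example above, and each submission maps reviewer labels to percentage grades.&lt;br /&gt;

```ruby
require 'json'

# Hash shape expected from json_generator: submission => reviewer => grade.
reviews = {
  'submission1' => { 'stu2' => 100.0, 'stu3' => 60.0, 'stu4' => 20.0 }
}
# send_post_request would serialize this hash to JSON before posting it.
payload = JSON.generate(reviews)
```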
&lt;br /&gt;
==== send_post_request ==== &lt;br /&gt;
This method sends a POST request to peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms to calculate the reputation result, displays the result in the corresponding UI, and updates the given reviewer's reputation. We will test this method against the algorithm in the paper: first the resulting reputation value, then the updated value in the database.&lt;br /&gt;
&lt;br /&gt;
Note that this method is not functioning due to the missing public &amp;amp; private key files for RSA encryption. The algorithm involving &amp;quot;expert grade&amp;quot; was also not implemented. &lt;br /&gt;
Thus, this method could not be properly tested. However, we have provided a template for creating tests for future use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test send_post_request' do&lt;br /&gt;
    it 'failed because of no public key file' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # choose the 'hammer' algorithm without an expert grade (instructor's given grade)&lt;br /&gt;
      params = {assignment_id: 1, round_num: 1, algorithm: 'hammer', checkbox: {expert_grade: &amp;quot;empty&amp;quot;}}&lt;br /&gt;
      session = {user: build(:instructor, id: 1)}&lt;br /&gt;
&lt;br /&gt;
      expect(true).to eq(true)&lt;br /&gt;
&lt;br /&gt;
      # commented out because send_post_request requires the public key file, which is missing,&lt;br /&gt;
      # so send_post_request is not functioning normally at this time.&lt;br /&gt;
      # If it functioned correctly, it would update the reviewer's reputation score according to the selected reputation algorithm.&lt;br /&gt;
&lt;br /&gt;
      # get :send_post_request, params, session&lt;br /&gt;
      # expect(response).to redirect_to '/reputation_web_service/client'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
=== Testing Results ===&lt;br /&gt;
&lt;br /&gt;
=== Coverage ===&lt;br /&gt;
The test coverage increases from '''0%''' to '''52.77%'''.&lt;br /&gt;
&lt;br /&gt;
No tests existed for &amp;lt;code&amp;gt;reputation_web_service_controller.rb&amp;lt;/code&amp;gt; prior to our work; therefore, the previous test coverage was '''0%'''.&lt;br /&gt;
&lt;br /&gt;
From the coverage report generated by the &amp;lt;code&amp;gt;Simplecov&amp;lt;/code&amp;gt; gem, '''127''' lines are covered by our tests. The remaining '''87''' lines of code are related to the public key file and to deprecated functions used to gather data for a paper published in 2015.&lt;br /&gt;
&lt;br /&gt;
=== Relevant Links ===&lt;br /&gt;
Github Repo : [https://github.com/HenryChen34/expertiza/tree/beta]&lt;br /&gt;
&lt;br /&gt;
Pull request: [https://github.com/expertiza/expertiza/pull/2128]&lt;br /&gt;
&lt;br /&gt;
== Future Tasks ==&lt;br /&gt;
As our team could not obtain the public/private key pair needed to access the reputation web service, we were only able to get to the step just before sending the JSON to the web service. Therefore, future work is required to fully test the reputation system.&lt;br /&gt;
&lt;br /&gt;
# The current setup is based on assignment_1, which uses Lauw's algorithm to calculate reputation scores. A future testing team needs to set up a similar object set for assignment_2, which uses Hamer's algorithm.&lt;br /&gt;
# In send_post_request, there are references to specific assignments, such as 724, 735, and 756. They were added to gather data for a paper published in 2015; they are no longer relevant and should be removed. (This should be replaced by the comment the professor mentioned in the meeting.)&lt;br /&gt;
# Current test cases cover only round-1 reputation scoring, even though assignment_1 and assignment_2 are configured as two-round review assignments. Because the reputation web service is inaccessible, creating round-2 objects is pointless in the absence of round-1 reputation scores. Therefore, future test cases need to stub the behavior of fulfilling assignment_questionnaire_1_2 and assignment_questionnaire_2_2 (the second-round questionnaires), assuming the reputation web service is available by then.&lt;br /&gt;
# The db_query method violates the DRY principle, as it repeatedly calculates sums for the assignment. Such sum calculations should be handled in assignment.rb.&lt;br /&gt;
&lt;br /&gt;
== Collaborators ==&lt;br /&gt;
Jinku Cui (jcui23)&lt;br /&gt;
&lt;br /&gt;
Henry Chen (hchen34)&lt;br /&gt;
&lt;br /&gt;
Dong Li (dli35)&lt;br /&gt;
&lt;br /&gt;
Zijun Lu (zlu5)&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://doi.org/10.1109/FIE.2015.7344292 Pluggable reputation systems for peer review: A web-service approach]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142341</id>
		<title>CSC/ECE 517 Fall 2021 - E2168. Testing - Reputations</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142341"/>
		<updated>2021-11-30T04:41:59Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* Relevant Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;q&amp;gt;Online peer-review systems are now in common use in higher education. They free the instructor and course staff from having to provide personally all the feedback that students receive on their work. However, if we want to assure that all students receive competent feedback, or even use peer-assigned grades, we need a way to judge which peer reviewers are most credible. The solution is to use a reputation system.&amp;lt;/q&amp;gt; &amp;lt;br&amp;gt; The reputation system is meant to provide an objective value for student-assigned peer-review scores. Students select from a list of tasks to be performed and then prepare their work and submit it to a peer-review system. The work is then reviewed by other students who offer comments/graded feedback to help the submitters improve their work.&lt;br /&gt;
During the peer-review period it is important to determine which reviews are more accurate and of higher quality. Reputation is one way to achieve this goal: it is a quantitative measure of how reliable each peer reviewer is.&lt;br /&gt;
Peer reviewers use Expertiza to score an author's work. If Expertiza shows a confidence rating for each grade based on the reviewer's reputation, authors can more easily judge the legitimacy of a peer-assigned score. In addition, the teaching staff can examine the quality of each peer review based on reputation values and, potentially, crowd-source a significant portion of the grading function.&lt;br /&gt;
The reputation system is currently implemented in Expertiza through a web service, but no tests have been written for it. Our goal is therefore to set up assignments and reviews that would produce specific reputation scores, and to test that the correct reputations are in fact being produced.&lt;br /&gt;
&lt;br /&gt;
=== System Design ===&lt;br /&gt;
The diagram below is taken from project E1625 and gives an overall description of the reputation system.&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Diagram E1625.png|1000px]]&amp;lt;br&amp;gt;&lt;br /&gt;
Two algorithms are intended for use in calculating the reputation values for participants. &lt;br /&gt;
&lt;br /&gt;
There is a [https://expertiza.ncsu.edu/reputation_web_service/client web-service] (accessible only to instructors) that serves a JSON response containing the reputation value, seeded with the last known reputation value stored in the ''participants'' table. An instructor can specify which algorithm to use for a particular assignment to calculate the confidence rating. &lt;br /&gt;
&lt;br /&gt;
As the [https://docs.google.com/viewer?url=https%3A%2F%2Fwww.researchgate.net%2Fprofile%2FYang_Song36%2Fpublication%2F289528736_Pluggable_Reputation_Systems_for_Peer_Review_a_Web-Service_Approach%2Flinks%2F568ec8d008ae78cc05160aed.pdf%3FinViewer%3D0%26pdfJsDownload%3D0%26origin%3Dpublication_detail paper] on reputation systems observes, “the Hamer-peer algorithm has the lowest maximum absolute bias and the Lauw-peer algorithm has the lowest overall bias. This indicates, from the instructor’s perspective, if there are further assignments of this kind, expert grading may not be necessary.”&lt;br /&gt;
:;Reputation range of Hamer’s algorithm:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.5 and &amp;lt;= 1&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 1 and &amp;lt;= 1.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 1.5 and &amp;lt;= 2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The main difference between the Hamer-peer and the Lauw-peer algorithm is that the Lauw-peer algorithm keeps track of the reviewer's leniency (“bias”), which can be either positive or negative. A positive leniency indicates the reviewer tends to give higher scores than average.  This project determines reputation by subtracting the absolute value of the leniency from 1. Additionally, the range for Hamer’s algorithm is (0,∞) while for Lauw’s algorithm it is [0,1]. &lt;br /&gt;
:;Reputation range of Lauw’s algorithm:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.2 and &amp;lt;= 0.4&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 0.4 and &amp;lt;= 0.6&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 0.6 and &amp;lt;= 0.8&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 0.8&lt;br /&gt;
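The color bands in the two lists above can be expressed as a small lookup helper. This is an illustrative sketch only: the band edges come directly from the lists, but `reputation_color` is a hypothetical method name, not part of the Expertiza codebase.

```ruby
# Map a reputation value to the display color bands listed above.
# The first band is strict (value < edge); the rest are inclusive.
def reputation_color(value, algorithm: :lauw)
  edges = algorithm == :hamer ? [0.5, 1.0, 1.5, 2.0] : [0.2, 0.4, 0.6, 0.8]
  return 'red'         if value < edges[0]
  return 'yellow'      if value <= edges[1]
  return 'orange'      if value <= edges[2]
  return 'light green' if value <= edges[3]
  'green'
end
```

For example, a Lauw value of 0.7 falls in the light-green band, while a Hamer value of exactly 0.5 is yellow.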
&lt;br /&gt;
The instructor can choose to show results from Hamer’s algorithm or Lauw’s algorithm. The default algorithm should be Lauw’s algorithm.&lt;br /&gt;
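To make the leniency idea concrete, here is a minimal, self-contained sketch of a Lauw-style computation, assuming scores normalized to [0, 1]. The method name `lauw_reputations` and the fixed-point iteration scheme are simplifications for illustration; the actual web service implements the published algorithm and may differ in detail.

```ruby
# Minimal sketch of a Lauw-style reputation computation (illustrative only).
# reviews maps reviewer => { submission => score }, with scores in [0, 1].
# Leniency is a reviewer's average signed deviation from the consensus
# grade; reputation = 1 - |leniency|, so it always lands in [0, 1].
def lauw_reputations(reviews, iterations: 10)
  leniency = Hash.new(0.0)
  iterations.times do
    # Consensus grade per submission: mean of leniency-corrected scores.
    raw = Hash.new { |h, k| h[k] = [] }
    reviews.each do |reviewer, scored|
      scored.each { |sub, s| raw[sub] << (s - leniency[reviewer]) }
    end
    consensus = raw.transform_values { |v| v.sum / v.size }
    # Re-estimate each reviewer's leniency against the new consensus.
    reviews.each do |reviewer, scored|
      devs = scored.map { |sub, s| s - consensus[sub] }
      leniency[reviewer] = devs.sum / devs.size
    end
  end
  reviews.keys.map { |r| [r, (1 - leniency[r].abs).clamp(0.0, 1.0)] }.to_h
end
```

With two reviewers who always agree, both come out at reputation 1.0; a reviewer who consistently scores 0.5 above the other ends up with leniency 0.25 and reputation 0.75.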
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
* Double and stub an assignment and several submissions to it, under different review rubrics&lt;br /&gt;
* Manually calculate reputation scores based on the paper &amp;quot;Pluggable reputation systems for peer review: A web-service approach&amp;quot;&lt;br /&gt;
* Validate that the reputation scores generated by the reputation service under different review rubrics match the manual computation using Hamer's and Lauw's algorithms.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
*reputation_web_service_controller_spec&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Setup Testing Objects ===&lt;br /&gt;
To test the reputation system, it is crucial to create sample reviews from which reputation scores can be obtained. During the kickoff meeting, our team defined four necessary steps to follow for testing, and the corresponding objects can be created and configured as discussed below.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Creating each of these objects contributes to the success of the reputation tests that follow. Some fields in each object can be empty or have default values, and some attributes are not relevant to the test. When implementing a test, the test scripts need to generate or set fixed values for the corresponding fields.&lt;br /&gt;
&lt;br /&gt;
==== Assignment ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[assignments]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
submitter_count = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviewers = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews_allowed = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
rounds_of_reviews = 2;&amp;lt;br&amp;gt;&lt;br /&gt;
reputation_algorithm = lauw/hamer;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: Two assignment objects were created. Assignment_1 used Lauw's algorithm, whereas assignment_2 used Hamer's algorithm.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @assignment_1 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'lauw', id: 1)&lt;br /&gt;
  @assignment_2 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'hamer', id: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questionnaires(Rubrics) ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[assignment_questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
min_question_score = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
max_question_score = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
type = ReviewQuestionnaire;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: We will define the assignment with a ReviewQuestionnaire-type rubric. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @questionnaire_1 = create(:questionnaire, min_question_score: 0, max_question_score: 5, type: 'ReviewQuestionnaire', id: 1)&lt;br /&gt;
  # assignment_questionnaire_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means the #j-th round of review for assignment #i.&lt;br /&gt;
  @assignment_questionnaire_1_1 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_1_2 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
  @assignment_questionnaire_2_1 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_2_2 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questions under Questionnaires ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questions]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # question_i_j means question #j in questionnaire #i.&lt;br /&gt;
  @question_1_1 = create(:question, questionnaire_id: @questionnaire_1.id, id: 1)&lt;br /&gt;
  @question_1_2 = create(:question, questionnaire_id: @questionnaire_1.id, id: 2)&lt;br /&gt;
  @question_1_3 = create(:question, questionnaire_id: @questionnaire_1.id, id: 3)&lt;br /&gt;
  @question_1_4 = create(:question, questionnaire_id: @questionnaire_1.id, id: 4)&lt;br /&gt;
  @question_1_5 = create(:question, questionnaire_id: @questionnaire_1.id, id: 5)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Reviewers and Reviewees ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[participants]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[teams]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Reviewers (Participant):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewer_1 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_2 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_3 = create(:participant, can_review: 1)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Reviewees (Teams):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewee_1 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_2 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_3 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Responses ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[response_maps]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[responses]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
reviewed_object_id = &amp;lt;target_assignment&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewer_id = &amp;lt;target_reviewer&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewee_id = &amp;lt;target_reviewee&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
Note: The response map is setup to determine the relationship between reviewer and reviewee of an assignment.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Response_maps:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_map_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_map_1_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_2_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_3_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Responses:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_1_1 = create(:response, is_submitted: true, map_id: @response_map_1_1.id)&lt;br /&gt;
  @response_1_2 = create(:response, is_submitted: true, map_id: @response_map_1_2.id)&lt;br /&gt;
  @response_1_3 = create(:response, is_submitted: true, map_id: @response_map_1_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_2_1 = create(:response, is_submitted: true, map_id: @response_map_2_1.id)&lt;br /&gt;
  @response_2_2 = create(:response, is_submitted: true, map_id: @response_map_2_2.id)&lt;br /&gt;
  @response_2_3 = create(:response, is_submitted: true, map_id: @response_map_2_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_3_1 = create(:response, is_submitted: true, map_id: @response_map_3_1.id)&lt;br /&gt;
  @response_3_2 = create(:response, is_submitted: true, map_id: @response_map_3_2.id)&lt;br /&gt;
  @response_3_3 = create(:response, is_submitted: true, map_id: @response_map_3_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Relevant Methods ===&lt;br /&gt;
==== db_query ====&lt;br /&gt;
This is the standard DB query method; calling it returns peer-review grades for a given assignment id.&lt;br /&gt;
We will test this method in two respects: &lt;br /&gt;
1. Whether the grades returned are correct according to the specified algorithm.&lt;br /&gt;
2. Whether the query itself is correct.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test db_query' do&lt;br /&gt;
    it 'return average score' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [1, 2, 3, 4, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 2)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 4)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      result = ReputationWebServiceController.new.db_query(1, 1, false)&lt;br /&gt;
      # expect to see a data array generated from the scores given&lt;br /&gt;
      expect(result).to eq([[2, 1, 60.0]])&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
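The expected grade of 60.0 in the test above can be verified by hand: the five answers 1 through 5 sum to 15 out of a possible 25 (five questions, max score 5 each), i.e. 60%. A quick sketch of that arithmetic:

```ruby
# Recompute the expected grade from the five answers used in the test.
answers = [1, 2, 3, 4, 5]
max_score = 5  # max_question_score from the questionnaire setup
grade = answers.sum.to_f / (answers.length * max_score) * 100
# grade is 60.0, matching the 60.0 in the expected query result
```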
&lt;br /&gt;
==== json_generator ====&lt;br /&gt;
This method generates a hash representation of the reviews. We test it by calling the method, converting the result to JSON, and checking the output for correctness.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test json_generator' do&lt;br /&gt;
    it 'test 3 reviewer for one reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_2's review for reviewee_1: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_3's review for reviewee_1: [1, 1, 1, 1, 1]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0, &amp;quot;stu3&amp;quot;=&amp;gt;60.0, &amp;quot;stu4&amp;quot;=&amp;gt;20.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    it 'test same reviewer for different reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_1's review for reviewee_2: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0}, &amp;quot;submission2&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;60.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== send_post_request ==== &lt;br /&gt;
This method sends a POST request to peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms to calculate the reputation result, shows the result in the corresponding UI, and updates the given reviewer's reputation. We will test this method against the algorithms in the paper: first the resulting reputation value, and second the updated value in the database.&lt;br /&gt;
&lt;br /&gt;
Note that this method is not functioning due to the missing public &amp;amp; private key files for RSA encryption. The algorithm involving an &amp;quot;expert grade&amp;quot; was also not implemented. &lt;br /&gt;
Thus, this method could not be properly tested. However, we have provided a template for creating tests for future use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test send_post_request' do&lt;br /&gt;
    it 'failed because of no public key file' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # choose the Hamer algorithm without an expert grade (instructor-given grade)&lt;br /&gt;
      params = {assignment_id: 1, round_num: 1, algorithm: 'hammer', checkbox: {expert_grade: &amp;quot;empty&amp;quot;}}&lt;br /&gt;
      session = {user: build(:instructor, id: 1)}&lt;br /&gt;
&lt;br /&gt;
      expect(true).to eq(true)&lt;br /&gt;
&lt;br /&gt;
      # comment out because send_post_request method request public key file while this file is missing&lt;br /&gt;
      # so at this time send_post_request is not functioning normally&lt;br /&gt;
      # if it functions correctly, it will update the reviewer's reputation score according to the selected reputation algorithm.&lt;br /&gt;
&lt;br /&gt;
      # get :send_post_request, params, session&lt;br /&gt;
      # expect(response).to redirect_to '/reputation_web_service/client'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
=== Testing Results ===&lt;br /&gt;
&lt;br /&gt;
=== Coverage ===&lt;br /&gt;
&lt;br /&gt;
=== Relevant Links ===&lt;br /&gt;
Github Repo : [https://github.com/HenryChen34/expertiza/tree/beta]&lt;br /&gt;
&lt;br /&gt;
Pull request: [https://github.com/expertiza/expertiza/pull/2128]&lt;br /&gt;
&lt;br /&gt;
== Future Tasks ==&lt;br /&gt;
As our team could not obtain the public/private key pair needed to access the reputation web service, we were only able to get to the step just before sending the JSON to the reputation-algorithm web service. Therefore, future steps are required to test the reputation system. &lt;br /&gt;
&lt;br /&gt;
# The current setup is based on assignment_1, which uses Lauw's algorithm to calculate reputation scores. A future testing team needs to set up a similar object set for assignment_2, which uses Hamer's algorithm. &lt;br /&gt;
# In send_post_request, there are references to specific assignments, such as 724, 735, and 756. They were put in to gather data for a paper published in 2015. They are no longer relevant and should be removed. (Needs to be replaced by the comment the professor mentioned in the meeting today.)&lt;br /&gt;
# Round 1 and round 2?&lt;br /&gt;
# db_query violates the DRY principle, as it repeatedly calculates the score sum for the assignment. Such sum calculations should be handled in assignment.rb.&lt;br /&gt;
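The last task could be approached by moving the sum into the model. The sketch below is purely illustrative: `total_review_score` and the simplified `Assignment` class are hypothetical stand-ins, not the real ActiveRecord model in assignment.rb.&lt;br /&gt;

```ruby
# Illustrative extraction of the repeated sum into an Assignment model.
# Each response object is assumed to expose #total_score.
class Assignment
  def initialize(responses)
    @responses = responses
  end

  # Computed once here, instead of being re-derived inside db_query.
  def total_review_score
    @responses.sum(&:total_score)
  end
end
```

db_query could then call assignment.total_review_score instead of summing scores inline.&lt;br /&gt;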
&lt;br /&gt;
== Collaborators ==&lt;br /&gt;
Jinku Cui (jcui23)&lt;br /&gt;
&lt;br /&gt;
Henry Chen (hchen34)&lt;br /&gt;
&lt;br /&gt;
Dong Li (dli35)&lt;br /&gt;
&lt;br /&gt;
Zijun Lu (zlu5)&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://doi.org/10.1109/FIE.2015.7344292 Pluggable reputation systems for peer review: A web-service approach]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142340</id>
		<title>CSC/ECE 517 Fall 2021 - E2168. Testing - Reputations</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142340"/>
		<updated>2021-11-30T04:41:25Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* Relevant Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;q&amp;gt;Online peer-review systems are now in common use in higher education. They free the instructor and course staff from having to provide personally all the feedback that students receive on their work. However, if we want to assure that all students receive competent feedback, or even use peer-assigned grades, we need a way to judge which peer reviewers are most credible. The solution is to use a reputation system.&amp;lt;/q&amp;gt; &amp;lt;br&amp;gt; The reputation system is meant to provide an objective value for student-assigned peer-review scores. Students select from a list of tasks to be performed and then prepare their work and submit it to a peer-review system. The work is then reviewed by other students who offer comments/graded feedback to help the submitters improve their work.&lt;br /&gt;
During the peer-review period it is important to determine which reviews are more accurate and of higher quality. Reputation is one way to achieve this goal: it is a quantitative measure of how reliable each peer reviewer is.&lt;br /&gt;
Peer reviewers use Expertiza to score an author's work. If Expertiza shows a confidence rating for each grade based on the reviewer's reputation, authors can more easily judge the legitimacy of a peer-assigned score. In addition, the teaching staff can examine the quality of each peer review based on reputation values and, potentially, crowd-source a significant portion of the grading function.&lt;br /&gt;
The reputation system is currently implemented in Expertiza through a web service, but no tests have been written for it. Our goal is therefore to set up assignments and reviews that would produce specific reputation scores, and to test that the correct reputations are in fact being produced.&lt;br /&gt;
&lt;br /&gt;
=== System Design ===&lt;br /&gt;
The diagram below is taken from project E1625 and gives an overall description of the reputation system.&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Diagram E1625.png|1000px]]&amp;lt;br&amp;gt;&lt;br /&gt;
Two algorithms are intended for use in calculating the reputation values for participants. &lt;br /&gt;
&lt;br /&gt;
There is a [https://expertiza.ncsu.edu/reputation_web_service/client web-service] (accessible only to instructors) that serves a JSON response containing the reputation value, seeded with the last known reputation value stored in the ''participants'' table. An instructor can specify which algorithm to use for a particular assignment to calculate the confidence rating. &lt;br /&gt;
&lt;br /&gt;
As the [https://docs.google.com/viewer?url=https%3A%2F%2Fwww.researchgate.net%2Fprofile%2FYang_Song36%2Fpublication%2F289528736_Pluggable_Reputation_Systems_for_Peer_Review_a_Web-Service_Approach%2Flinks%2F568ec8d008ae78cc05160aed.pdf%3FinViewer%3D0%26pdfJsDownload%3D0%26origin%3Dpublication_detail paper] on reputation systems observes, “the Hamer-peer algorithm has the lowest maximum absolute bias and the Lauw-peer algorithm has the lowest overall bias. This indicates, from the instructor’s perspective, if there are further assignments of this kind, expert grading may not be necessary.”&lt;br /&gt;
:;Reputation range of Hamer’s algorithm:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.5 and &amp;lt;= 1&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 1 and &amp;lt;= 1.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 1.5 and &amp;lt;= 2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The main difference between the Hamer-peer and the Lauw-peer algorithm is that the Lauw-peer algorithm keeps track of the reviewer's leniency (“bias”), which can be either positive or negative. A positive leniency indicates the reviewer tends to give higher scores than average.  This project determines reputation by subtracting the absolute value of the leniency from 1. Additionally, the range for Hamer’s algorithm is (0,∞) while for Lauw’s algorithm it is [0,1]. &lt;br /&gt;
:;Reputation range of Lauw’s algorithm:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.2 and &amp;lt;= 0.4&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 0.4 and &amp;lt;= 0.6&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 0.6 and &amp;lt;= 0.8&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 0.8&lt;br /&gt;
&lt;br /&gt;
The instructor can choose to show results from Hamer’s algorithm or Lauw’s algorithm. The default algorithm should be Lauw’s algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
* Double and stub an assignment and several submissions to it, under different review rubrics&lt;br /&gt;
* Manually calculate reputation scores based on the paper &amp;quot;Pluggable reputation systems for peer review: A web-service approach&amp;quot;&lt;br /&gt;
* Validate that the reputation scores generated by the reputation service under different review rubrics match the manual computation using Hamer's and Lauw's algorithms.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
*reputation_web_service_controller_spec&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Setup Testing Objects ===&lt;br /&gt;
To test the reputation system, it is crucial to create sample reviews from which reputation scores can be obtained. During the kickoff meeting, our team defined four necessary steps to follow for testing, and the corresponding objects can be created and configured as discussed below.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Creating each of these objects contributes to the success of the reputation tests that follow. Some fields in each object can be empty or have default values, and some attributes are not relevant to the test. When implementing a test, the test scripts need to generate or set fixed values for the corresponding fields.&lt;br /&gt;
&lt;br /&gt;
==== Assignment ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[assignments]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
submitter_count = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews = 3;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviewers = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews_allowed = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
rounds_of_reviews = 2;&amp;lt;br&amp;gt;&lt;br /&gt;
reputation_algorithm = lauw/hamer;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: Two assignment objects were created: assignment_1 uses lauw's algorithm, whereas assignment_2 uses hamer's algorithm.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @assignment_1 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'lauw', id: 1)&lt;br /&gt;
  @assignment_2 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'hamer', id: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questionnaires (Rubrics) ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[assignment_questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
min_question_score = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
max_question_score = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
type = ReviewQuestionnaire;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: We define the assignment with a rubric of type ReviewQuestionnaire. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @questionnaire_1 = create(:questionnaire, min_question_score: 0, max_question_score: 5, type: 'ReviewQuestionnaire', id: 1)&lt;br /&gt;
  # assignment_questionnaire_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means the #j-th round of review for assignment #i.&lt;br /&gt;
  @assignment_questionnaire_1_1 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_1_2 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
  @assignment_questionnaire_2_1 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_2_2 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questions under Questionnaires ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questions]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # question_i_j means question #j in questionnaire #i.&lt;br /&gt;
  @question_1_1 = create(:question, questionnaire_id: @questionnaire_1.id, id: 1)&lt;br /&gt;
  @question_1_2 = create(:question, questionnaire_id: @questionnaire_1.id, id: 2)&lt;br /&gt;
  @question_1_3 = create(:question, questionnaire_id: @questionnaire_1.id, id: 3)&lt;br /&gt;
  @question_1_4 = create(:question, questionnaire_id: @questionnaire_1.id, id: 4)&lt;br /&gt;
  @question_1_5 = create(:question, questionnaire_id: @questionnaire_1.id, id: 5)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Reviewers and Reviewees ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[participants]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[teams]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Reviewers (Participant):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewer_1 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_2 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_3 = create(:participant, can_review: 1)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Reviewees (Teams):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewee_1 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_2 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_3 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Responses ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[response_maps]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[responses]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
reviewed_object_id = &amp;lt;target_assignment&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewer_id = &amp;lt;target_reviewer&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewee_id = &amp;lt;target_reviewee&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
Note: The response map is set up to record the relationship between the reviewer and the reviewee of an assignment.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Response_maps:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_map_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_map_1_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_2_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_3_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Responses:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_1_1 = create(:response, is_submitted: true, map_id: @response_map_1_1.id)&lt;br /&gt;
  @response_1_2 = create(:response, is_submitted: true, map_id: @response_map_1_2.id)&lt;br /&gt;
  @response_1_3 = create(:response, is_submitted: true, map_id: @response_map_1_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_2_1 = create(:response, is_submitted: true, map_id: @response_map_2_1.id)&lt;br /&gt;
  @response_2_2 = create(:response, is_submitted: true, map_id: @response_map_2_2.id)&lt;br /&gt;
  @response_2_3 = create(:response, is_submitted: true, map_id: @response_map_2_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_3_1 = create(:response, is_submitted: true, map_id: @response_map_3_1.id)&lt;br /&gt;
  @response_3_2 = create(:response, is_submitted: true, map_id: @response_map_3_2.id)&lt;br /&gt;
  @response_3_3 = create(:response, is_submitted: true, map_id: @response_map_3_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Relevant Methods ===&lt;br /&gt;
==== db_query ====&lt;br /&gt;
db_query is the standard database query method; calling it returns the peer-review grades for a given assignment id.&lt;br /&gt;
We test this method in two respects: &lt;br /&gt;
1. whether the returned grades are correct for the specified algorithm, and&lt;br /&gt;
2. whether the query itself is correct.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test db_query' do&lt;br /&gt;
    it 'return average score' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [1, 2, 3, 4, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 2)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 4)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      result = ReputationWebServiceController.new.db_query(1, 1, false)&lt;br /&gt;
      #expect to see a data array return generated by the score given.&lt;br /&gt;
      expect(result).to eq([[2, 1, 60.0]])&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
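The expected 60.0 above can be checked by hand: five answers [1, 2, 3, 4, 5] against a maximum of 5 points per question give 15/25 = 60%. A minimal sketch of that arithmetic; the helper name review_percentage is ours, not Expertiza's.&lt;br /&gt;

```ruby
# Illustrative helper (not an Expertiza method): percentage score of one
# review, given its per-question answers and the rubric's max score.
def review_percentage(answers, max_question_score)
  100.0 * answers.sum / (answers.length * max_question_score)
end

review_percentage([1, 2, 3, 4, 5], 5)  # => 60.0
```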
&lt;br /&gt;
==== json_generator ====&lt;br /&gt;
This method generates a hash representation of the reviews. We test it by calling it, converting the result to JSON, and checking the printed output for correctness.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test json_generator' do&lt;br /&gt;
    it 'test 3 reviewer for one reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_2's review for reviewee_1: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_3's review for reviewee_1: [1, 1, 1, 1, 1]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0, &amp;quot;stu3&amp;quot;=&amp;gt;60.0, &amp;quot;stu4&amp;quot;=&amp;gt;20.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    it 'test same reviewer for different reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_1's review for reviewee_2: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq(&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0}, &amp;quot;submission2&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;60.0})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
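The expected hash in the first example can be reconstructed by hand from the same percentage arithmetic. A sketch, assuming the submission&amp;lt;i&amp;gt;/stu&amp;lt;j&amp;gt; key naming shown in the expectations above:&lt;br /&gt;

```ruby
# Rebuild the expected json_generator result for the three-reviewer case:
# each reviewer's answer list is reduced to a percentage of 25 points.
reviews  = { 'stu2' => [5, 5, 5, 5, 5],
             'stu3' => [3, 3, 3, 3, 3],
             'stu4' => [1, 1, 1, 1, 1] }
expected = { 'submission1' =>
             reviews.transform_values { |a| 100.0 * a.sum / (a.length * 5) } }
# expected == {"submission1"=>{"stu2"=>100.0, "stu3"=>60.0, "stu4"=>20.0}}
```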
&lt;br /&gt;
==== send_post_request ==== &lt;br /&gt;
This method sends a POST request to peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms to compute the reputation result, shows the result in the corresponding UI, and updates the given reviewer's reputation. We would test this method against the algorithms in the paper: first the returned reputation value, then the updated value in the database.&lt;br /&gt;
&lt;br /&gt;
Note that this method is not functioning due to the missing public &amp;amp; private key files for RSA encryption. The algorithm involving &amp;quot;expert grade&amp;quot; was also not implemented. &lt;br /&gt;
Thus, this method could not be properly tested. However, we have provided a template for creating these tests in the future.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test send_post_request' do&lt;br /&gt;
    it 'failed because of no public key file' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # choose hamer algorithm without expert grade (instructor's given grade)&lt;br /&gt;
      params = {assignment_id: 1, round_num: 1, algorithm: 'hamer', checkbox: {expert_grade: &amp;quot;empty&amp;quot;}}&lt;br /&gt;
      session = {user: build(:instructor, id: 1)}&lt;br /&gt;
&lt;br /&gt;
      expect(true).to eq(true)&lt;br /&gt;
&lt;br /&gt;
      # comment out because send_post_request method request public key file while this file is missing&lt;br /&gt;
      # so at this time send_post_request is not functioning normally&lt;br /&gt;
      # if it functions correctly, it will update the reviewer's reputation score according to the selected reputation algorithm.&lt;br /&gt;
&lt;br /&gt;
      # get :send_post_request, params, session&lt;br /&gt;
      # expect(response).to redirect_to '/reputation_web_service/client'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
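Once key files are supplied, the RSA step itself is straightforward with Ruby's standard openssl library. The sketch below uses a throwaway generated key pair purely to show the sign/verify shape a future test could exercise; it is not the controller's actual code.&lt;br /&gt;

```ruby
require 'openssl'
require 'json'

# Generate a throwaway RSA key pair; in the real controller the public and
# private keys would be read from the (currently missing) PEM files.
key     = OpenSSL::PKey::RSA.new(2048)
payload = { 'submission1' => { 'stu2' => 100.0 } }.to_json

# Sign the JSON payload and verify the signature, the kind of step
# send_post_request needs before posting to the reputation web service.
signature = key.sign(OpenSSL::Digest.new('SHA256'), payload)
key.verify(OpenSSL::Digest.new('SHA256'), signature, payload)  # => true
```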
&lt;br /&gt;
== Results ==&lt;br /&gt;
=== Testing Results ===&lt;br /&gt;
&lt;br /&gt;
=== Coverage ===&lt;br /&gt;
&lt;br /&gt;
=== Relevant Links ===&lt;br /&gt;
Github Repo : [https://github.com/HenryChen34/expertiza/tree/beta]&lt;br /&gt;
Pull request: [https://github.com/expertiza/expertiza/pull/2128]&lt;br /&gt;
&lt;br /&gt;
== Future Tasks ==&lt;br /&gt;
As our team could not obtain the public/private key pair needed to access the reputation web service, we were only able to get to the step just before sending the JSON to the reputation-algorithm web service. Therefore, the following steps are required to complete testing of the reputation system. &lt;br /&gt;
&lt;br /&gt;
# The current setup is based on assignment_1, which uses lauw's algorithm to calculate reputation scores. A future testing team needs to set up a similar object set for assignment_2, which uses hamer's algorithm. &lt;br /&gt;
# In send_post_request, there are references to specific assignments, such as 724, 735, and 756. They were put in to gather data for a paper published in 2015; they are no longer relevant and should be removed. (To be replaced by the comment the professor mentioned in the meeting.)&lt;br /&gt;
# Extend the tests to cover both round 1 and round 2 of reviews.&lt;br /&gt;
# The db_query method violates the DRY principle, as it repeatedly calculates score sums for the assignment. Such sum calculations should be handled in assignment.rb.&lt;br /&gt;
&lt;br /&gt;
== Collaborators ==&lt;br /&gt;
Jinku Cui (jcui23)&lt;br /&gt;
&lt;br /&gt;
Henry Chen (hchen34)&lt;br /&gt;
&lt;br /&gt;
Dong Li (dli35)&lt;br /&gt;
&lt;br /&gt;
Zijun Lu (zlu5)&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://doi.org/10.1109/FIE.2015.7344292 Pluggable reputation systems for peer review: A web-service approach]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142318</id>
		<title>CSC/ECE 517 Fall 2021 - E2168. Testing - Reputations</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=142318"/>
		<updated>2021-11-30T04:28:57Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;q&amp;gt;Online peer-review systems are now in common use in higher education. They free the instructor and course staff from having to provide personally all the feedback that students receive on their work. However, if we want to assure that all students receive competent feedback, or even use peer-assigned grades, we need a way to judge which peer reviewers are most credible. The solution is to use a reputation system.&amp;lt;/q&amp;gt; &amp;lt;br&amp;gt; The reputation system is meant to provide objective value to student assigned peer review scores. Students select from a list of tasks to be performed and then prepare their work and submit it to a peer-review system. The work is then reviewed by other students who offer comments/graded feedback to help the submitters improve their work.&lt;br /&gt;
During the peer-review period it is important to determine which reviews are more accurate and of higher quality. Reputation is one way to achieve this goal; it is a quantitative measure for judging which peer reviewers are more reliable.&lt;br /&gt;
Peer reviewers can use Expertiza to score an author. If Expertiza shows a confidence rating for grades based upon the reviewer's reputation, then authors can more easily judge the legitimacy of the peer-assigned score. In addition, the teaching staff can examine the quality of each peer review based on reputation values and, potentially, crowd-source a significant portion of the grading function.&lt;br /&gt;
Currently the reputation system is implemented in Expertiza through a web service, but no tests are written for it. Thus our goal is to set up assignments and reviews that would produce specific reputation scores, and to test that the correct reputations are in fact being produced.&lt;br /&gt;
&lt;br /&gt;
=== System Design ===&lt;br /&gt;
The diagram below is taken from project E1625 and gives an overall description of the reputation system.&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Diagram E1625.png|1000px]]&amp;lt;br&amp;gt;&lt;br /&gt;
There are two algorithms intended for calculating the reputation values of participants. &lt;br /&gt;
&lt;br /&gt;
There is a [https://expertiza.ncsu.edu/reputation_web_service/client web-service] (the link is accessible only to instructors) available which serves a JSON response containing the reputation value, based on the seed provided in the form of the last known reputation value, which we store in the ''participants'' table. An instructor can specify which algorithm to use for a particular assignment to calculate the confidence rating. &lt;br /&gt;
&lt;br /&gt;
As the [https://docs.google.com/viewer?url=https%3A%2F%2Fwww.researchgate.net%2Fprofile%2FYang_Song36%2Fpublication%2F289528736_Pluggable_Reputation_Systems_for_Peer_Review_a_Web-Service_Approach%2Flinks%2F568ec8d008ae78cc05160aed.pdf%3FinViewer%3D0%26pdfJsDownload%3D0%26origin%3Dpublication_detail paper] on reputation systems observes, “the Hamer-peer algorithm has the lowest maximum absolute bias and the Lauw-peer algorithm has the lowest overall bias. This indicates, from the instructor’s perspective, if there are further assignments of this kind, expert grading may not be necessary.”&lt;br /&gt;
:;Reputation range of Hamer’s algorithm is:&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value &amp;lt; 0.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.5 and &amp;lt;= 1&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 1 and &amp;lt;= 1.5&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 1.5 and &amp;lt;= 2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The main difference between the Hamer-peer and the Lauw-peer algorithm is that the Lauw-peer algorithm keeps track of the reviewer's leniency (“bias”), which can be either positive or negative. A positive leniency indicates the reviewer tends to give higher scores than average.  This project determines reputation by subtracting the absolute value of the leniency from 1. Additionally, the range for Hamer’s algorithm is (0,∞) while for Lauw’s algorithm it is [0,1]. &lt;br /&gt;
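The leniency-to-reputation rule just described is a one-liner; a minimal sketch, with a method name of our own choosing rather than Expertiza's:&lt;br /&gt;

```ruby
# Reputation under Lauw's scheme as described above: 1 minus the absolute
# value of the reviewer's leniency ("bias"). Method name is illustrative.
def lauw_reputation(leniency)
  1.0 - leniency.abs
end

lauw_reputation(0.3)   # => 0.7
lauw_reputation(-0.3)  # => 0.7
```

Note that a lenient reviewer (positive bias) and a harsh one (negative bias) of equal magnitude receive the same reputation, which keeps the result inside Lauw's [0, 1] range for any leniency in [-1, 1].&lt;br /&gt;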
:;Reputation range of Lauw’s algorithm is&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;                   value is &amp;lt; 0.2&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: yellow&amp;quot;&amp;gt;yellow&amp;lt;/span&amp;gt;              value is &amp;gt;= 0.2 and &amp;lt;= 0.4&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: orange&amp;quot;&amp;gt;orange&amp;lt;/span&amp;gt;             value is &amp;gt; 0.4 and &amp;lt;= 0.6&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: HoneyDew&amp;quot;&amp;gt;light green&amp;lt;/span&amp;gt;       value is &amp;gt; 0.6 and &amp;lt;= 0.8&lt;br /&gt;
:&amp;lt;span style=&amp;quot;background-color: Chartreuse&amp;quot;&amp;gt;green&amp;lt;/span&amp;gt;               value is &amp;gt; 0.8&lt;br /&gt;
&lt;br /&gt;
The instructor can choose to show results from Hamer’s algorithm or Lauw’s algorithm. The default algorithm should be Lauw’s algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
* Double and stub an assignment, a few submissions to the assignment, under different review rubrics&lt;br /&gt;
* Manually calculate reputation scores based on paper &amp;quot;Pluggable reputation systems for peer review: A web-service approach&amp;quot;&lt;br /&gt;
* Validate correct reputation scores based on different review rubrics generated by reputation management VS manual computation of reputation score using the Hamer's and Lauw's algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
*reputation_web_service_controller_spec&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Setup Testing Objects ===&lt;br /&gt;
To test the reputation system, we first need sample reviews from which reputation scores can be computed. During the kickoff meeting, our team defined four necessary setup steps; the objects each step requires are created and configured as discussed below.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Each of these objects is required for the reputation tests that follow. Some fields can be left empty or take default values, and some attributes are irrelevant to the tests; the test scripts only need to generate or set fixed values for the fields listed below.&lt;br /&gt;
&lt;br /&gt;
==== Assignment ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[assignments]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
submitter_count = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews = 3;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviewers = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
num_reviews_allowed = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
rounds_of_reviews = 2;&amp;lt;br&amp;gt;&lt;br /&gt;
reputation_algorithm = lauw/hamer;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: Two assignment objects were created: assignment_1 uses lauw's algorithm, whereas assignment_2 uses hamer's algorithm.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @assignment_1 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'lauw', id: 1)&lt;br /&gt;
  @assignment_2 = create(:assignment, created_at: DateTime.now.in_time_zone - 13.day, submitter_count: 0, num_reviews: 3, num_reviewers: 5, num_reviews_allowed: 5, rounds_of_reviews: 2, reputation_algorithm: 'hamer', id: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questionnaires (Rubrics) ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[assignment_questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
min_question_score = 0;&amp;lt;br&amp;gt;&lt;br /&gt;
max_question_score = 5;&amp;lt;br&amp;gt;&lt;br /&gt;
type = ReviewQuestionnaire;&amp;lt;br&amp;gt;&lt;br /&gt;
Note: We define the assignment with a rubric of type ReviewQuestionnaire. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @questionnaire_1 = create(:questionnaire, min_question_score: 0, max_question_score: 5, type: 'ReviewQuestionnaire', id: 1)&lt;br /&gt;
  # assignment_questionnaire_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means the #j-th round of review for assignment #i.&lt;br /&gt;
  @assignment_questionnaire_1_1 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_1_2 = create(:assignment_questionnaire, assignment_id: @assignment_1.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
  @assignment_questionnaire_2_1 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 1)&lt;br /&gt;
  @assignment_questionnaire_2_2 = create(:assignment_questionnaire, assignment_id: @assignment_2.id, questionnaire_id: @questionnaire_1.id, used_in_round: 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Questions under Questionnaires ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[questions]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # question_i_j means question #j in questionnaire #i.&lt;br /&gt;
  @question_1_1 = create(:question, questionnaire_id: @questionnaire_1.id, id: 1)&lt;br /&gt;
  @question_1_2 = create(:question, questionnaire_id: @questionnaire_1.id, id: 2)&lt;br /&gt;
  @question_1_3 = create(:question, questionnaire_id: @questionnaire_1.id, id: 3)&lt;br /&gt;
  @question_1_4 = create(:question, questionnaire_id: @questionnaire_1.id, id: 4)&lt;br /&gt;
  @question_1_5 = create(:question, questionnaire_id: @questionnaire_1.id, id: 5)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Reviewers and Reviewees ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[participants]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[teams]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Reviewers (Participant):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewer_1 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_2 = create(:participant, can_review: 1)&lt;br /&gt;
  @reviewer_3 = create(:participant, can_review: 1)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Reviewees (Teams):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  @reviewee_1 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_2 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
  @reviewee_3 = create(:assignment_team, assignment: @assignment)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Responses ====&lt;br /&gt;
* Objects involved &amp;lt;br&amp;gt;&lt;br /&gt;
[[response_maps]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[responses]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Essential Parameters to be configured&amp;lt;br&amp;gt;&lt;br /&gt;
reviewed_object_id = &amp;lt;target_assignment&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewer_id = &amp;lt;target_reviewer&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
reviewee_id = &amp;lt;target_reviewee&amp;gt;.id ; &amp;lt;br&amp;gt;&lt;br /&gt;
Note: The response map is set up to record the relationship between the reviewer and the reviewee of an assignment.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Code Implemented &amp;lt;br&amp;gt;&lt;br /&gt;
Response_maps:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_map_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_map_1_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
  @response_map_1_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_1.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_2_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
  @response_map_2_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_2.id)&lt;br /&gt;
&lt;br /&gt;
  @response_map_3_1 = create(:review_response_map, reviewer_id: @reviewer_1.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_2 = create(:review_response_map, reviewer_id: @reviewer_2.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
  @response_map_3_3 = create(:review_response_map, reviewer_id: @reviewer_3.id, reviewee_id: @reviewee_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Responses:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # response_&amp;lt;i&amp;gt;_&amp;lt;j&amp;gt; means response of reviewer #j to reviewee #i. &lt;br /&gt;
  @response_1_1 = create(:response, is_submitted: true, map_id: @response_map_1_1.id)&lt;br /&gt;
  @response_1_2 = create(:response, is_submitted: true, map_id: @response_map_1_2.id)&lt;br /&gt;
  @response_1_3 = create(:response, is_submitted: true, map_id: @response_map_1_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_2_1 = create(:response, is_submitted: true, map_id: @response_map_2_1.id)&lt;br /&gt;
  @response_2_2 = create(:response, is_submitted: true, map_id: @response_map_2_2.id)&lt;br /&gt;
  @response_2_3 = create(:response, is_submitted: true, map_id: @response_map_2_3.id)&lt;br /&gt;
&lt;br /&gt;
  @response_3_1 = create(:response, is_submitted: true, map_id: @response_map_3_1.id)&lt;br /&gt;
  @response_3_2 = create(:response, is_submitted: true, map_id: @response_map_3_2.id)&lt;br /&gt;
  @response_3_3 = create(:response, is_submitted: true, map_id: @response_map_3_3.id)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Relevant Methods ===&lt;br /&gt;
==== db_query ====&lt;br /&gt;
This is the standard database query method; calling it returns the peer-review grades for a given assignment id.&lt;br /&gt;
We will test this method in two respects: &lt;br /&gt;
1. Test whether the returned grades are correct according to the specified algorithm.&lt;br /&gt;
2. Test the correctness of the query itself.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test db_query' do&lt;br /&gt;
    it 'return average score' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [1, 2, 3, 4, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 2)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 4)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      result = ReputationWebServiceController.new.db_query(1, 1, false)&lt;br /&gt;
      # expect a data array generated from the given scores&lt;br /&gt;
      expect(result).to eq([[2, 1, 60.0]])&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
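The expected value 60.0 above comes from normalizing the average answer against the maximum possible score. A minimal sketch of that arithmetic (assuming a 5-point rubric scale, matching the five questions used; percentage_score is an illustrative helper, not an Expertiza method):&lt;br /&gt;

```ruby
# Illustrative helper (not part of Expertiza): normalize a set of rubric
# answers to a percentage, assuming each answer is on a 1..5 scale.
def percentage_score(answers, max_score: 5)
  (answers.sum.to_f / (answers.length * max_score)) * 100.0
end

percentage_score([1, 2, 3, 4, 5]) # => 60.0, matching [[2, 1, 60.0]] above
```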
&lt;br /&gt;
==== json_generator ====&lt;br /&gt;
This method generates a hash representation of the reviews. We test it by calling it, converting the result to JSON format, and comparing the printed output against the expected values.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test json_generator' do&lt;br /&gt;
    it 'test 3 reviewer for one reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_2's review for reviewee_1: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_2.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_3's review for reviewee_1: [1, 1, 1, 1, 1]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_3.id, answer: 1)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0, &amp;quot;stu3&amp;quot;=&amp;gt;60.0, &amp;quot;stu4&amp;quot;=&amp;gt;20.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    it 'test same reviewer for different reviewee' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # reviewer_1's review for reviewee_2: [3, 3, 3, 3, 3]&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_2_1.id, answer: 3)&lt;br /&gt;
&lt;br /&gt;
      result = ReputationWebServiceController.new.json_generator(1, 0, 1)&lt;br /&gt;
      expect(result).to eq({&amp;quot;submission1&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;100.0}, &amp;quot;submission2&amp;quot;=&amp;gt;{&amp;quot;stu2&amp;quot;=&amp;gt;60.0}})&lt;br /&gt;
      #repeat for different answers&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
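Since json_generator returns a plain Ruby hash, the comparison-by-conversion described above can be sketched with Ruby's standard JSON library (the hash literal mirrors the expected structure in the first test case; this is a sketch, not the method itself):&lt;br /&gt;

```ruby
require 'json'

# Sketch of the structure json_generator is expected to produce:
# submission keys map to { reviewer => normalized score } hashes.
result = { "submission1" => { "stu2" => 100.0, "stu3" => 60.0, "stu4" => 20.0 } }

puts JSON.generate(result)
```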
&lt;br /&gt;
==== send_post_request ==== &lt;br /&gt;
This method sends a POST request to peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms to calculate the reputation results, displays them in the corresponding UI, and updates each reviewer's reputation. We will test this method against the algorithm in the paper: first the resulting reputation values, then the updated values in the database.&lt;br /&gt;
&lt;br /&gt;
Note that this method is not currently functioning because the public &amp;amp; private key files needed for RSA encryption are missing. The algorithm involving an &amp;quot;expert grade&amp;quot; was also never implemented. &lt;br /&gt;
Thus, this method could not be properly tested; however, we have provided a template for creating such tests in the future.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  context 'test send_post_request' do&lt;br /&gt;
    it 'failed because of no public key file' do&lt;br /&gt;
      # reviewer_1's review for reviewee_1: [5, 5, 5, 5, 5]&lt;br /&gt;
      # create 5 answers for 5 related questions&lt;br /&gt;
      create(:answer, question_id: @question_1_1.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_2.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_3.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_4.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
      create(:answer, question_id: @question_1_5.id, response_id: @response_1_1.id, answer: 5)&lt;br /&gt;
&lt;br /&gt;
      # choose the 'hammer' algorithm without an expert grade (instructor-given grade)&lt;br /&gt;
      params = {assignment_id: 1, round_num: 1, algorithm: 'hammer', checkbox: {expert_grade: &amp;quot;empty&amp;quot;}}&lt;br /&gt;
      session = {user: build(:instructor, id: 1)}&lt;br /&gt;
&lt;br /&gt;
      expect(true).to eq(true)&lt;br /&gt;
&lt;br /&gt;
      # Commented out because send_post_request requests the public key file, which is missing,&lt;br /&gt;
      # so send_post_request is not currently functioning.&lt;br /&gt;
      # If it functioned correctly, it would update the reviewer's reputation score according to the selected reputation algorithm.&lt;br /&gt;
&lt;br /&gt;
      # get :send_post_request, params, session&lt;br /&gt;
      # expect(response).to redirect_to '/reputation_web_service/client'&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
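For reference, the core of such a POST request can be sketched with Ruby's standard Net::HTTP. The payload shape is an assumption based on json_generator's output, and the real method additionally RSA-encrypts data, which is why it cannot run without the key files; the request below is built but deliberately not sent:&lt;br /&gt;

```ruby
require 'net/http'
require 'json'
require 'uri'

# Hedged sketch only: build (but do not send) a POST to the reputation
# web service named in the text. The payload shape is assumed, not confirmed.
uri = URI('https://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms')
request = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json')
request.body = JSON.generate('submission1' => { 'stu2' => 100.0 })

# Sending is skipped here because the required RSA key files are missing:
# response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) { |h| h.request(request) }
```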
&lt;br /&gt;
== Results ==&lt;br /&gt;
=== Testing Results ===&lt;br /&gt;
&lt;br /&gt;
=== Coverage ===&lt;br /&gt;
&lt;br /&gt;
=== Relevant Links ===&lt;br /&gt;
&lt;br /&gt;
== Future Tasks ==&lt;br /&gt;
As our team could not obtain the public/private key pair needed to access the reputation web service, we were only able to reach the step just before sending the JSON to the reputation-algorithm web service. The following steps remain for future testing of the reputation system. &lt;br /&gt;
&lt;br /&gt;
# The current setup is based on assignment_1, which uses Lauw's algorithm to calculate reputation scores. A future testing team will need to set up a similar object set for assignment_2, which uses Hamer's algorithm. &lt;br /&gt;
# In send_post_request, there are references to specific assignments (724, 735, and 756). These were added to gather data for a paper published in 2015; they are no longer relevant and should be removed. (To be replaced by the comment the professor mentioned in the meeting.)&lt;br /&gt;
# Clarify how round 1 and round 2 reviews should be distinguished and tested.&lt;br /&gt;
# db_query violates the DRY principle, as it repeatedly calculates the assignment's score sums. This sum calculation should instead be handled in assignment.rb.&lt;br /&gt;
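The extraction suggested in the last task could look like the following sketch (the class and method names are illustrative, not the actual Expertiza API):&lt;br /&gt;

```ruby
# Hypothetical sketch: centralize the repeated score summing in one place
# instead of recomputing it inside db_query. Names are illustrative only.
class AssignmentScores
  def initialize(answers)
    @answers = answers # numeric peer-review answers for one response
  end

  def total
    @answers.sum
  end

  def average
    @answers.sum.to_f / @answers.length
  end
end

scores = AssignmentScores.new([1, 2, 3, 4, 5])
scores.total   # => 15
scores.average # => 3.0
```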
&lt;br /&gt;
== Collaborators ==&lt;br /&gt;
Jinku Cui (jcui23)&lt;br /&gt;
&lt;br /&gt;
Henry Chen (hchen34)&lt;br /&gt;
&lt;br /&gt;
Dong Li (dli35)&lt;br /&gt;
&lt;br /&gt;
Zijun Lu (zlu5)&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://doi.org/10.1109/FIE.2015.7344292 Pluggable reputation systems for peer review: A web-service approach]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=140878</id>
		<title>CSC/ECE 517 Fall 2021 - E2168. Testing - Reputations</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=140878"/>
		<updated>2021-11-04T03:08:14Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* Testing Objects */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
Our objectives for this project are the following:&lt;br /&gt;
* Double and stub an assignment, a few submissions to the assignment, under different review rubrics&lt;br /&gt;
* Manually calculate reputation scores based on paper &amp;quot;Pluggable reputation systems for peer review: A web-service approach&amp;quot;&lt;br /&gt;
* Validate correct reputation scores based on different review rubrics generated by reputation management VS manual computation of reputation score expectation&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
*reputation_web_service_controller_spec&lt;br /&gt;
*hamer_spec&lt;br /&gt;
*&amp;lt;del&amp;gt;lauw_spec (?)&amp;lt;/del&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Test Plan ==&lt;br /&gt;
&lt;br /&gt;
=== Testing Objects ===&lt;br /&gt;
The following objects will be created and configured for the purpose of testing. &lt;br /&gt;
&lt;br /&gt;
*[[assignments]]&amp;lt;br&amp;gt;&lt;br /&gt;
*[[submission_records]]&amp;lt;br&amp;gt;&lt;br /&gt;
*[[questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
*[[assignment_questionnaires]]&amp;lt;br&amp;gt;&lt;br /&gt;
*[[participants]]&amp;lt;br&amp;gt;&lt;br /&gt;
*[[teams]]&amp;lt;br&amp;gt;&lt;br /&gt;
*[[teams_users]]&amp;lt;br&amp;gt;&lt;br /&gt;
*[[users]]&amp;lt;br&amp;gt;&lt;br /&gt;
*[[responses]]&amp;lt;br&amp;gt;&lt;br /&gt;
*[[response_maps]]&lt;br /&gt;
&lt;br /&gt;
Each of these objects contributes to the success of the subsequent reputation tests. Some fields may be left empty or take default values, and some attributes are irrelevant to the tests; the test scripts need to generate or set fixed values for the fields that matter.&lt;br /&gt;
&lt;br /&gt;
=== Relevant Methods ===&lt;br /&gt;
==== hamer_spec ====&lt;br /&gt;
*calculate_weighted_scores_and_reputation&lt;br /&gt;
This is the main method of the Hamer algorithm. A reviewer's inaccuracy is calculated and updated from the past average, and the weight of each review is then adjusted based on that inaccuracy. We plan to create two contexts to test it: 1. when a review equals the average, the reviewer's inaccuracy should be minimized and the review's weight maximized; 2. when a review is furthest from the average, the reviewer's inaccuracy should be maximized and the review's weight minimized.&lt;br /&gt;
*converged?&lt;br /&gt;
This method checks that the calculated review weights have converged, as the algorithm requires. A simple fixture test can be done by giving the method a set of input data and comparing the result to the expected value.&lt;br /&gt;
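Such a fixture test only needs a convergence predicate. A minimal sketch follows; the tolerance value and the element-wise comparison are assumptions about how converged? behaves, not the actual implementation:&lt;br /&gt;

```ruby
# Illustrative convergence check: successive weight vectors are considered
# converged when every element changes by less than the tolerance.
def converged?(previous_weights, new_weights, tolerance = 0.001)
  previous_weights.zip(new_weights).all? { |prev, curr| (prev - curr).abs < tolerance }
end

converged?([1.0, 0.5], [1.0004, 0.4998]) # => true
converged?([1.0, 0.5], [1.2, 0.5])       # => false
```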
==== ReputationController_spec ====&lt;br /&gt;
*db_query&lt;br /&gt;
This is the standard database query method; calling it returns the peer-review grades for a given assignment id.&lt;br /&gt;
We will test this method in two respects: 1. whether the returned grades are correct according to the specified algorithm;&lt;br /&gt;
2. the correctness of the query itself.&lt;br /&gt;
&lt;br /&gt;
*db_query_with_quiz_score&lt;br /&gt;
This is a specialized database query; calling it returns the quiz scores for a given assignment id. We will test this method with the same logic as db_query.&lt;br /&gt;
&lt;br /&gt;
*json_generator&lt;br /&gt;
This method generates a hash representation of the reviews. We test it by calling it, converting the result to JSON format, and comparing the printed output against the expected values.&lt;br /&gt;
&lt;br /&gt;
*client&lt;br /&gt;
The client method populates many instance variables from corresponding class variables; we need to test this method together with send_post_request.&lt;br /&gt;
&lt;br /&gt;
*send_post_request&lt;br /&gt;
This method send a post request to peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms to calculate get the reputation result and use show the result &lt;br /&gt;
in corresponding UI and update given reviewer's reputation. We will test this method based on the algorithm in the paper, first to test the result reputation value, second to test the&lt;br /&gt;
update value in database.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
=== Collaborators ===&lt;br /&gt;
Jinku Cui (jcui23)&lt;br /&gt;
&lt;br /&gt;
Henry Chen (hchen34)&lt;br /&gt;
&lt;br /&gt;
Dong Li (dli35)&lt;br /&gt;
&lt;br /&gt;
Zijun Lu (zlu5)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://doi.org/10.1109/FIE.2015.7344292 Pluggable reputation systems for peer review: A web-service approach]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021&amp;diff=140874</id>
		<title>CSC/ECE 517 Fall 2021</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021&amp;diff=140874"/>
		<updated>2021-11-04T02:50:01Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* Final Projects */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== OSS Projects ==&lt;br /&gt;
&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2117. Refactor questionaires_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2128. Refactor student_quizzes_controller.rb &amp;amp; late_policies_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2129. Refactor auth_controller.rb &amp;amp; password_retrieval_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2132. Add tests cases for review mapping helper.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2134. Write unit tests for admin_controller.rb and institution_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2138. Auto-generate submission directory names based on assignment]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2142. Improve e-mail notifications]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2133. Write tests for popup_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2120. Refactor reputation_web_service_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2139. Remove multiple topics at a time]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2131. Improve assessment360_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2121. Refactor suggestion_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2122. Refactor impersonate_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2123. Refactor sign_up_sheet_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2126. Refactor account_request_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2124. Refactor review_mapping_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2125. Refactor review_mapping_helper.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2127. Refactor teams_controller]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2130. Refactor submitted_content_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2140. Create new late policy successfully and fix Bank link]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2141. OSS project Finklestein: Instructors &amp;amp; Institutions]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2144. Refactor delayed mailer and scheduled task]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2147. Role-based reviewing]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2145. OSS Project Beige]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2146. Introduce a Student View for instructors]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - Refactor Evaluation of SQL Queries]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2135. Email notification to reviewers and instructors]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2149. Finish Github metrics integration - Reputations]]&lt;br /&gt;
&lt;br /&gt;
== Final Projects ==&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2166._Testing_-_Scoring_%26_Grades#Description_about_project CSC/ECE 517 Fall 2021 - E2166. Testing - Scoring_and_Grades]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2150._Integrate_suggestion_detection_algorithm#Description_about_project CSC/ECE 517 Fall 2021 - E2150. Integrate suggestion detection algorithm]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2151._Allow_reviewers_to_bid_on_what_to_review CSC/ECE 517 Fall 2021 - E2151. Allow reviewers to bid on what to review]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2152._Revision_planning_tool#Description_about_project CSC/ECE 517 Fall 2021 - E2152. Revision_planning_tool]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2170._Testing_-_Response_Maps#Description_about_project CSC/ECE 517 Fall 2021 - E2170. Testing - Response Maps]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2153._Improving_search_facility_in_Expertiza#Description_about_project CSC/ECE 517 Fall 2021 - E2153. Improving search facility in Expertiza]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2162._Further_refactoring_and_improvement_of_review_mapping_helper CSC/ECE 517 Fall 2021 - E2162. Further refactoring and improvement of review mapping helper]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2163._Refactor_waitlist_functionality CSC/ECE 517 Fall 2021 - E2163.  Refactor waitlist functionality]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2165._Fix_teammate-review_view CSC/ECE 517 Fall 2021 - E2165. Fix teammate review view ]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2155._Calibration_submissions_should_be_copied_along_with_calibration_assignments CSC/ECE 517 Fall 2021 - E2155. Calibration submissions should be copied along with calibration assignments]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2148._Completion/Progress_View CSC/ECE 517 Fall 2021 - E2148. Completion/Progress view]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2158._Grading_audit_trail CSC/ECE 517 Fall 2021 - E2158. Grading audit trail ]&lt;br /&gt;
*[https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2159._Expertiza_internationalization CSC/ECE 517 Fall 2021 - E2159. Expertiza internationalization]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2160._Implementing_and_testing_import_export_controllers#Description_about_project CSC/ECE 517 Fall 2021 - E2160. Implementing and testing import and export controllers]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2164._Heatgrid_fixes_and_improvements#Description_about_project CSC/ECE 517 Fall 2021 - E2164. Heatgrid fixes and improvements]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2168. Testing - Reputations]]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2126._Testing_-_Team_Related_Files#Description_about_project CSC/ECE 517 Fall 2021 - E2126. Testing - Team Related Files]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021&amp;diff=140872</id>
		<title>CSC/ECE 517 Fall 2021</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021&amp;diff=140872"/>
		<updated>2021-11-04T02:49:07Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* OSS Projects */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== OSS Projects ==&lt;br /&gt;
&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2117. Refactor questionaires_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2128. Refactor student_quizzes_controller.rb &amp;amp; late_policies_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2129. Refactor auth_controller.rb &amp;amp; password_retrieval_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2132. Add tests cases for review mapping helper.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2134. Write unit tests for admin_controller.rb and institution_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2138. Auto-generate submission directory names based on assignment]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2142. Improve e-mail notifications]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2133. Write tests for popup_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2120. Refactor reputation_web_service_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2139. Remove multiple topics at a time]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2131. Improve assessment360_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2121. Refactor suggestion_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2122. Refactor impersonate_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2123. Refactor sign_up_sheet_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2126. Refactor account_request_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2124. Refactor review_mapping_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2125. Refactor review_mapping_helper.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2127. Refactor teams_controller]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2130. Refactor submitted_content_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2140. Create new late policy successfully and fix Bank link]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2141. OSS project Finklestein: Instructors &amp;amp; Institutions]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2144. Refactor delayed mailer and scheduled task]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2147. Role-based reviewing]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2145. OSS Project Beige]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2146. Introduce a Student View for instructors]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - Refactor Evaluation of SQL Queries]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2135. Email notification to reviewers and instructors]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2149. Finish Github metrics integration - Reputations]]&lt;br /&gt;
&lt;br /&gt;
== Final Projects ==&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2166._Testing_-_Scoring_%26_Grades#Description_about_project CSC/ECE 517 Fall 2021 - E2166. Testing - Scoring_and_Grades]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2150._Integrate_suggestion_detection_algorithm#Description_about_project CSC/ECE 517 Fall 2021 - E2150. Integrate suggestion detection algorithm]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2151._Allow_reviewers_to_bid_on_what_to_review CSC/ECE 517 Fall 2021 - E2151. Allow reviewers to bid on what to review]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2152._Revision_planning_tool#Description_about_project CSC/ECE 517 Fall 2021 - E2152. Revision_planning_tool]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2170._Testing_-_Response_Maps#Description_about_project CSC/ECE 517 Fall 2021 - E2170. Testing - Response Maps]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2153._Improving_search_facility_in_Expertiza#Description_about_project CSC/ECE 517 Fall 2021 - E2153. Improving search facility in Expertiza]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2162._Further_refactoring_and_improvement_of_review_mapping_helper CSC/ECE 517 Fall 2021 - E2162. Further refactoring and improvement of review mapping helper]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2163._Refactor_waitlist_functionality CSC/ECE 517 Fall 2021 - E2163.  Refactor waitlist functionality]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2165._Fix_teammate-review_view CSC/ECE 517 Fall 2021 - E2165. Fix teammate review view ]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2155._Calibration_submissions_should_be_copied_along_with_calibration_assignments CSC/ECE 517 Fall 2021 - E2155. Calibration submissions should be copied along with calibration assignments]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2148._Completion/Progress_View CSC/ECE 517 Fall 2021 - E2148. Completion/Progress view]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2158._Grading_audit_trail CSC/ECE 517 Fall 2021 - E2158. Grading audit trail ]&lt;br /&gt;
*[https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2159._Expertiza_internationalization CSC/ECE 517 Fall 2021 - E2159. Expertiza internationalization]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2160._Implementing_and_testing_import_export_controllers#Description_about_project CSC/ECE 517 Fall 2021 - E2160. Implementing and testing import and export controllers]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2164._Heatgrid_fixes_and_improvements#Description_about_project CSC/ECE 517 Fall 2021 - E2164. Heatgrid fixes and improvements]&lt;br /&gt;
* [https://expertiza.csc.ncsu.edu/index.php/CSC/ECE_517_Fall_2021_-_E2126._Testing_-_Team_Related_Files#Description_about_project CSC/ECE 517 Fall 2021 - E2126. Testing - Team Related Files]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=140223</id>
		<title>CSC/ECE 517 Fall 2021 - E2168. Testing - Reputations</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=140223"/>
		<updated>2021-10-27T03:41:19Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==References==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://doi.org/10.1109/FIE.2015.7344292 Pluggable reputation systems for peer review: A web-service approach]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=140220</id>
		<title>CSC/ECE 517 Fall 2021 - E2168. Testing - Reputations</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2168._Testing_-_Reputations&amp;diff=140220"/>
		<updated>2021-10-27T03:22:47Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* E2168. Testing - Reputations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Initial Create of the page.&lt;br /&gt;
&lt;br /&gt;
More update will be added.&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021&amp;diff=140219</id>
		<title>CSC/ECE 517 Fall 2021</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021&amp;diff=140219"/>
		<updated>2021-10-27T03:21:23Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* OSS Projects */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== OSS Projects ==&lt;br /&gt;
&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2117. Refactor questionaires_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2128. Refactor student_quizzes_controller.rb &amp;amp; late_policies_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2129. Refactor auth_controller.rb &amp;amp; password_retrieval_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2132. Add tests cases for review mapping helper.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2134. Write unit tests for admin_controller.rb and institution_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2138. Auto-generate submission directory names based on assignment]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2142. Improve e-mail notifications]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2133. Write tests for popup_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2120. Refactor reputation_web_service_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2139. Remove multiple topics at a time]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2131. Improve assessment360_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2121. Refactor suggestion_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2122. Refactor impersonate_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2123. Refactor sign_up_sheet_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2126. Refactor account_request_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2124. Refactor review_mapping_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2125. Refactor review_mapping_helper.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2127. Refactor teams_controller]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2130. Refactor submitted_content_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2141. OSS project Finklestein: Instructors &amp;amp; Institutions]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2147. Role-based reviewing]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2145. OSS Project Beige]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2146. Introduce a Student View for instructors]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - Refactor Evaluation of SQL Queries]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2135. Email notification to reviewers and instructors]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2168. Testing - Reputations]]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2135._Email_notification_to_reviewers_and_instructors&amp;diff=140211</id>
		<title>CSC/ECE 517 Fall 2021 - E2135. Email notification to reviewers and instructors</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2135._Email_notification_to_reviewers_and_instructors&amp;diff=140211"/>
		<updated>2021-10-27T03:09:36Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* Create the page in the list of OSS Projects of CSC/ECE 517 Fall 2021*/&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a description of the Expertiza based OSS project.&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
==About Expertiza==&lt;br /&gt;
&lt;br /&gt;
[http://expertiza.ncsu.edu/ Expertiza] is an open-source project developed with the Ruby on Rails framework. Expertiza allows instructors to create new assignments and customize new or existing assignments. The application allows students to submit and peer-review learning objects (articles, code, websites, etc.)[1]. Expertiza supports submissions of various document types, including URLs and wiki pages.&lt;br /&gt;
&lt;br /&gt;
==Problem Statement==&lt;br /&gt;
&lt;br /&gt;
When an assignment or review approaches its deadline on Expertiza, students should receive a reminder email at a time before the deadline that the instructor has preconfigured. The lack of this functionality sometimes causes students to miss their submission deadlines and lose marks. This amendment therefore adds an asynchronous deadline-reminder mailer to the application.&lt;br /&gt;
&lt;br /&gt;
==Modified Files==&lt;br /&gt;
* app/models/due_date.rb&lt;br /&gt;
* test/models/due_date.rb&lt;br /&gt;
* db/migrate/20210319212323_create_delayed_jobs.rb&lt;br /&gt;
&lt;br /&gt;
==Pre-config==&lt;br /&gt;
&lt;br /&gt;
We use the '''delayed_job''' library to handle delayed email.&lt;br /&gt;
Run the following command to install the '''delayed_job''' binary executable.&lt;br /&gt;
&lt;br /&gt;
    rails generate delayed_job&lt;br /&gt;
&lt;br /&gt;
Then we need to run&lt;br /&gt;
&lt;br /&gt;
    RAILS_ENV=development bin/delayed_job start&lt;br /&gt;
&lt;br /&gt;
This starts a background worker that processes the delayed jobs.&lt;br /&gt;
To stop it, run the following.&lt;br /&gt;
&lt;br /&gt;
    RAILS_ENV=development bin/delayed_job stop&lt;br /&gt;
&lt;br /&gt;
==Implementation approach==&lt;br /&gt;
&lt;br /&gt;
'''1) Reminder email sent when assignment or review is approaching deadline:'''&lt;br /&gt;
In '''due_date.rb''', whenever a due date is created or updated, the '''start_reminder''' method fires and adds a job to the delayed_job queue. That job executes at a preconfigured time before the deadline, firing the '''reminder''' method, which is made asynchronous by the handle_asynchronously method of the '''delayed_job_active_record''' gem. Inside the reminder method we fetch three attributes: '''''assignment_id''''', '''''deadline_type''''', and '''''due_at'''''. These determine the deadline type (submission, review, or teammate review), the participant emails for the assignment, and the deadline threshold; the reminder email, sent at the threshold time before the deadline, contains the assignment name, a link to the assignment, and the deadline type.&lt;br /&gt;
&lt;br /&gt;
'''2) Implement code:''' &lt;br /&gt;
&lt;br /&gt;
  def create_mailer_object&lt;br /&gt;
    Mailer.new&lt;br /&gt;
  end&lt;br /&gt;
  def create_mailworker_object&lt;br /&gt;
    MailWorker.new(self.parent_id, self.deadline_type, self.due_at)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # main function to start email reminder&lt;br /&gt;
  def start_reminder&lt;br /&gt;
    puts when_to_run_reminder&lt;br /&gt;
    if self.changed?&lt;br /&gt;
      @extra_param = self.parent_id.to_s + &amp;quot;,&amp;quot; + self.deadline_type_id.to_s&lt;br /&gt;
      # first deleted existed delayed jobs with same parent_id(which is assignment id actually)&lt;br /&gt;
      Delayed::Job.where(extra_param: @extra_param).each do |job|&lt;br /&gt;
        job.delete&lt;br /&gt;
      end&lt;br /&gt;
      # add a delayed job to the delayed job queue, the job will run at what when_to_run_reminder return&lt;br /&gt;
      run_at_time = when_to_run_reminder&lt;br /&gt;
      if run_at_time &amp;gt;= 0.seconds.from_now&lt;br /&gt;
        self.delay(run_at: run_at_time, :extra_param =&amp;gt; @extra_param).reminder&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
  ####&lt;br /&gt;
  def reminder&lt;br /&gt;
    deadline_text = self.deadline_type if %w[submission review].include? self.deadline_type&lt;br /&gt;
    deadline_text = &amp;quot;Team Review&amp;quot; if self.deadline_type == 'metareview'&lt;br /&gt;
    mail_worker = create_mailworker_object&lt;br /&gt;
    email_reminder(mail_worker.find_participant_emails, deadline_text) unless mail_worker.find_participant_emails.empty?&lt;br /&gt;
  end&lt;br /&gt;
  ####&lt;br /&gt;
  def email_reminder(emails, deadline_type)&lt;br /&gt;
    assignment = Assignment.find(self.parent_id)&lt;br /&gt;
    subject = &amp;quot;Message regarding #{deadline_type} for assignment #{assignment.name}&amp;quot;&lt;br /&gt;
    body = &amp;quot;This is a reminder to complete #{deadline_type} for assignment #{assignment.name}. \&lt;br /&gt;
    Deadline is #{self.due_at}.If you have already done the  #{deadline_type}, Please ignore this mail.&amp;quot;&lt;br /&gt;
    emails.each do |mail|&lt;br /&gt;
      Rails.logger.info mail&lt;br /&gt;
    end&lt;br /&gt;
    Mailer.delayed_message(bcc: emails, subject: subject, body: body).deliver_now&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # run the reminder job threshold hours before the due date&lt;br /&gt;
  def when_to_run_reminder&lt;br /&gt;
    hours_before_deadline = self.threshold.hours&lt;br /&gt;
    (self.due_at.in_time_zone - hours_before_deadline).to_datetime&lt;br /&gt;
  end&lt;br /&gt;
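The scheduling arithmetic used by when_to_run_reminder can be sketched in plain Ruby, with no delayed_job dependency ('''reminder_run_time''' is a hypothetical stand-in, not an Expertiza method; the 16-hour threshold is just an example value):

```ruby
require 'time'

# Mirror of when_to_run_reminder in plain Ruby: the job should run
# threshold_hours before the due date.
def reminder_run_time(due_at, threshold_hours)
  due_at - threshold_hours * 3600  # Time minus seconds
end

due_at = Time.utc(2021, 10, 25, 4, 8)
puts reminder_run_time(due_at, 16).iso8601  # 2021-10-24T12:08:00Z
```

The real method additionally converts to the application time zone and returns a DateTime, but the subtraction is the same.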
&lt;br /&gt;
==Automated Testing using RSPEC==&lt;br /&gt;
&lt;br /&gt;
We used RSpec to test the delayed_job functionality. Following a test-driven development (TDD) approach, we added an RSpec test that checks whether a call to the '''reminder''' method is enqueued when the '''start_reminder''' method fires, with the proper scheduled execution time.&lt;br /&gt;
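The assertion the spec makes can be sketched with a fake job queue in plain Ruby ('''FakeDueDate''' and '''FakeJob''' are illustrative names, not Expertiza classes):

```ruby
require 'time'

# Illustrative stand-ins for DueDate and Delayed::Job, showing what the
# RSpec test asserts: start_reminder enqueues exactly one reminder job
# whose run_at is threshold hours before the due date.
FakeJob = Struct.new(:method_name, :run_at)

class FakeDueDate
  attr_reader :queue

  def initialize(due_at, threshold_hours)
    @due_at = due_at
    @threshold = threshold_hours
    @queue = []
  end

  def when_to_run_reminder
    @due_at - @threshold * 3600
  end

  def start_reminder
    @queue.push(FakeJob.new(:reminder, when_to_run_reminder))
  end
end

due = FakeDueDate.new(Time.utc(2021, 10, 25, 4, 8), 16)
due.start_reminder
puts due.queue.size                 # 1
puts due.queue.first.run_at.iso8601 # 2021-10-24T12:08:00Z
```

The actual spec checks the Delayed::Job table instead of an in-memory array, but the enqueued method name and scheduled time are what it verifies.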
&lt;br /&gt;
     &lt;br /&gt;
'''NOTE''': All reminder mails except those for the reviewer are sent to '''''expertiza.development@gmail.com''''', as this address is already set in the development environment.&lt;br /&gt;
&lt;br /&gt;
==Steps to verify Functionality==&lt;br /&gt;
&lt;br /&gt;
*Test Email&lt;br /&gt;
 Email: expertiza_test123@outlook.com&lt;br /&gt;
 Password: password98@&lt;br /&gt;
&lt;br /&gt;
*Test Expertiza account&lt;br /&gt;
 name/login: Email_Test_ID1&lt;br /&gt;
 password: password98@&lt;br /&gt;
&lt;br /&gt;
1. Change an existing assignment's due date&lt;br /&gt;
First log in as instructor6, and go to Manage → Assignments&lt;br /&gt;
&lt;br /&gt;
  [[File:manage_page.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then click the edit icon in the right-side menu under '''Actions'''.&lt;br /&gt;
Go to Due dates, change the '''reminder''' threshold and '''Due &amp;amp; Time''', and click save.&lt;br /&gt;
&lt;br /&gt;
  [[File:edit_assignment.png|700px]]&lt;br /&gt;
&lt;br /&gt;
You should then receive an email at expertiza_test123@outlook.com at the appropriate time. In this example, 16 hours before 2021/10/25 04:08 (US Eastern time zone), the system sends a reminder email to expertiza_test123@outlook.com.&lt;br /&gt;
&lt;br /&gt;
2. Create a new assignment with valid due date&lt;br /&gt;
Create a new Assignment&lt;br /&gt;
&lt;br /&gt;
  [[File:create_assignment.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Assign a valid due date and reminder hours&lt;br /&gt;
&lt;br /&gt;
  [[File:new_date.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then add the previous user login (Email_Test_ID1) as a participant&lt;br /&gt;
&lt;br /&gt;
  [[File:add_participate.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Because the participant was added after the due date was set, we need to change the due date once more to trigger the email notification job.&lt;br /&gt;
&lt;br /&gt;
==Additional Links==&lt;br /&gt;
&lt;br /&gt;
*Git forked repository link: https://github.com/CuiJinku/expertiza/tree/beta&lt;br /&gt;
*VCL deployment: http://152.7.176.117:8080/&lt;br /&gt;
*Pull request link: https://github.com/expertiza/expertiza/pull/2099&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[https://github.com/CuiJinku/expertiza GitHub Project Repository Fork]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://relishapp.com/rspec Rspec Documentation]&lt;br /&gt;
&lt;br /&gt;
==Team==&lt;br /&gt;
[mailto:dli35@ncsu.edu Dong Li]&amp;lt;br&amp;gt;&lt;br /&gt;
[mailto:ldu2@ncsu.edu Liwen Du]&amp;lt;br&amp;gt;&lt;br /&gt;
[mailto:jcui23@ncsu.edu Jinku Cui]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021&amp;diff=140210</id>
		<title>CSC/ECE 517 Fall 2021</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021&amp;diff=140210"/>
		<updated>2021-10-27T03:08:29Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* OSS Projects */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== OSS Projects ==&lt;br /&gt;
&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2117. Refactor questionaires_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2128. Refactor student_quizzes_controller.rb &amp;amp; late_policies_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2129. Refactor auth_controller.rb &amp;amp; password_retrieval_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2132. Add tests cases for review mapping helper.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2134. Write unit tests for admin_controller.rb and institution_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2138. Auto-generate submission directory names based on assignment]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2142. Improve e-mail notifications]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2133. Write tests for popup_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2120. Refactor reputation_web_service_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2139. Remove multiple topics at a time]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2131. Improve assessment360_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2121. Refactor suggestion_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2122. Refactor impersonate_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2123. Refactor sign_up_sheet_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2126. Refactor account_request_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2124. Refactor review_mapping_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2125. Refactor review_mapping_helper.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2127. Refactor teams_controller]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2130. Refactor submitted_content_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2141. OSS project Finklestein: Instructors &amp;amp; Institutions]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2147. Role-based reviewing]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2145. OSS Project Beige]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2146. Introduce a Student View for instructors]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - Refactor Evaluation of SQL Queries]]&lt;br /&gt;
* [[CSC/ECE 517 Fall 2021 - E2135. Email notification to reviewers and instructors]]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2135_Email_notification_to_reviewers_and_instructors&amp;diff=140205</id>
		<title>CSC/ECE 517 Fall 2021 - E2135 Email notification to reviewers and instructors</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2135_Email_notification_to_reviewers_and_instructors&amp;diff=140205"/>
		<updated>2021-10-27T02:48:22Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* Pre-config */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a description of the Expertiza based OSS project.&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
==About Expertiza==&lt;br /&gt;
&lt;br /&gt;
[http://expertiza.ncsu.edu/ Expertiza] is an open-source project developed with the Ruby on Rails framework. Expertiza allows instructors to create new assignments and customize new or existing assignments. The application allows students to submit and peer-review learning objects (articles, code, websites, etc.)[1]. Expertiza supports submissions of various document types, including URLs and wiki pages.&lt;br /&gt;
&lt;br /&gt;
==Problem Statement==&lt;br /&gt;
&lt;br /&gt;
When an assignment or review approaches its deadline on Expertiza, students should receive a reminder email at a time before the deadline that the instructor has preconfigured. The lack of this functionality sometimes causes students to miss their submission deadlines and lose marks. This amendment therefore adds an asynchronous deadline-reminder mailer to the application.&lt;br /&gt;
&lt;br /&gt;
==Modified Files==&lt;br /&gt;
* app/models/due_date.rb&lt;br /&gt;
* test/models/due_date.rb&lt;br /&gt;
* db/migrate/20210319212323_create_delayed_jobs.rb&lt;br /&gt;
&lt;br /&gt;
==Pre-config==&lt;br /&gt;
&lt;br /&gt;
We use the '''delayed_job''' library to handle delayed email.&lt;br /&gt;
Run the following command to install the '''delayed_job''' binary executable.&lt;br /&gt;
&lt;br /&gt;
    rails generate delayed_job&lt;br /&gt;
&lt;br /&gt;
Then we need to run&lt;br /&gt;
&lt;br /&gt;
    RAILS_ENV=development bin/delayed_job start&lt;br /&gt;
&lt;br /&gt;
This starts a background worker that processes the delayed jobs.&lt;br /&gt;
To stop it, run the following.&lt;br /&gt;
&lt;br /&gt;
    RAILS_ENV=development bin/delayed_job stop&lt;br /&gt;
&lt;br /&gt;
==Implementation approach==&lt;br /&gt;
&lt;br /&gt;
'''1) Reminder email sent when assignment or review is approaching deadline:'''&lt;br /&gt;
In '''due_date.rb''', whenever a due date is created or updated, the '''start_reminder''' method fires and adds a job to the delayed_job queue. That job executes at a preconfigured time before the deadline, firing the '''reminder''' method, which is made asynchronous by the handle_asynchronously method of the '''delayed_job_active_record''' gem. Inside the reminder method we fetch three attributes: '''''assignment_id''''', '''''deadline_type''''', and '''''due_at'''''. These determine the deadline type (submission, review, or teammate review), the participant emails for the assignment, and the deadline threshold; the reminder email, sent at the threshold time before the deadline, contains the assignment name, a link to the assignment, and the deadline type.&lt;br /&gt;
&lt;br /&gt;
'''2) Implement code:''' &lt;br /&gt;
&lt;br /&gt;
  def create_mailer_object&lt;br /&gt;
    Mailer.new&lt;br /&gt;
  end&lt;br /&gt;
  def create_mailworker_object&lt;br /&gt;
    MailWorker.new(self.parent_id, self.deadline_type, self.due_at)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # main function to start email reminder&lt;br /&gt;
  def start_reminder&lt;br /&gt;
    puts when_to_run_reminder&lt;br /&gt;
    if self.changed?&lt;br /&gt;
      @extra_param = self.parent_id.to_s + &amp;quot;,&amp;quot; + self.deadline_type_id.to_s&lt;br /&gt;
      # first deleted existed delayed jobs with same parent_id(which is assignment id actually)&lt;br /&gt;
      Delayed::Job.where(extra_param: @extra_param).each do |job|&lt;br /&gt;
        job.delete&lt;br /&gt;
      end&lt;br /&gt;
      # add a delayed job to the delayed job queue, the job will run at what when_to_run_reminder return&lt;br /&gt;
      run_at_time = when_to_run_reminder&lt;br /&gt;
      if run_at_time &amp;gt;= 0.seconds.from_now&lt;br /&gt;
        self.delay(run_at: run_at_time, :extra_param =&amp;gt; @extra_param).reminder&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
  ####&lt;br /&gt;
  def reminder&lt;br /&gt;
    deadline_text = self.deadline_type if %w[submission review].include? self.deadline_type&lt;br /&gt;
    deadline_text = &amp;quot;Team Review&amp;quot; if self.deadline_type == 'metareview'&lt;br /&gt;
    mail_worker = create_mailworker_object&lt;br /&gt;
    email_reminder(mail_worker.find_participant_emails, deadline_text) unless mail_worker.find_participant_emails.empty?&lt;br /&gt;
  end&lt;br /&gt;
  ####&lt;br /&gt;
  def email_reminder(emails, deadline_type)&lt;br /&gt;
    assignment = Assignment.find(self.parent_id)&lt;br /&gt;
    subject = &amp;quot;Message regarding #{deadline_type} for assignment #{assignment.name}&amp;quot;&lt;br /&gt;
    body = &amp;quot;This is a reminder to complete #{deadline_type} for assignment #{assignment.name}. \&lt;br /&gt;
    Deadline is #{self.due_at}.If you have already done the  #{deadline_type}, Please ignore this mail.&amp;quot;&lt;br /&gt;
    emails.each do |mail|&lt;br /&gt;
      Rails.logger.info mail&lt;br /&gt;
    end&lt;br /&gt;
    Mailer.delayed_message(bcc: emails, subject: subject, body: body).deliver_now&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # run the reminder job threshold hours before the due date&lt;br /&gt;
  def when_to_run_reminder&lt;br /&gt;
    hours_before_deadline = self.threshold.hours&lt;br /&gt;
    (self.due_at.in_time_zone - hours_before_deadline).to_datetime&lt;br /&gt;
  end&lt;br /&gt;
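The dedup step in start_reminder (deleting queued jobs sharing the same extra_param before enqueuing a fresh one) can be sketched with a plain array as the queue ('''Job''' and '''reschedule''' are illustrative names, not Expertiza code):

```ruby
# Each queued job is tagged with extra_param = "parent_id,deadline_type_id".
# Before scheduling a new reminder, start_reminder removes stale jobs with
# the same tag so only the latest due date is honored.
Job = Struct.new(:extra_param, :run_at)

def reschedule(queue, extra_param, run_at)
  queue.reject! { |job| job.extra_param == extra_param }
  queue.push(Job.new(extra_param, run_at))
  queue
end

queue = [Job.new('1,1', 'old_time'), Job.new('2,1', 'other')]
reschedule(queue, '1,1', 'new_time')
puts queue.size                  # 2
puts queue.last.run_at           # new_time
```

In the real code the queue is the Delayed::Job table and the deletion is the `Delayed::Job.where(extra_param: ...)` loop shown above.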
&lt;br /&gt;
==Automated Testing using RSPEC==&lt;br /&gt;
&lt;br /&gt;
We used RSpec to test the delayed_job functionality. Following a test-driven development (TDD) approach, we added an RSpec test that checks whether a call to the '''reminder''' method is enqueued when the '''start_reminder''' method fires, with the proper scheduled execution time.&lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
'''NOTE''': All reminder mails except those for the reviewer are sent to '''''expertiza.development@gmail.com''''', as this address is already set in the development environment.&lt;br /&gt;
&lt;br /&gt;
==Steps to verify Functionality==&lt;br /&gt;
&lt;br /&gt;
*Test Email&lt;br /&gt;
 Email: expertiza_test123@outlook.com&lt;br /&gt;
 Password: password98@&lt;br /&gt;
&lt;br /&gt;
*Test Expertiza account&lt;br /&gt;
 name/login: Email_Test_ID1&lt;br /&gt;
 password: password98@&lt;br /&gt;
&lt;br /&gt;
1. Change an existing assignment's due date&lt;br /&gt;
First log in as instructor6, and go to Manage → Assignments&lt;br /&gt;
&lt;br /&gt;
  [[File:manage_page.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then click the edit icon in the right-side menu under '''Actions'''.&lt;br /&gt;
Go to Due dates, change the '''reminder''' threshold and '''Due &amp;amp; Time''', and click save.&lt;br /&gt;
&lt;br /&gt;
  [[File:edit_assignment.png|700px]]&lt;br /&gt;
&lt;br /&gt;
You should then receive an email at expertiza_test123@outlook.com at the appropriate time. In this example, 16 hours before 2021/10/25 04:08 (US Eastern time zone), the system sends a reminder email to expertiza_test123@outlook.com.&lt;br /&gt;
&lt;br /&gt;
2. Create a new assignment with valid due date&lt;br /&gt;
Create a new Assignment&lt;br /&gt;
&lt;br /&gt;
  [[File:create_assignment.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Assign a valid due date and reminder hours&lt;br /&gt;
&lt;br /&gt;
  [[File:new_date.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then add the previous user login (Email_Test_ID1) as a participant&lt;br /&gt;
&lt;br /&gt;
  [[File:add_participate.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Because the participant was added after the due date was set, we need to change the due date once more to trigger the email notification job.&lt;br /&gt;
&lt;br /&gt;
==Additional Links==&lt;br /&gt;
&lt;br /&gt;
*Git forked repository link: https://github.com/CuiJinku/expertiza/tree/beta&lt;br /&gt;
*VCL deployment: http://152.7.176.117:8080/&lt;br /&gt;
*Pull request link: https://github.com/expertiza/expertiza/pull/2099&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[https://github.com/CuiJinku/expertiza GitHub Project Repository Fork]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://relishapp.com/rspec Rspec Documentation]&lt;br /&gt;
&lt;br /&gt;
==Team==&lt;br /&gt;
[mailto:dli35@ncsu.edu Dong Li]&amp;lt;br&amp;gt;&lt;br /&gt;
[mailto:ldu2@ncsu.edu Liwen Du]&amp;lt;br&amp;gt;&lt;br /&gt;
[mailto:jcui23@ncsu.edu Jinku Cui]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2135_Email_notification_to_reviewers_and_instructors&amp;diff=140204</id>
		<title>CSC/ECE 517 Fall 2021 - E2135 Email notification to reviewers and instructors</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2135_Email_notification_to_reviewers_and_instructors&amp;diff=140204"/>
		<updated>2021-10-27T02:47:50Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* Additional Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a description of the Expertiza based OSS project.&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
==About Expertiza==&lt;br /&gt;
&lt;br /&gt;
[http://expertiza.ncsu.edu/ Expertiza] is an open-source project developed with the Ruby on Rails framework. Expertiza allows instructors to create new assignments and customize new or existing assignments. The application allows students to submit and peer-review learning objects (articles, code, websites, etc.)[1]. Expertiza supports submissions of various document types, including URLs and wiki pages.&lt;br /&gt;
&lt;br /&gt;
==Problem Statement==&lt;br /&gt;
&lt;br /&gt;
When an assignment or review approaches its deadline on Expertiza, students should receive a reminder email at a time before the deadline that the instructor has preconfigured. The lack of this functionality sometimes causes students to miss their submission deadlines and lose marks. This amendment therefore adds an asynchronous deadline-reminder mailer to the application.&lt;br /&gt;
&lt;br /&gt;
==Modified Files==&lt;br /&gt;
* app/models/due_date.rb&lt;br /&gt;
* test/models/due_date.rb&lt;br /&gt;
* db/migrate/20210319212323_create_delayed_jobs.rb&lt;br /&gt;
&lt;br /&gt;
==Pre-config==&lt;br /&gt;
&lt;br /&gt;
We use the '''delayed_job''' library to handle delayed email.&lt;br /&gt;
Run the following command to install the '''delayed_job''' binary executable.&lt;br /&gt;
&lt;br /&gt;
    rails generate delayed_job&lt;br /&gt;
&lt;br /&gt;
Then we need to run&lt;br /&gt;
&lt;br /&gt;
    RAILS_ENV=development bin/delayed_job start&lt;br /&gt;
&lt;br /&gt;
This starts a background worker that processes the delayed jobs.&lt;br /&gt;
To stop it, run the following.&lt;br /&gt;
&lt;br /&gt;
    RAILS_ENV=development bin/delayed_job stop&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Implementation approach==&lt;br /&gt;
&lt;br /&gt;
'''1) Reminder email sent when assignment or review is approaching deadline:'''&lt;br /&gt;
In '''due_date.rb''', whenever a due date is created or updated, the '''start_reminder''' method fires and adds a job to the delayed_job queue. That job executes at a preconfigured time before the deadline, firing the '''reminder''' method, which is made asynchronous by the handle_asynchronously method of the '''delayed_job_active_record''' gem. Inside the reminder method we fetch three attributes: '''''assignment_id''''', '''''deadline_type''''', and '''''due_at'''''. These determine the deadline type (submission, review, or teammate review), the participant emails for the assignment, and the deadline threshold; the reminder email, sent at the threshold time before the deadline, contains the assignment name, a link to the assignment, and the deadline type.&lt;br /&gt;
&lt;br /&gt;
'''2) Implement code:''' &lt;br /&gt;
&lt;br /&gt;
  def create_mailer_object&lt;br /&gt;
    Mailer.new&lt;br /&gt;
  end&lt;br /&gt;
  def create_mailworker_object&lt;br /&gt;
    MailWorker.new(self.parent_id, self.deadline_type, self.due_at)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # main function to start email reminder&lt;br /&gt;
  def start_reminder&lt;br /&gt;
    puts when_to_run_reminder&lt;br /&gt;
    if self.changed?&lt;br /&gt;
      @extra_param = self.parent_id.to_s + &amp;quot;,&amp;quot; + self.deadline_type_id.to_s&lt;br /&gt;
      # first deleted existed delayed jobs with same parent_id(which is assignment id actually)&lt;br /&gt;
      Delayed::Job.where(extra_param: @extra_param).each do |job|&lt;br /&gt;
        job.delete&lt;br /&gt;
      end&lt;br /&gt;
      # add a delayed job to the delayed job queue, the job will run at what when_to_run_reminder return&lt;br /&gt;
      run_at_time = when_to_run_reminder&lt;br /&gt;
      if run_at_time &amp;gt;= 0.seconds.from_now&lt;br /&gt;
        self.delay(run_at: run_at_time, :extra_param =&amp;gt; @extra_param).reminder&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
  ####&lt;br /&gt;
  def reminder&lt;br /&gt;
    deadline_text = self.deadline_type if %w[submission review].include? self.deadline_type&lt;br /&gt;
    deadline_text = &amp;quot;Team Review&amp;quot; if self.deadline_type == 'metareview'&lt;br /&gt;
    mail_worker = create_mailworker_object&lt;br /&gt;
    email_reminder(mail_worker.find_participant_emails, deadline_text) unless mail_worker.find_participant_emails.empty?&lt;br /&gt;
  end&lt;br /&gt;
  ####&lt;br /&gt;
  def email_reminder(emails, deadline_type)&lt;br /&gt;
    assignment = Assignment.find(self.parent_id)&lt;br /&gt;
    subject = &amp;quot;Message regarding #{deadline_type} for assignment #{assignment.name}&amp;quot;&lt;br /&gt;
    body = &amp;quot;This is a reminder to complete #{deadline_type} for assignment #{assignment.name}. \&lt;br /&gt;
    Deadline is #{self.due_at}.If you have already done the  #{deadline_type}, Please ignore this mail.&amp;quot;&lt;br /&gt;
    emails.each do |mail|&lt;br /&gt;
      Rails.logger.info mail&lt;br /&gt;
    end&lt;br /&gt;
    Mailer.delayed_message(bcc: emails, subject: subject, body: body).deliver_now&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # run the reminder job threshold hours before the due date&lt;br /&gt;
  def when_to_run_reminder&lt;br /&gt;
    hours_before_deadline = self.threshold.hours&lt;br /&gt;
    (self.due_at.in_time_zone - hours_before_deadline).to_datetime&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
==Automated Testing using RSPEC==&lt;br /&gt;
&lt;br /&gt;
We used RSpec to test the delayed_job functionality. Following a test-driven development (TDD) approach, we added an RSpec test that checks whether a call to the '''reminder''' method is enqueued when the '''start_reminder''' method fires, with the proper scheduled execution time.&lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
'''NOTE''': All reminder mails except those for the reviewer are sent to '''''expertiza.development@gmail.com''''', as this address is already set in the development environment.&lt;br /&gt;
&lt;br /&gt;
==Steps to verify Functionality==&lt;br /&gt;
&lt;br /&gt;
*Test Email&lt;br /&gt;
 Email: expertiza_test123@outlook.com&lt;br /&gt;
 Password: password98@&lt;br /&gt;
&lt;br /&gt;
*Test Expertiza account&lt;br /&gt;
 name/login: Email_Test_ID1&lt;br /&gt;
 password: password98@&lt;br /&gt;
&lt;br /&gt;
1. Change the existing assignment's due date&lt;br /&gt;
First log in as instructor6, and go to Manage → Assignments.&lt;br /&gt;
&lt;br /&gt;
  [[File:manage_page.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then click the edit link in the right-side menu under '''Actions'''.&lt;br /&gt;
Go to the due dates, change the '''reminder''' and '''Due &amp;amp; Time''' fields, and click save.&lt;br /&gt;
&lt;br /&gt;
  [[File:edit_assignment.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then you should receive an email at expertiza_test123@outlook.com at the appropriate time. For this example, 16 hours before 2021/10/25 04:08 (US Eastern time zone), the system will send a reminder email to expertiza_test123@outlook.com.&lt;br /&gt;
&lt;br /&gt;
2. Create a new assignment with a valid due date&lt;br /&gt;
Create a new assignment.&lt;br /&gt;
&lt;br /&gt;
  [[File:create_assignment.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Assign a valid due date and reminder hours&lt;br /&gt;
&lt;br /&gt;
  [[File:new_date.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then add the test Expertiza account above (Email_Test_ID1) as a participant.&lt;br /&gt;
&lt;br /&gt;
  [[File:add_participate.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Because the participant was added after the due date was set, we need to change the due date once more to trigger the email notification jobs.&lt;br /&gt;
&lt;br /&gt;
==Additional Links==&lt;br /&gt;
&lt;br /&gt;
*Git forked repository link: https://github.com/CuiJinku/expertiza/tree/beta&lt;br /&gt;
*VCL deployment: http://152.7.176.117:8080/&lt;br /&gt;
*Pull request link: https://github.com/expertiza/expertiza/pull/2099&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[https://github.com/CuiJinku/expertiza GitHub Project Repository Fork]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://relishapp.com/rspec Rspec Documentation]&lt;br /&gt;
&lt;br /&gt;
==Team==&lt;br /&gt;
[mailto:dli35@ncsu.edu Dong Li]&amp;lt;br&amp;gt;&lt;br /&gt;
[mailto:ldu2@ncsu.edu Liwen Du]&amp;lt;br&amp;gt;&lt;br /&gt;
[mailto:jcui23@ncsu.edu Jinku Cui]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2135_Email_notification_to_reviewers_and_instructors&amp;diff=140203</id>
		<title>CSC/ECE 517 Fall 2021 - E2135 Email notification to reviewers and instructors</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2135_Email_notification_to_reviewers_and_instructors&amp;diff=140203"/>
		<updated>2021-10-27T02:46:51Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* Automated Testing using RSPEC */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a description of the Expertiza based OSS project.&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
==About Expertiza==&lt;br /&gt;
&lt;br /&gt;
[http://expertiza.ncsu.edu/ Expertiza] is an open-source project developed using the Ruby on Rails framework. Expertiza allows the instructor to create new assignments and customize new or existing assignments. The application allows students to submit and peer-review learning objects (articles, code, websites, etc.)[1]. Expertiza supports submissions of various document types, including URLs and wiki pages.&lt;br /&gt;
&lt;br /&gt;
==Problem Statement==&lt;br /&gt;
&lt;br /&gt;
When an assignment or review approaches its deadline on Expertiza, students should receive reminder emails at a time before the deadline that the instructor has preconfigured. The lack of this functionality sometimes results in students missing their assignment submission deadlines and thus losing marks. This project therefore adds an asynchronous deadline reminder mailer to the application.&lt;br /&gt;
&lt;br /&gt;
==Modified Files==&lt;br /&gt;
* app/models/due_date.rb&lt;br /&gt;
* test/models/due_date.rb&lt;br /&gt;
* db/migrate/20210319212323_create_delayed_jobs.rb&lt;br /&gt;
&lt;br /&gt;
==Pre-config==&lt;br /&gt;
&lt;br /&gt;
We use the delayed_job library to handle the delayed email.&lt;br /&gt;
We need to run the following command to install the delayed_job binary executable.&lt;br /&gt;
&lt;br /&gt;
    rails generate delayed_job&lt;br /&gt;
&lt;br /&gt;
Then we need to run&lt;br /&gt;
&lt;br /&gt;
    RAILS_ENV=development bin/delayed_job start&lt;br /&gt;
&lt;br /&gt;
This starts a background worker that processes the delayed jobs.&lt;br /&gt;
To stop it, run the following.&lt;br /&gt;
&lt;br /&gt;
    RAILS_ENV=development bin/delayed_job stop&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Implementation approach==&lt;br /&gt;
&lt;br /&gt;
'''1) Reminder email sent when assignment or review is approaching deadline:'''&lt;br /&gt;
In the '''due_date.rb''' file, whenever a new due date is created or an existing due date is updated, the '''start_reminder''' method is fired and adds a job to the delayed_job queue. This job is executed at a preconfigured time before the deadline, where it fires the '''reminder''' method, which is added to the delayed job queue by the handle_asynchronously method of the '''delayed_job_active_record''' gem. Inside the reminder method, we fetch three attributes: '''''assignment_id''''', '''''deadline_type''''', and '''''due_at'''''. These attributes are used to decide the deadline type (submission, review, or teammate review), fetch the participant emails for that assignment, and fetch the deadline threshold; finally, the reminder email is sent at the specified threshold time before the deadline, containing details such as the assignment name, a link to the assignment, and the deadline type (submission, review, or teammate review).&lt;br /&gt;
&lt;br /&gt;
'''2) Implement code:''' &lt;br /&gt;
&lt;br /&gt;
  def create_mailer_object&lt;br /&gt;
    Mailer.new&lt;br /&gt;
  end&lt;br /&gt;
  def create_mailworker_object&lt;br /&gt;
    MailWorker.new(self.parent_id, self.deadline_type, self.due_at)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # main function to start email reminder&lt;br /&gt;
  def start_reminder&lt;br /&gt;
    puts when_to_run_reminder&lt;br /&gt;
    if self.changed?&lt;br /&gt;
      @extra_param = self.parent_id.to_s + &amp;quot;,&amp;quot; + self.deadline_type_id.to_s&lt;br /&gt;
      # first delete any existing delayed jobs with the same parent_id (which is actually the assignment id)&lt;br /&gt;
      Delayed::Job.where(extra_param: @extra_param).each do |job|&lt;br /&gt;
        job.delete&lt;br /&gt;
      end&lt;br /&gt;
      # enqueue a delayed job that will run at the time when_to_run_reminder returns&lt;br /&gt;
      run_at_time = when_to_run_reminder&lt;br /&gt;
      if run_at_time &amp;gt;= 0.seconds.from_now&lt;br /&gt;
        self.delay(run_at: run_at_time, extra_param: @extra_param).reminder&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
  ####&lt;br /&gt;
  def reminder&lt;br /&gt;
    deadline_text = self.deadline_type if %w[submission review].include? self.deadline_type&lt;br /&gt;
    deadline_text = &amp;quot;Team Review&amp;quot; if self.deadline_type == 'metareview'&lt;br /&gt;
    mail_worker = create_mailworker_object&lt;br /&gt;
    email_reminder(mail_worker.find_participant_emails, deadline_text) unless mail_worker.find_participant_emails.empty?&lt;br /&gt;
  end&lt;br /&gt;
  ####&lt;br /&gt;
  def email_reminder(emails, deadline_type)&lt;br /&gt;
    assignment = Assignment.find(self.parent_id)&lt;br /&gt;
    subject = &amp;quot;Message regarding #{deadline_type} for assignment #{assignment.name}&amp;quot;&lt;br /&gt;
    body = &amp;quot;This is a reminder to complete #{deadline_type} for assignment #{assignment.name}. \&lt;br /&gt;
    Deadline is #{self.due_at}. If you have already done the #{deadline_type}, please ignore this mail.&amp;quot;&lt;br /&gt;
    emails.each do |mail|&lt;br /&gt;
      Rails.logger.info mail&lt;br /&gt;
    end&lt;br /&gt;
    Mailer.delayed_message(bcc: emails, subject: subject, body: body).deliver_now&lt;br /&gt;
  end&lt;br /&gt;
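The subject and body built in email_reminder are plain string interpolation, so the composition can be illustrated outside Rails. This is a sketch only; reminder_message and all the argument values are made up for illustration:

```ruby
# Standalone sketch of how email_reminder composes the message fields.
# `reminder_message` and its argument values are illustrative, not Expertiza code.
def reminder_message(assignment_name, deadline_type, due_at)
  {
    subject: "Message regarding #{deadline_type} for assignment #{assignment_name}",
    body: "This is a reminder to complete #{deadline_type} for assignment " \
          "#{assignment_name}. Deadline is #{due_at}. If you have already " \
          "done the #{deadline_type}, please ignore this mail."
  }
end

msg = reminder_message("OSS Project", "submission", "2021-10-25 04:08")
puts msg[:subject] # prints: Message regarding submission for assignment OSS Project
```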
&lt;br /&gt;
  # the reminder job should run at the due date minus the threshold hours&lt;br /&gt;
  def when_to_run_reminder&lt;br /&gt;
    hours_before_deadline = self.threshold.hours&lt;br /&gt;
    (self.due_at.in_time_zone - hours_before_deadline).to_datetime&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
==Automated Testing using RSPEC==&lt;br /&gt;
&lt;br /&gt;
We used RSpec to test the delayed_job functionality. Following the test-driven development (TDD) approach, we added an RSpec test that checks whether the mail is enqueued when the '''reminder''' method fires.&lt;br /&gt;
&lt;br /&gt;
We also added an RSpec test that checks whether a call to the '''reminder''' method is enqueued, with the proper scheduled execution time, when the '''start_reminder''' method fires.&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': All the reminder mails except the ones for the reviewer are sent to '''''expertiza.development@gmail.com''''', as this address is already set in the development environment.&lt;br /&gt;
&lt;br /&gt;
==Steps to verify Functionality==&lt;br /&gt;
&lt;br /&gt;
*Test Email&lt;br /&gt;
 Email: expertiza_test123@outlook.com&lt;br /&gt;
 Password: password98@&lt;br /&gt;
&lt;br /&gt;
*Test Expertiza account&lt;br /&gt;
 name/login: Email_Test_ID1&lt;br /&gt;
 password: password98@&lt;br /&gt;
&lt;br /&gt;
1. Change the existing assignment's due date&lt;br /&gt;
First log in as instructor6, and go to Manage → Assignments.&lt;br /&gt;
&lt;br /&gt;
  [[File:manage_page.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then click the edit link in the right-side menu under '''Actions'''.&lt;br /&gt;
Go to the due dates, change the '''reminder''' and '''Due &amp;amp; Time''' fields, and click save.&lt;br /&gt;
&lt;br /&gt;
  [[File:edit_assignment.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then you should receive an email at expertiza_test123@outlook.com at the appropriate time. For this example, 16 hours before 2021/10/25 04:08 (US Eastern time zone), the system will send a reminder email to expertiza_test123@outlook.com.&lt;br /&gt;
&lt;br /&gt;
2. Create a new assignment with a valid due date&lt;br /&gt;
Create a new assignment.&lt;br /&gt;
&lt;br /&gt;
  [[File:create_assignment.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Assign a valid due date and reminder hours&lt;br /&gt;
&lt;br /&gt;
  [[File:new_date.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then add the test Expertiza account above (Email_Test_ID1) as a participant.&lt;br /&gt;
&lt;br /&gt;
  [[File:add_participate.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Because the participant was added after the due date was set, we need to change the due date once more to trigger the email notification jobs.&lt;br /&gt;
&lt;br /&gt;
==Additional Links==&lt;br /&gt;
&lt;br /&gt;
*Git forked repository link: https://github.com/CuiJinku/expertiza/tree/beta&lt;br /&gt;
*VCL deployment: http://152.7.176.117:8080/&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[https://github.com/CuiJinku/expertiza GitHub Project Repository Fork]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://relishapp.com/rspec Rspec Documentation]&lt;br /&gt;
&lt;br /&gt;
==Team==&lt;br /&gt;
[mailto:dli35@ncsu.edu Dong Li]&amp;lt;br&amp;gt;&lt;br /&gt;
[mailto:ldu2@ncsu.edu Liwen Du]&amp;lt;br&amp;gt;&lt;br /&gt;
[mailto:jcui23@ncsu.edu Jinku Cui]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2135_Email_notification_to_reviewers_and_instructors&amp;diff=140202</id>
		<title>CSC/ECE 517 Fall 2021 - E2135 Email notification to reviewers and instructors</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2135_Email_notification_to_reviewers_and_instructors&amp;diff=140202"/>
		<updated>2021-10-27T02:46:20Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* Automated Testing using RSPEC */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a description of the Expertiza based OSS project.&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
==About Expertiza==&lt;br /&gt;
&lt;br /&gt;
[http://expertiza.ncsu.edu/ Expertiza] is an open-source project developed using the Ruby on Rails framework. Expertiza allows the instructor to create new assignments and customize new or existing assignments. The application allows students to submit and peer-review learning objects (articles, code, websites, etc.)[1]. Expertiza supports submissions of various document types, including URLs and wiki pages.&lt;br /&gt;
&lt;br /&gt;
==Problem Statement==&lt;br /&gt;
&lt;br /&gt;
When an assignment or review approaches its deadline on Expertiza, students should receive reminder emails at a time before the deadline that the instructor has preconfigured. The lack of this functionality sometimes results in students missing their assignment submission deadlines and thus losing marks. This project therefore adds an asynchronous deadline reminder mailer to the application.&lt;br /&gt;
&lt;br /&gt;
==Modified Files==&lt;br /&gt;
* app/models/due_date.rb&lt;br /&gt;
* test/models/due_date.rb&lt;br /&gt;
* db/migrate/20210319212323_create_delayed_jobs.rb&lt;br /&gt;
&lt;br /&gt;
==Pre-config==&lt;br /&gt;
&lt;br /&gt;
We use the delayed_job library to handle the delayed email.&lt;br /&gt;
We need to run the following command to install the delayed_job binary executable.&lt;br /&gt;
&lt;br /&gt;
    rails generate delayed_job&lt;br /&gt;
&lt;br /&gt;
Then we need to run&lt;br /&gt;
&lt;br /&gt;
    RAILS_ENV=development bin/delayed_job start&lt;br /&gt;
&lt;br /&gt;
This starts a background worker that processes the delayed jobs.&lt;br /&gt;
To stop it, run the following.&lt;br /&gt;
&lt;br /&gt;
    RAILS_ENV=development bin/delayed_job stop&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Implementation approach==&lt;br /&gt;
&lt;br /&gt;
'''1) Reminder email sent when assignment or review is approaching deadline:'''&lt;br /&gt;
In the '''due_date.rb''' file, whenever a new due date is created or an existing due date is updated, the '''start_reminder''' method is fired and adds a job to the delayed_job queue. This job is executed at a preconfigured time before the deadline, where it fires the '''reminder''' method, which is added to the delayed job queue by the handle_asynchronously method of the '''delayed_job_active_record''' gem. Inside the reminder method, we fetch three attributes: '''''assignment_id''''', '''''deadline_type''''', and '''''due_at'''''. These attributes are used to decide the deadline type (submission, review, or teammate review), fetch the participant emails for that assignment, and fetch the deadline threshold; finally, the reminder email is sent at the specified threshold time before the deadline, containing details such as the assignment name, a link to the assignment, and the deadline type (submission, review, or teammate review).&lt;br /&gt;
&lt;br /&gt;
'''2) Implement code:''' &lt;br /&gt;
&lt;br /&gt;
  def create_mailer_object&lt;br /&gt;
    Mailer.new&lt;br /&gt;
  end&lt;br /&gt;
  def create_mailworker_object&lt;br /&gt;
    MailWorker.new(self.parent_id, self.deadline_type, self.due_at)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # main function to start email reminder&lt;br /&gt;
  def start_reminder&lt;br /&gt;
    puts when_to_run_reminder&lt;br /&gt;
    if self.changed?&lt;br /&gt;
      @extra_param = self.parent_id.to_s + &amp;quot;,&amp;quot; + self.deadline_type_id.to_s&lt;br /&gt;
      # first delete any existing delayed jobs with the same parent_id (which is actually the assignment id)&lt;br /&gt;
      Delayed::Job.where(extra_param: @extra_param).each do |job|&lt;br /&gt;
        job.delete&lt;br /&gt;
      end&lt;br /&gt;
      # enqueue a delayed job that will run at the time when_to_run_reminder returns&lt;br /&gt;
      run_at_time = when_to_run_reminder&lt;br /&gt;
      if run_at_time &amp;gt;= 0.seconds.from_now&lt;br /&gt;
        self.delay(run_at: run_at_time, extra_param: @extra_param).reminder&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
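The delete-then-enqueue pattern in start_reminder keeps at most one pending job per assignment/deadline pair. The idea can be simulated with an in-memory queue (a sketch only; FakeJobQueue is made up here, and the real code goes through Delayed::Job):

```ruby
# In-memory simulation of start_reminder's dedup pattern: any pending job
# keyed by the same "parent_id,deadline_type_id" string is replaced, so at
# most one reminder stays scheduled per assignment deadline.
class FakeJobQueue
  def initialize
    @jobs = []
  end

  def enqueue(extra_param, run_at)
    @jobs.reject! { |j| j[:extra_param] == extra_param } # drop stale jobs first
    @jobs.push(extra_param: extra_param, run_at: run_at)
  end

  def pending(extra_param)
    @jobs.select { |j| j[:extra_param] == extra_param }
  end
end

queue = FakeJobQueue.new
queue.enqueue("1,2", "2021-10-24 12:08")
queue.enqueue("1,2", "2021-10-26 12:08") # due date changed: replaces the old job
puts queue.pending("1,2").length # prints 1
```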
  ####&lt;br /&gt;
  def reminder&lt;br /&gt;
    deadline_text = self.deadline_type if %w[submission review].include? self.deadline_type&lt;br /&gt;
    deadline_text = &amp;quot;Team Review&amp;quot; if self.deadline_type == 'metareview'&lt;br /&gt;
    mail_worker = create_mailworker_object&lt;br /&gt;
    email_reminder(mail_worker.find_participant_emails, deadline_text) unless mail_worker.find_participant_emails.empty?&lt;br /&gt;
  end&lt;br /&gt;
  ####&lt;br /&gt;
  def email_reminder(emails, deadline_type)&lt;br /&gt;
    assignment = Assignment.find(self.parent_id)&lt;br /&gt;
    subject = &amp;quot;Message regarding #{deadline_type} for assignment #{assignment.name}&amp;quot;&lt;br /&gt;
    body = &amp;quot;This is a reminder to complete #{deadline_type} for assignment #{assignment.name}. \&lt;br /&gt;
    Deadline is #{self.due_at}. If you have already done the #{deadline_type}, please ignore this mail.&amp;quot;&lt;br /&gt;
    emails.each do |mail|&lt;br /&gt;
      Rails.logger.info mail&lt;br /&gt;
    end&lt;br /&gt;
    Mailer.delayed_message(bcc: emails, subject: subject, body: body).deliver_now&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # the reminder job should run at the due date minus the threshold hours&lt;br /&gt;
  def when_to_run_reminder&lt;br /&gt;
    hours_before_deadline = self.threshold.hours&lt;br /&gt;
    (self.due_at.in_time_zone - hours_before_deadline).to_datetime&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
==Automated Testing using RSPEC==&lt;br /&gt;
&lt;br /&gt;
We used RSpec to test the delayed_job functionality. Following the test-driven development (TDD) approach, we added an RSpec test that checks whether the mail is enqueued when the '''reminder''' method fires.&lt;br /&gt;
&lt;br /&gt;
We also added an RSpec test that checks whether a call to the '''reminder''' method is enqueued, with the proper scheduled execution time, when the '''start_reminder''' method fires.&lt;br /&gt;
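One property worth pinning down in such tests is that start_reminder only schedules a job when the computed run time is still in the future (run_at_time is compared against Rails' 0.seconds.from_now, i.e. the current time). The check reduces to a simple time comparison, sketched here with a hypothetical should_schedule? helper rather than the actual model code:

```ruby
require 'time'

# Sketch of start_reminder's guard: schedule the reminder only if its
# run time has not already passed. `now` is injectable so tests can pin it.
def should_schedule?(run_at, now = Time.now)
  run_at >= now
end

now = Time.parse("2021-10-24 00:00:00 -0400")
puts should_schedule?(Time.parse("2021-10-24 12:08:00 -0400"), now) # true: still ahead
```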
&lt;br /&gt;
NOTE: All the reminder mails except the ones for the reviewer are sent to expertiza.development@gmail.com, as this address is already set in the development environment.&lt;br /&gt;
&lt;br /&gt;
==Steps to verify Functionality==&lt;br /&gt;
&lt;br /&gt;
*Test Email&lt;br /&gt;
 Email: expertiza_test123@outlook.com&lt;br /&gt;
 Password: password98@&lt;br /&gt;
&lt;br /&gt;
*Test Expertiza account&lt;br /&gt;
 name/login: Email_Test_ID1&lt;br /&gt;
 password: password98@&lt;br /&gt;
&lt;br /&gt;
1. Change the existing assignment's due date&lt;br /&gt;
First log in as instructor6, and go to Manage → Assignments.&lt;br /&gt;
&lt;br /&gt;
  [[File:manage_page.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then click the edit link in the right-side menu under '''Actions'''.&lt;br /&gt;
Go to the due dates, change the '''reminder''' and '''Due &amp;amp; Time''' fields, and click save.&lt;br /&gt;
&lt;br /&gt;
  [[File:edit_assignment.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then you should receive an email at expertiza_test123@outlook.com at the appropriate time. For this example, 16 hours before 2021/10/25 04:08 (US Eastern time zone), the system will send a reminder email to expertiza_test123@outlook.com.&lt;br /&gt;
&lt;br /&gt;
2. Create a new assignment with a valid due date&lt;br /&gt;
Create a new assignment.&lt;br /&gt;
&lt;br /&gt;
  [[File:create_assignment.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Assign a valid due date and reminder hours&lt;br /&gt;
&lt;br /&gt;
  [[File:new_date.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then add the test Expertiza account above (Email_Test_ID1) as a participant.&lt;br /&gt;
&lt;br /&gt;
  [[File:add_participate.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Because the participant was added after the due date was set, we need to change the due date once more to trigger the email notification jobs.&lt;br /&gt;
&lt;br /&gt;
==Additional Links==&lt;br /&gt;
&lt;br /&gt;
*Git forked repository link: https://github.com/CuiJinku/expertiza/tree/beta&lt;br /&gt;
*VCL deployment: http://152.7.176.117:8080/&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[https://github.com/CuiJinku/expertiza GitHub Project Repository Fork]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://relishapp.com/rspec Rspec Documentation]&lt;br /&gt;
&lt;br /&gt;
==Team==&lt;br /&gt;
[mailto:dli35@ncsu.edu Dong Li]&amp;lt;br&amp;gt;&lt;br /&gt;
[mailto:ldu2@ncsu.edu Liwen Du]&amp;lt;br&amp;gt;&lt;br /&gt;
[mailto:jcui23@ncsu.edu Jinku Cui]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2135_Email_notification_to_reviewers_and_instructors&amp;diff=140201</id>
		<title>CSC/ECE 517 Fall 2021 - E2135 Email notification to reviewers and instructors</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2135_Email_notification_to_reviewers_and_instructors&amp;diff=140201"/>
		<updated>2021-10-27T02:45:42Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* Implementation approach */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a description of the Expertiza based OSS project.&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
==About Expertiza==&lt;br /&gt;
&lt;br /&gt;
[http://expertiza.ncsu.edu/ Expertiza] is an open-source project developed using the Ruby on Rails framework. Expertiza allows the instructor to create new assignments and customize new or existing assignments. The application allows students to submit and peer-review learning objects (articles, code, websites, etc.)[1]. Expertiza supports submissions of various document types, including URLs and wiki pages.&lt;br /&gt;
&lt;br /&gt;
==Problem Statement==&lt;br /&gt;
&lt;br /&gt;
When an assignment or review approaches its deadline on Expertiza, students should receive reminder emails at a time before the deadline that the instructor has preconfigured. The lack of this functionality sometimes results in students missing their assignment submission deadlines and thus losing marks. This project therefore adds an asynchronous deadline reminder mailer to the application.&lt;br /&gt;
&lt;br /&gt;
==Modified Files==&lt;br /&gt;
* app/models/due_date.rb&lt;br /&gt;
* test/models/due_date.rb&lt;br /&gt;
* db/migrate/20210319212323_create_delayed_jobs.rb&lt;br /&gt;
&lt;br /&gt;
==Pre-config==&lt;br /&gt;
&lt;br /&gt;
We use the delayed_job library to handle the delayed email.&lt;br /&gt;
We need to run the following command to install the delayed_job binary executable.&lt;br /&gt;
&lt;br /&gt;
    rails generate delayed_job&lt;br /&gt;
&lt;br /&gt;
Then we need to run&lt;br /&gt;
&lt;br /&gt;
    RAILS_ENV=development bin/delayed_job start&lt;br /&gt;
&lt;br /&gt;
This starts a background worker that processes the delayed jobs.&lt;br /&gt;
To stop it, run the following.&lt;br /&gt;
&lt;br /&gt;
    RAILS_ENV=development bin/delayed_job stop&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Implementation approach==&lt;br /&gt;
&lt;br /&gt;
'''1) Reminder email sent when assignment or review is approaching deadline:'''&lt;br /&gt;
In the '''due_date.rb''' file, whenever a new due date is created or an existing due date is updated, the '''start_reminder''' method is fired and adds a job to the delayed_job queue. This job is executed at a preconfigured time before the deadline, where it fires the '''reminder''' method, which is added to the delayed job queue by the handle_asynchronously method of the '''delayed_job_active_record''' gem. Inside the reminder method, we fetch three attributes: '''''assignment_id''''', '''''deadline_type''''', and '''''due_at'''''. These attributes are used to decide the deadline type (submission, review, or teammate review), fetch the participant emails for that assignment, and fetch the deadline threshold; finally, the reminder email is sent at the specified threshold time before the deadline, containing details such as the assignment name, a link to the assignment, and the deadline type (submission, review, or teammate review).&lt;br /&gt;
&lt;br /&gt;
'''2) Implement code:''' &lt;br /&gt;
&lt;br /&gt;
  def create_mailer_object&lt;br /&gt;
    Mailer.new&lt;br /&gt;
  end&lt;br /&gt;
  def create_mailworker_object&lt;br /&gt;
    MailWorker.new(self.parent_id, self.deadline_type, self.due_at)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # main function to start email reminder&lt;br /&gt;
  def start_reminder&lt;br /&gt;
    puts when_to_run_reminder&lt;br /&gt;
    if self.changed?&lt;br /&gt;
      @extra_param = self.parent_id.to_s + &amp;quot;,&amp;quot; + self.deadline_type_id.to_s&lt;br /&gt;
      # first delete any existing delayed jobs with the same parent_id (which is actually the assignment id)&lt;br /&gt;
      Delayed::Job.where(extra_param: @extra_param).each do |job|&lt;br /&gt;
        job.delete&lt;br /&gt;
      end&lt;br /&gt;
      # enqueue a delayed job that will run at the time when_to_run_reminder returns&lt;br /&gt;
      run_at_time = when_to_run_reminder&lt;br /&gt;
      if run_at_time &amp;gt;= 0.seconds.from_now&lt;br /&gt;
        self.delay(run_at: run_at_time, extra_param: @extra_param).reminder&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
  ####&lt;br /&gt;
  def reminder&lt;br /&gt;
    deadline_text = self.deadline_type if %w[submission review].include? self.deadline_type&lt;br /&gt;
    deadline_text = &amp;quot;Team Review&amp;quot; if self.deadline_type == 'metareview'&lt;br /&gt;
    mail_worker = create_mailworker_object&lt;br /&gt;
    email_reminder(mail_worker.find_participant_emails, deadline_text) unless mail_worker.find_participant_emails.empty?&lt;br /&gt;
  end&lt;br /&gt;
  ####&lt;br /&gt;
  def email_reminder(emails, deadline_type)&lt;br /&gt;
    assignment = Assignment.find(self.parent_id)&lt;br /&gt;
    subject = &amp;quot;Message regarding #{deadline_type} for assignment #{assignment.name}&amp;quot;&lt;br /&gt;
    body = &amp;quot;This is a reminder to complete #{deadline_type} for assignment #{assignment.name}. \&lt;br /&gt;
    Deadline is #{self.due_at}. If you have already done the #{deadline_type}, please ignore this mail.&amp;quot;&lt;br /&gt;
    emails.each do |mail|&lt;br /&gt;
      Rails.logger.info mail&lt;br /&gt;
    end&lt;br /&gt;
    Mailer.delayed_message(bcc: emails, subject: subject, body: body).deliver_now&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # the reminder job should run at the due date minus the threshold hours&lt;br /&gt;
  def when_to_run_reminder&lt;br /&gt;
    hours_before_deadline = self.threshold.hours&lt;br /&gt;
    (self.due_at.in_time_zone - hours_before_deadline).to_datetime&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
==Automated Testing using RSPEC==&lt;br /&gt;
&lt;br /&gt;
We used RSpec to test the delayed_job functionality. Following the test-driven development (TDD) approach, we added an RSpec test that checks whether the mail is enqueued when the reminder method fires.&lt;br /&gt;
&lt;br /&gt;
We also added an RSpec test that checks whether a call to the reminder method is enqueued, with the proper scheduled execution time, when the start_reminder method fires.&lt;br /&gt;
&lt;br /&gt;
NOTE: All the reminder mails except the ones for the reviewer are sent to expertiza.development@gmail.com, as this address is already set in the development environment.&lt;br /&gt;
&lt;br /&gt;
==Steps to verify Functionality==&lt;br /&gt;
&lt;br /&gt;
*Test Email&lt;br /&gt;
 Email: expertiza_test123@outlook.com&lt;br /&gt;
 Password: password98@&lt;br /&gt;
&lt;br /&gt;
*Test Expertiza account&lt;br /&gt;
 name/login: Email_Test_ID1&lt;br /&gt;
 password: password98@&lt;br /&gt;
&lt;br /&gt;
1. Change the existing assignment's due date&lt;br /&gt;
First log in as instructor6, and go to Manage → Assignments.&lt;br /&gt;
&lt;br /&gt;
  [[File:manage_page.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then click the edit link in the right-side menu under '''Actions'''.&lt;br /&gt;
Go to the due dates, change the '''reminder''' and '''Due &amp;amp; Time''' fields, and click save.&lt;br /&gt;
&lt;br /&gt;
  [[File:edit_assignment.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then you should receive an email at expertiza_test123@outlook.com at the appropriate time. For this example, 16 hours before 2021/10/25 04:08 (US Eastern time zone), the system will send a reminder email to expertiza_test123@outlook.com.&lt;br /&gt;
&lt;br /&gt;
2. Create a new assignment with a valid due date&lt;br /&gt;
Create a new assignment.&lt;br /&gt;
&lt;br /&gt;
  [[File:create_assignment.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Assign a valid due date and reminder hours&lt;br /&gt;
&lt;br /&gt;
  [[File:new_date.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then add the test Expertiza account above (Email_Test_ID1) as a participant.&lt;br /&gt;
&lt;br /&gt;
  [[File:add_participate.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Because the participant was added after the due date was set, we need to change the due date once more to trigger the email notification jobs.&lt;br /&gt;
&lt;br /&gt;
==Additional Links==&lt;br /&gt;
&lt;br /&gt;
*Git forked repository link: https://github.com/CuiJinku/expertiza/tree/beta&lt;br /&gt;
*VCL deployment: http://152.7.176.117:8080/&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[https://github.com/CuiJinku/expertiza GitHub Project Repository Fork]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://relishapp.com/rspec Rspec Documentation]&lt;br /&gt;
&lt;br /&gt;
==Team==&lt;br /&gt;
[mailto:dli35@ncsu.edu Dong Li]&amp;lt;br&amp;gt;&lt;br /&gt;
[mailto:ldu2@ncsu.edu Liwen Du]&amp;lt;br&amp;gt;&lt;br /&gt;
[mailto:jcui23@ncsu.edu Jinku Cui]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2135_Email_notification_to_reviewers_and_instructors&amp;diff=140193</id>
		<title>CSC/ECE 517 Fall 2021 - E2135 Email notification to reviewers and instructors</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2135_Email_notification_to_reviewers_and_instructors&amp;diff=140193"/>
		<updated>2021-10-27T02:20:39Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* Steps to verify Functionality */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a description of the Expertiza based OSS project.&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
==About Expertiza==&lt;br /&gt;
&lt;br /&gt;
[http://expertiza.ncsu.edu/ Expertiza] is an open-source project developed using the Ruby on Rails framework. Expertiza allows the instructor to create new assignments and customize new or existing assignments. The application allows students to submit and peer-review learning objects (articles, code, websites, etc.)[1]. Expertiza supports submissions of various document types, including URLs and wiki pages.&lt;br /&gt;
&lt;br /&gt;
==Problem Statement==&lt;br /&gt;
&lt;br /&gt;
When an assignment or review approaches its deadline on Expertiza, students should receive a reminder email at a time before the deadline that the instructor has preconfigured. Without this functionality, students sometimes miss their assignment submission deadlines and lose marks. This amendment therefore adds an asynchronous deadline-reminder mailer to the application. &lt;br /&gt;
&lt;br /&gt;
==Modified Files==&lt;br /&gt;
* app/models/due_date.rb&lt;br /&gt;
* test/models/due_date.rb&lt;br /&gt;
* db/migrate/20210319212323_create_delayed_jobs.rb&lt;br /&gt;
&lt;br /&gt;
==Pre-config==&lt;br /&gt;
&lt;br /&gt;
We use the delayed_job library to handle delayed email. &lt;br /&gt;
First, run the following command to install the delayed_job binary:&lt;br /&gt;
&lt;br /&gt;
    rails generate delayed_job&lt;br /&gt;
&lt;br /&gt;
Then start a worker process:&lt;br /&gt;
&lt;br /&gt;
    RAILS_ENV=development bin/delayed_job start&lt;br /&gt;
&lt;br /&gt;
This starts a background worker that processes the delayed jobs.&lt;br /&gt;
To stop it, run the following:&lt;br /&gt;
&lt;br /&gt;
    RAILS_ENV=development bin/delayed_job stop&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Implementation approach==&lt;br /&gt;
&lt;br /&gt;
'''1) Reminder email sent when assignment or review is approaching deadline:'''&lt;br /&gt;
In due_date.rb, whenever a new due date is created or an existing one is updated, the 'start_reminder' method fires and adds a job to the delayed_job queue. The job executes at a preconfigured time before the deadline and invokes the 'reminder' method, which is made asynchronous by the handle_asynchronously mechanism of the 'delayed_job_active_record' gem. Inside the reminder method we fetch three attributes - assignment_id, deadline_type, and due_at - which are used to determine the deadline type (submission, review, or teammate review), fetch the participant emails for that assignment, and read the deadline threshold. Finally, the reminder email is sent at the specified threshold time before the deadline and contains the assignment name, a link to the assignment, and the deadline type (submission, review, or teammate review). &lt;br /&gt;
&lt;br /&gt;
'''2) Implement code:''' &lt;br /&gt;
&lt;br /&gt;
  def create_mailer_object&lt;br /&gt;
    Mailer.new&lt;br /&gt;
  end&lt;br /&gt;
  def create_mailworker_object&lt;br /&gt;
    MailWorker.new(self.parent_id, self.deadline_type, self.due_at)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # main function to start email reminder&lt;br /&gt;
  def start_reminder&lt;br /&gt;
    puts when_to_run_reminder&lt;br /&gt;
    if self.changed?&lt;br /&gt;
      @extra_param = self.parent_id.to_s + &amp;quot;,&amp;quot; + self.deadline_type_id.to_s&lt;br /&gt;
      # first delete any existing delayed jobs with the same parent_id (which is actually the assignment id)&lt;br /&gt;
      Delayed::Job.where(extra_param: @extra_param).each do |job|&lt;br /&gt;
        job.delete&lt;br /&gt;
      end&lt;br /&gt;
      # enqueue a delayed job that will run at the time returned by when_to_run_reminder&lt;br /&gt;
      run_at_time = when_to_run_reminder&lt;br /&gt;
      if run_at_time &amp;gt;= 0.seconds.from_now&lt;br /&gt;
        self.delay(run_at: run_at_time, :extra_param =&amp;gt; @extra_param).reminder&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
  ####&lt;br /&gt;
  def reminder&lt;br /&gt;
    deadline_text = self.deadline_type if %w[submission review].include? self.deadline_type&lt;br /&gt;
    deadline_text = &amp;quot;Team Review&amp;quot; if self.deadline_type == 'metareview'&lt;br /&gt;
    mail_worker = create_mailworker_object&lt;br /&gt;
    email_reminder(mail_worker.find_participant_emails, deadline_text) unless mail_worker.find_participant_emails.empty?&lt;br /&gt;
  end&lt;br /&gt;
  ####&lt;br /&gt;
  def email_reminder(emails, deadline_type)&lt;br /&gt;
    assignment = Assignment.find(self.parent_id)&lt;br /&gt;
    subject = &amp;quot;Message regarding #{deadline_type} for assignment #{assignment.name}&amp;quot;&lt;br /&gt;
    body = &amp;quot;This is a reminder to complete #{deadline_type} for assignment #{assignment.name}. \&lt;br /&gt;
    Deadline is #{self.due_at}. If you have already completed the #{deadline_type}, please ignore this mail.&amp;quot;&lt;br /&gt;
    emails.each do |mail|&lt;br /&gt;
      Rails.logger.info mail&lt;br /&gt;
    end&lt;br /&gt;
    Mailer.delayed_message(bcc: emails, subject: subject, body: body).deliver_now&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # the reminder should run threshold hours before the due date&lt;br /&gt;
  def when_to_run_reminder&lt;br /&gt;
    hours_before_deadline = self.threshold.hours&lt;br /&gt;
    (self.due_at.in_time_zone - hours_before_deadline).to_datetime&lt;br /&gt;
  end&lt;br /&gt;
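&lt;br /&gt;
The arithmetic above can be sketched in plain Ruby (no ActiveSupport, so a SECONDS_PER_HOUR constant stands in for ActiveSupport's .hours; the method and constant names here are illustrative, not part of Expertiza):&lt;br /&gt;

```ruby
require 'time'

# Minimal sketch of the scheduling rule in when_to_run_reminder:
# the reminder job should run `threshold` hours before the due date.
SECONDS_PER_HOUR = 3600

def reminder_run_time(due_at, threshold_hours)
  # Time minus an integer number of seconds yields an earlier Time
  due_at - threshold_hours * SECONDS_PER_HOUR
end

due_at = Time.utc(2021, 10, 25, 4, 8, 0)
run_at = reminder_run_time(due_at, 16)
puts run_at.iso8601  # => 2021-10-24T12:08:00Z
```

A delayed job scheduled with run_at set to this value fires at the threshold, as in the start_reminder method above.&lt;br /&gt;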
&lt;br /&gt;
==Automated Testing using RSPEC==&lt;br /&gt;
&lt;br /&gt;
We used RSpec to test the delayed_job functionality. Following a test-driven development (TDD) approach, we added one test that checks that the mail is enqueued when the reminder method fires, and another that checks that a call to the reminder method is enqueued, with the proper scheduled execution time, when the start_reminder method fires.&lt;br /&gt;
&lt;br /&gt;
NOTE: All reminder mails except those for the reviewer are sent to expertiza.development@gmail.com, as this address is already set in the development environment.&lt;br /&gt;
&lt;br /&gt;
==Steps to verify Functionality==&lt;br /&gt;
&lt;br /&gt;
*Test Email&lt;br /&gt;
 Email: expertiza_test123@outlook.com&lt;br /&gt;
 Password: password98@&lt;br /&gt;
&lt;br /&gt;
*Test Expertiza account&lt;br /&gt;
 name/login: Email_Test_ID1&lt;br /&gt;
 password: password98@&lt;br /&gt;
&lt;br /&gt;
1. Change the existing assignment's due date&lt;br /&gt;
First log in as instructor6, and go to Manage → Assignments&lt;br /&gt;
&lt;br /&gt;
  [[File:manage_page.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then click the edit icon under '''Actions''' in the right-hand menu.&lt;br /&gt;
Go to Due dates, change the '''reminder''' and '''Due &amp;amp; Time''' fields, and click save.&lt;br /&gt;
&lt;br /&gt;
  [[File:edit_assignment.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then you should receive an email at expertiza_test123@outlook.com at the appropriate time. For this example, 16 hours before 2021/10/25 04:08 (US Eastern time zone), the system shall send a reminder email to expertiza_test123@outlook.com.&lt;br /&gt;
&lt;br /&gt;
2. Create a new assignment with a valid due date&lt;br /&gt;
Create a new Assignment&lt;br /&gt;
&lt;br /&gt;
  [[File:create_assignment.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Assign a valid due date and reminder hours&lt;br /&gt;
&lt;br /&gt;
  [[File:new_date.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then add the previously created user account as a participant&lt;br /&gt;
&lt;br /&gt;
  [[File:add_participate.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Because the participant was added after the due date was set, change the due date once more to trigger the email notification jobs.&lt;br /&gt;
&lt;br /&gt;
==Additional Links==&lt;br /&gt;
&lt;br /&gt;
*Git pull link: https://github.com/CuiJinku/expertiza/tree/beta&lt;br /&gt;
*VCL deployment: http://152.7.176.117:8080/&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[https://github.com/CuiJinku/expertiza GitHub Project Repository Fork]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://relishapp.com/rspec Rspec Documentation]&lt;br /&gt;
&lt;br /&gt;
==Team==&lt;br /&gt;
[mailto:dli35@ncsu.edu Dong Li]&amp;lt;br&amp;gt;&lt;br /&gt;
[mailto:ldu2@ncsu.edu Liwen Du]&amp;lt;br&amp;gt;&lt;br /&gt;
[mailto:jcui23@ncsu.edu Jinku Cui]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2135_Email_notification_to_reviewers_and_instructors&amp;diff=140184</id>
		<title>CSC/ECE 517 Fall 2021 - E2135 Email notification to reviewers and instructors</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2135_Email_notification_to_reviewers_and_instructors&amp;diff=140184"/>
		<updated>2021-10-27T02:04:31Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* Correct links to be new VCL*/&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a description of the Expertiza-based OSS project.&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
==About Expertiza==&lt;br /&gt;
&lt;br /&gt;
[http://expertiza.ncsu.edu/ Expertiza] is an open-source project developed using the Ruby on Rails framework. Expertiza allows instructors to create new assignments and customize new or existing assignments. The application allows students to submit and peer-review learning objects (articles, code, websites, etc.)[1]. Expertiza supports submissions across various document types, including URLs and wiki pages.&lt;br /&gt;
&lt;br /&gt;
==Problem Statement==&lt;br /&gt;
&lt;br /&gt;
When an assignment or review approaches its deadline on Expertiza, students should receive a reminder email at a time before the deadline that the instructor has preconfigured. Without this functionality, students sometimes miss their assignment submission deadlines and lose marks. This amendment therefore adds an asynchronous deadline-reminder mailer to the application. &lt;br /&gt;
&lt;br /&gt;
==Modified Files==&lt;br /&gt;
* app/models/due_date.rb&lt;br /&gt;
* test/models/due_date.rb&lt;br /&gt;
* db/migrate/20210319212323_create_delayed_jobs.rb&lt;br /&gt;
&lt;br /&gt;
==Pre-config==&lt;br /&gt;
&lt;br /&gt;
We use the delayed_job library to handle delayed email. &lt;br /&gt;
First, run the following command to install the delayed_job binary:&lt;br /&gt;
&lt;br /&gt;
    rails generate delayed_job&lt;br /&gt;
&lt;br /&gt;
Then start a worker process:&lt;br /&gt;
&lt;br /&gt;
    RAILS_ENV=development bin/delayed_job start&lt;br /&gt;
&lt;br /&gt;
This starts a background worker that processes the delayed jobs.&lt;br /&gt;
To stop it, run the following:&lt;br /&gt;
&lt;br /&gt;
    RAILS_ENV=development bin/delayed_job stop&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Implementation approach==&lt;br /&gt;
&lt;br /&gt;
'''1) Reminder email sent when assignment or review is approaching deadline:'''&lt;br /&gt;
In due_date.rb, whenever a new due date is created or an existing one is updated, the 'start_reminder' method fires and adds a job to the delayed_job queue. The job executes at a preconfigured time before the deadline and invokes the 'reminder' method, which is made asynchronous by the handle_asynchronously mechanism of the 'delayed_job_active_record' gem. Inside the reminder method we fetch three attributes - assignment_id, deadline_type, and due_at - which are used to determine the deadline type (submission, review, or teammate review), fetch the participant emails for that assignment, and read the deadline threshold. Finally, the reminder email is sent at the specified threshold time before the deadline and contains the assignment name, a link to the assignment, and the deadline type (submission, review, or teammate review). &lt;br /&gt;
&lt;br /&gt;
'''2) Implement code:''' &lt;br /&gt;
&lt;br /&gt;
  def create_mailer_object&lt;br /&gt;
    Mailer.new&lt;br /&gt;
  end&lt;br /&gt;
  def create_mailworker_object&lt;br /&gt;
    MailWorker.new(self.parent_id, self.deadline_type, self.due_at)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # main function to start email reminder&lt;br /&gt;
  def start_reminder&lt;br /&gt;
    puts when_to_run_reminder&lt;br /&gt;
    if self.changed?&lt;br /&gt;
      @extra_param = self.parent_id.to_s + &amp;quot;,&amp;quot; + self.deadline_type_id.to_s&lt;br /&gt;
      # first delete any existing delayed jobs with the same parent_id (which is actually the assignment id)&lt;br /&gt;
      Delayed::Job.where(extra_param: @extra_param).each do |job|&lt;br /&gt;
        job.delete&lt;br /&gt;
      end&lt;br /&gt;
      # enqueue a delayed job that will run at the time returned by when_to_run_reminder&lt;br /&gt;
      run_at_time = when_to_run_reminder&lt;br /&gt;
      if run_at_time &amp;gt;= 0.seconds.from_now&lt;br /&gt;
        self.delay(run_at: run_at_time, :extra_param =&amp;gt; @extra_param).reminder&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
  ####&lt;br /&gt;
  def reminder&lt;br /&gt;
    deadline_text = self.deadline_type if %w[submission review].include? self.deadline_type&lt;br /&gt;
    deadline_text = &amp;quot;Team Review&amp;quot; if self.deadline_type == 'metareview'&lt;br /&gt;
    mail_worker = create_mailworker_object&lt;br /&gt;
    email_reminder(mail_worker.find_participant_emails, deadline_text) unless mail_worker.find_participant_emails.empty?&lt;br /&gt;
  end&lt;br /&gt;
  ####&lt;br /&gt;
  def email_reminder(emails, deadline_type)&lt;br /&gt;
    assignment = Assignment.find(self.parent_id)&lt;br /&gt;
    subject = &amp;quot;Message regarding #{deadline_type} for assignment #{assignment.name}&amp;quot;&lt;br /&gt;
    body = &amp;quot;This is a reminder to complete #{deadline_type} for assignment #{assignment.name}. \&lt;br /&gt;
    Deadline is #{self.due_at}. If you have already completed the #{deadline_type}, please ignore this mail.&amp;quot;&lt;br /&gt;
    emails.each do |mail|&lt;br /&gt;
      Rails.logger.info mail&lt;br /&gt;
    end&lt;br /&gt;
    Mailer.delayed_message(bcc: emails, subject: subject, body: body).deliver_now&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # the reminder should run threshold hours before the due date&lt;br /&gt;
  def when_to_run_reminder&lt;br /&gt;
    hours_before_deadline = self.threshold.hours&lt;br /&gt;
    (self.due_at.in_time_zone - hours_before_deadline).to_datetime&lt;br /&gt;
  end&lt;br /&gt;
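&lt;br /&gt;
The arithmetic above can be sketched in plain Ruby (no ActiveSupport, so a SECONDS_PER_HOUR constant stands in for ActiveSupport's .hours; the method and constant names here are illustrative, not part of Expertiza):&lt;br /&gt;

```ruby
require 'time'

# Minimal sketch of the scheduling rule in when_to_run_reminder:
# the reminder job should run `threshold` hours before the due date.
SECONDS_PER_HOUR = 3600

def reminder_run_time(due_at, threshold_hours)
  # Time minus an integer number of seconds yields an earlier Time
  due_at - threshold_hours * SECONDS_PER_HOUR
end

due_at = Time.utc(2021, 10, 25, 4, 8, 0)
run_at = reminder_run_time(due_at, 16)
puts run_at.iso8601  # => 2021-10-24T12:08:00Z
```

A delayed job scheduled with run_at set to this value fires at the threshold, as in the start_reminder method above.&lt;br /&gt;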
&lt;br /&gt;
==Automated Testing using RSPEC==&lt;br /&gt;
&lt;br /&gt;
We used RSpec to test the delayed_job functionality. Following a test-driven development (TDD) approach, we added one test that checks that the mail is enqueued when the reminder method fires, and another that checks that a call to the reminder method is enqueued, with the proper scheduled execution time, when the start_reminder method fires.&lt;br /&gt;
&lt;br /&gt;
NOTE: All reminder mails except those for the reviewer are sent to expertiza.development@gmail.com, as this address is already set in the development environment.&lt;br /&gt;
&lt;br /&gt;
==Steps to verify Functionality==&lt;br /&gt;
&lt;br /&gt;
*Test Email&lt;br /&gt;
 Email: expertiza_test123@outlook.com&lt;br /&gt;
 Password: password98@&lt;br /&gt;
&lt;br /&gt;
*Test Expertiza account&lt;br /&gt;
 name/login: Email_Test_ID1&lt;br /&gt;
 password: password98@&lt;br /&gt;
&lt;br /&gt;
1. Change the existing assignment's due date&lt;br /&gt;
First log in as instructor6, and go to Manage → Assignments&lt;br /&gt;
&lt;br /&gt;
  [[File:manage_page.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then click the edit icon under '''Actions''' in the right-hand menu.&lt;br /&gt;
Go to Due dates, change the '''reminder''' and '''Due &amp;amp; Time''' fields, and click save.&lt;br /&gt;
&lt;br /&gt;
  [[File:edit_assignment.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then you should receive an email at expertiza_test123@outlook.com at the appropriate time. For this example, 16 hours before 2021/10/25 04:08 (US Eastern time zone), the system shall send a reminder email to expertiza_test123@outlook.com.&lt;br /&gt;
&lt;br /&gt;
2. Create a new assignment with a valid due date&lt;br /&gt;
Create a new Assignment&lt;br /&gt;
&lt;br /&gt;
  [[File:create_assignment.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Assign a valid due date and reminder hours&lt;br /&gt;
&lt;br /&gt;
  [[File:new_date.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then add the previously created user account as a participant&lt;br /&gt;
&lt;br /&gt;
  [[File:add_participate.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Because the participant was added after the due date was set, change the due date once more to trigger the email notification jobs.&lt;br /&gt;
&lt;br /&gt;
==Additional Links==&lt;br /&gt;
&lt;br /&gt;
*Git pull link: https://github.com/CuiJinku/expertiza/tree/beta&lt;br /&gt;
*VCL deployment: http://152.7.176.117:8080/&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[https://github.com/CuiJinku/expertiza GitHub Project Repository Fork]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://relishapp.com/rspec Rspec Documentation]&lt;br /&gt;
&lt;br /&gt;
==Team==&lt;br /&gt;
[mailto:dli35@ncsu.edu Dong Li]&amp;lt;br&amp;gt;&lt;br /&gt;
[mailto:ldu2@ncsu.edu Liwen Du]&amp;lt;br&amp;gt;&lt;br /&gt;
[mailto:jcui23@ncsu.edu Jinku Cui]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2135_Email_notification_to_reviewers_and_instructors&amp;diff=140183</id>
		<title>CSC/ECE 517 Fall 2021 - E2135 Email notification to reviewers and instructors</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2021_-_E2135_Email_notification_to_reviewers_and_instructors&amp;diff=140183"/>
		<updated>2021-10-27T02:03:27Z</updated>

		<summary type="html">&lt;p&gt;Jcui23: /* delete the TODO comment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a description of the Expertiza-based OSS project.&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
==About Expertiza==&lt;br /&gt;
&lt;br /&gt;
[http://expertiza.ncsu.edu/ Expertiza] is an open-source project developed using the Ruby on Rails framework. Expertiza allows instructors to create new assignments and customize new or existing assignments. The application allows students to submit and peer-review learning objects (articles, code, websites, etc.)[1]. Expertiza supports submissions across various document types, including URLs and wiki pages.&lt;br /&gt;
&lt;br /&gt;
==Problem Statement==&lt;br /&gt;
&lt;br /&gt;
When an assignment or review approaches its deadline on Expertiza, students should receive a reminder email at a time before the deadline that the instructor has preconfigured. Without this functionality, students sometimes miss their assignment submission deadlines and lose marks. This amendment therefore adds an asynchronous deadline-reminder mailer to the application. &lt;br /&gt;
&lt;br /&gt;
==Modified Files==&lt;br /&gt;
* app/models/due_date.rb&lt;br /&gt;
* test/models/due_date.rb&lt;br /&gt;
* db/migrate/20210319212323_create_delayed_jobs.rb&lt;br /&gt;
&lt;br /&gt;
==Pre-config==&lt;br /&gt;
&lt;br /&gt;
We use the delayed_job library to handle delayed email. &lt;br /&gt;
First, run the following command to install the delayed_job binary:&lt;br /&gt;
&lt;br /&gt;
    rails generate delayed_job&lt;br /&gt;
&lt;br /&gt;
Then start a worker process:&lt;br /&gt;
&lt;br /&gt;
    RAILS_ENV=development bin/delayed_job start&lt;br /&gt;
&lt;br /&gt;
This starts a background worker that processes the delayed jobs.&lt;br /&gt;
To stop it, run the following:&lt;br /&gt;
&lt;br /&gt;
    RAILS_ENV=development bin/delayed_job stop&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Implementation approach==&lt;br /&gt;
&lt;br /&gt;
'''1) Reminder email sent when assignment or review is approaching deadline:'''&lt;br /&gt;
In due_date.rb, whenever a new due date is created or an existing one is updated, the 'start_reminder' method fires and adds a job to the delayed_job queue. The job executes at a preconfigured time before the deadline and invokes the 'reminder' method, which is made asynchronous by the handle_asynchronously mechanism of the 'delayed_job_active_record' gem. Inside the reminder method we fetch three attributes - assignment_id, deadline_type, and due_at - which are used to determine the deadline type (submission, review, or teammate review), fetch the participant emails for that assignment, and read the deadline threshold. Finally, the reminder email is sent at the specified threshold time before the deadline and contains the assignment name, a link to the assignment, and the deadline type (submission, review, or teammate review). &lt;br /&gt;
&lt;br /&gt;
'''2) Implement code:''' &lt;br /&gt;
&lt;br /&gt;
  def create_mailer_object&lt;br /&gt;
    Mailer.new&lt;br /&gt;
  end&lt;br /&gt;
  def create_mailworker_object&lt;br /&gt;
    MailWorker.new(self.parent_id, self.deadline_type, self.due_at)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # main function to start email reminder&lt;br /&gt;
  def start_reminder&lt;br /&gt;
    puts when_to_run_reminder&lt;br /&gt;
    if self.changed?&lt;br /&gt;
      @extra_param = self.parent_id.to_s + &amp;quot;,&amp;quot; + self.deadline_type_id.to_s&lt;br /&gt;
      # first delete any existing delayed jobs with the same parent_id (which is actually the assignment id)&lt;br /&gt;
      Delayed::Job.where(extra_param: @extra_param).each do |job|&lt;br /&gt;
        job.delete&lt;br /&gt;
      end&lt;br /&gt;
      # enqueue a delayed job that will run at the time returned by when_to_run_reminder&lt;br /&gt;
      run_at_time = when_to_run_reminder&lt;br /&gt;
      if run_at_time &amp;gt;= 0.seconds.from_now&lt;br /&gt;
        self.delay(run_at: run_at_time, :extra_param =&amp;gt; @extra_param).reminder&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
  ####&lt;br /&gt;
  def reminder&lt;br /&gt;
    deadline_text = self.deadline_type if %w[submission review].include? self.deadline_type&lt;br /&gt;
    deadline_text = &amp;quot;Team Review&amp;quot; if self.deadline_type == 'metareview'&lt;br /&gt;
    mail_worker = create_mailworker_object&lt;br /&gt;
    email_reminder(mail_worker.find_participant_emails, deadline_text) unless mail_worker.find_participant_emails.empty?&lt;br /&gt;
  end&lt;br /&gt;
  ####&lt;br /&gt;
  def email_reminder(emails, deadline_type)&lt;br /&gt;
    assignment = Assignment.find(self.parent_id)&lt;br /&gt;
    subject = &amp;quot;Message regarding #{deadline_type} for assignment #{assignment.name}&amp;quot;&lt;br /&gt;
    body = &amp;quot;This is a reminder to complete #{deadline_type} for assignment #{assignment.name}. \&lt;br /&gt;
    Deadline is #{self.due_at}. If you have already completed the #{deadline_type}, please ignore this mail.&amp;quot;&lt;br /&gt;
    emails.each do |mail|&lt;br /&gt;
      Rails.logger.info mail&lt;br /&gt;
    end&lt;br /&gt;
    Mailer.delayed_message(bcc: emails, subject: subject, body: body).deliver_now&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # the reminder should run threshold hours before the due date&lt;br /&gt;
  def when_to_run_reminder&lt;br /&gt;
    hours_before_deadline = self.threshold.hours&lt;br /&gt;
    (self.due_at.in_time_zone - hours_before_deadline).to_datetime&lt;br /&gt;
  end&lt;br /&gt;
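&lt;br /&gt;
The arithmetic above can be sketched in plain Ruby (no ActiveSupport, so a SECONDS_PER_HOUR constant stands in for ActiveSupport's .hours; the method and constant names here are illustrative, not part of Expertiza):&lt;br /&gt;

```ruby
require 'time'

# Minimal sketch of the scheduling rule in when_to_run_reminder:
# the reminder job should run `threshold` hours before the due date.
SECONDS_PER_HOUR = 3600

def reminder_run_time(due_at, threshold_hours)
  # Time minus an integer number of seconds yields an earlier Time
  due_at - threshold_hours * SECONDS_PER_HOUR
end

due_at = Time.utc(2021, 10, 25, 4, 8, 0)
run_at = reminder_run_time(due_at, 16)
puts run_at.iso8601  # => 2021-10-24T12:08:00Z
```

A delayed job scheduled with run_at set to this value fires at the threshold, as in the start_reminder method above.&lt;br /&gt;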
&lt;br /&gt;
==Automated Testing using RSPEC==&lt;br /&gt;
&lt;br /&gt;
We used RSpec to test the delayed_job functionality. Following a test-driven development (TDD) approach, we added one test that checks that the mail is enqueued when the reminder method fires, and another that checks that a call to the reminder method is enqueued, with the proper scheduled execution time, when the start_reminder method fires.&lt;br /&gt;
&lt;br /&gt;
NOTE: All reminder mails except those for the reviewer are sent to expertiza.development@gmail.com, as this address is already set in the development environment.&lt;br /&gt;
&lt;br /&gt;
==Steps to verify Functionality==&lt;br /&gt;
&lt;br /&gt;
*Test Email&lt;br /&gt;
 Email: expertiza_test123@outlook.com&lt;br /&gt;
 Password: password98@&lt;br /&gt;
&lt;br /&gt;
*Test Expertiza account&lt;br /&gt;
 name/login: Email_Test_ID1&lt;br /&gt;
 password: password98@&lt;br /&gt;
&lt;br /&gt;
1. Change the existing assignment's due date&lt;br /&gt;
First log in as instructor6, and go to Manage → Assignments&lt;br /&gt;
&lt;br /&gt;
  [[File:manage_page.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then click the edit icon under '''Actions''' in the right-hand menu.&lt;br /&gt;
Go to Due dates, change the '''reminder''' and '''Due &amp;amp; Time''' fields, and click save.&lt;br /&gt;
&lt;br /&gt;
  [[File:edit_assignment.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then you should receive an email at expertiza_test123@outlook.com at the appropriate time. For this example, 16 hours before 2021/10/25 04:08 (US Eastern time zone), the system shall send a reminder email to expertiza_test123@outlook.com.&lt;br /&gt;
&lt;br /&gt;
2. Create a new assignment with a valid due date&lt;br /&gt;
Create a new Assignment&lt;br /&gt;
&lt;br /&gt;
  [[File:create_assignment.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Assign a valid due date and reminder hours&lt;br /&gt;
&lt;br /&gt;
  [[File:new_date.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Then add the previously created user account as a participant&lt;br /&gt;
&lt;br /&gt;
  [[File:add_participate.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Because the participant was added after the due date was set, change the due date once more to trigger the email notification jobs.&lt;br /&gt;
&lt;br /&gt;
==Additional Links==&lt;br /&gt;
&lt;br /&gt;
*Git pull link: https://github.com/CuiJinku/expertiza/tree/beta&lt;br /&gt;
*VCL deployment: http://152.7.98.122:8080/  &lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
#[https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
#[https://github.com/CuiJinku/expertiza GitHub Project Repository Fork]&lt;br /&gt;
#[http://expertiza.ncsu.edu/ The live Expertiza website]&lt;br /&gt;
#[https://relishapp.com/rspec Rspec Documentation]&lt;br /&gt;
&lt;br /&gt;
==Team==&lt;br /&gt;
[mailto:dli35@ncsu.edu Dong Li]&amp;lt;br&amp;gt;&lt;br /&gt;
[mailto:ldu2@ncsu.edu Liwen Du]&amp;lt;br&amp;gt;&lt;br /&gt;
[mailto:jcui23@ncsu.edu Jinku Cui]&lt;/div&gt;</summary>
		<author><name>Jcui23</name></author>
	</entry>
</feed>