<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.expertiza.ncsu.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Npatil2</id>
	<title>Expertiza_Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.expertiza.ncsu.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Npatil2"/>
	<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=Special:Contributions/Npatil2"/>
	<updated>2026-05-16T01:01:37Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.41.0</generator>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=156801</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=156801"/>
		<updated>2024-04-24T03:52:34Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 project E2440: Testing for questionnaire_helper and review_bids_helper.&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
Our project involves writing test cases for the `questionnaire_helper` and `review_bids_helper` files in Expertiza, an open-source assignment/project management portal built on the Ruby on Rails framework. This platform facilitates collaborative learning and feedback among both instructors and students. Instructors can create and customize assignments, define topics for student sign-ups, and manage teams for various projects. Students, on the other hand, can sign up for topics, form teams, and participate in peer reviews to enhance each other's learning experiences. Our goal is to develop comprehensive test plans and increase code coverage for these helper files to ensure their reliability and effectiveness in the Expertiza platform.&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods that assist with managing questionnaires in Expertiza. It provides methods for adjusting advice size, updating questionnaire questions, and creating questionnaire instances based on type, and it defines constants to support these tasks. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
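As a concrete illustration, these constants might look like the following sketch; the actual index values and map contents in questionnaire_helper.rb are assumptions here:&lt;br /&gt;

```ruby
# Hypothetical sketch of the helper's constants; the real values may differ.
CSV_QUESTION = 0  # column index of the question text
CSV_TYPE     = 1  # column index of the question type
CSV_PARAM    = 2  # column index of extra parameters
CSV_WEIGHT   = 3  # column index of the question weight

class ReviewQuestionnaire; end  # stub standing in for the real model

# Maps a questionnaire type name to the class to instantiate.
QUESTIONNAIRE_MAP = { 'ReviewQuestionnaire' => ReviewQuestionnaire }.freeze

row = ['Is the code well tested?', 'Criterion', '', '1']
question_text = row[CSV_QUESTION]  # the question text for this CSV row
```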
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
   &lt;br /&gt;
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
&lt;br /&gt;
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
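Of the methods above, adjust_advice_size involves the most steps. The following plain-Ruby sketch mirrors the described behavior, using a simple Struct in place of the real QuestionAdvice model (an illustrative assumption, not Expertiza's implementation):&lt;br /&gt;

```ruby
# Illustrative stand-in for the QuestionAdvice model.
Advice = Struct.new(:score, :advice)

# Sketch of the described steps: drop out-of-range advice, ensure one advice
# per score in the questionnaire's range, then drop duplicates.
def adjust_advice_size(advices, min_score, max_score)
  # 1. delete advice whose score falls outside the questionnaire's range
  advices.reject! { |a| a.score < min_score || a.score > max_score }
  # 2. ensure each score in the range has an associated advice
  (min_score..max_score).each do |score|
    advices << Advice.new(score, '') unless advices.any? { |a| a.score == score }
  end
  # 3. delete duplicate advice records, keeping the first one per score
  seen = {}
  advices.select! { |a| seen[a.score] ? false : (seen[a.score] = true) }
  advices
end
```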
&lt;br /&gt;
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
The `ReviewBidsHelper` module serves as a helper for views related to reviewing bids in Expertiza. It provides methods for rendering topic rows and for determining the background color of topics based on their bid status and the number of participants. These methods are used in the views associated with reviewing bids. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates the HTML markup for a row in the table that displays topics available for review bidding.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on the conditions (whether the topic is selected and not waitlisted, or selected and waitlisted), it generates a specific row HTML.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
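The proportion-to-color idea can be illustrated with a minimal plain-Ruby sketch. This is a hypothetical linear interpolation for illustration only, not the helper's actual arithmetic (the tests later on this page show the real helper can return values such as rgb(47,352,0)):&lt;br /&gt;

```ruby
# Illustrative sketch only: shade from green (no bids) toward red (fully bid)
# based on the fraction of participants who have bid on the topic.
def topic_bg_color(bid_count, num_participants)
  fraction = num_participants.zero? ? 0.0 : bid_count.to_f / num_participants
  fraction = fraction.clamp(0.0, 1.0)
  red   = (255 * fraction).round
  green = (255 * (1 - fraction)).round
  "rgb(#{red},#{green},0)"
end
```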
&lt;br /&gt;
== Objective 1: Develop code testing scenarios for questionnaire_helper ==&lt;br /&gt;
* Single Responsibility Principle (SRP):&lt;br /&gt;
Each method within the QuestionnaireHelper class appears to adhere to the SRP by focusing on a single task or responsibility. For example, the adjust_advice_size method is responsible for adjusting the size of advice based on questionnaire scores, while the questionnaire_factory method is responsible for creating instances of questionnaire types based on the given type parameter. This adherence ensures that each method has a clear and distinct purpose, promoting code maintainability and readability.&lt;br /&gt;
&lt;br /&gt;
* Open/Closed Principle (OCP):&lt;br /&gt;
While not explicitly evident in the provided snippets, the design allows for extension without modification, which aligns with the OCP. For instance, new types of questionnaires can be added without altering existing code by simply extending the questionnaire_factory method to accommodate the new types.&lt;br /&gt;
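A small sketch of this extension point; the class names and map contents are illustrative assumptions:&lt;br /&gt;

```ruby
class ReviewQuestionnaire; end  # stub class standing in for the real model

QUESTIONNAIRE_MAP = { 'ReviewQuestionnaire' => ReviewQuestionnaire }

# The factory body never changes when new types are added.
def questionnaire_factory(type)
  QUESTIONNAIRE_MAP[type]&.new
end

# Extension = one new class plus one new map entry; no existing code modified.
class TeammateReviewQuestionnaire; end
QUESTIONNAIRE_MAP['TeammateReviewQuestionnaire'] = TeammateReviewQuestionnaire
```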
&lt;br /&gt;
* Dependency Injection:&lt;br /&gt;
The methods in the QuestionnaireHelper class accept various objects (e.g., questionnaire, scored_question) as parameters, following the dependency injection approach. By accepting dependencies from external sources rather than creating them internally, these methods become more flexible and easier to test.&lt;br /&gt;
&lt;br /&gt;
* Factory Method Pattern:&lt;br /&gt;
The questionnaire_factory method can be seen as exhibiting characteristics of the factory method pattern. It dynamically creates instances of different questionnaire types based on the given type parameter, promoting flexibility and extensibility.&lt;br /&gt;
&lt;br /&gt;
After reviewing the QuestionnaireHelper class, we identified three methods in the class that require testing. These methods are as follows:&lt;br /&gt;
&lt;br /&gt;
* adjust_advice_size&lt;br /&gt;
* questionnaire_factory&lt;br /&gt;
* update_questionnaire_questions&lt;br /&gt;
&lt;br /&gt;
=== adjust_advice_size ===&lt;br /&gt;
==== Method Description ====&lt;br /&gt;
The `adjust_advice_size` method adjusts the number of advice records associated with a question so that they match the questionnaire's score range.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a questionnaire, a scored_question, a non_scored_question, and a question_advice.&lt;br /&gt;
Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system.&lt;br /&gt;
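A plain-Ruby illustration of the kind of predefined stand-ins involved; the attribute names and values here are assumptions, not the spec's actual let definitions:&lt;br /&gt;

```ruby
# Lightweight stand-ins mimicking the real models' attributes.
Questionnaire = Struct.new(:id, :min_question_score, :max_question_score)
Question      = Struct.new(:id, :question_advices)

questionnaire       = Questionnaire.new(1, 0, 10)
scored_question     = Question.new(10, [])
non_scored_question = Question.new(11, [])
```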
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Question is a ScoredQuestion: This context tests the behavior when the question is a scored question. It sets up expectations related to the adjustment of advice size based on the questionnaire scores.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when question is a ScoredQuestion' do&lt;br /&gt;
  it 'adjusts advice size based on questionnaire scores' do&lt;br /&gt;
    allow(QuestionAdvice).to receive(:where).and_return([])&lt;br /&gt;
    allow(QuestionAdvice).to receive(:new).and_return(double('QuestionAdvice', save: true))&lt;br /&gt;
    allow(scored_question).to receive(:is_a?).with(ScoredQuestion).and_return(true)&lt;br /&gt;
    described_class.adjust_advice_size(questionnaire, scored_question)&lt;br /&gt;
    expect(QuestionAdvice).to have_received(:where).exactly(10).times&lt;br /&gt;
    expect(scored_question.question_advices.size).to eq(10)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Question is Not a ScoredQuestion: This context tests the behavior when the question is not a scored question. It verifies that in this case, the advice size is not adjusted.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when question is not a ScoredQuestion' do&lt;br /&gt;
  it 'does not adjust advice size' do&lt;br /&gt;
    allow(QuestionAdvice).to receive(:where)&lt;br /&gt;
    allow(QuestionAdvice).to receive(:new)&lt;br /&gt;
    allow(non_scored_question).to receive(:is_a?).with(ScoredQuestion).and_return(false)&lt;br /&gt;
    described_class.adjust_advice_size(questionnaire, non_scored_question)&lt;br /&gt;
    expect(QuestionAdvice).not_to have_received(:where)&lt;br /&gt;
    expect(QuestionAdvice).not_to have_received(:new)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use allow and expect statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
=== questionnaire_factory ===&lt;br /&gt;
&lt;br /&gt;
==== Method Description ====&lt;br /&gt;
The `questionnaire_factory` method is a factory method responsible for creating instances of different questionnaire types based on the given type parameter.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using `let` statements and prepared with predefined attributes that mimic the behavior of actual objects in the system.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Given a Valid Type: This context verifies the behavior when a valid questionnaire type is provided. It expects that calling questionnaire_factory with a valid type results in an instance of the specified questionnaire type (ReviewQuestionnaire).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when given a valid type' do&lt;br /&gt;
  it 'returns an instance of the specified questionnaire type' do&lt;br /&gt;
    questionnaire_type = 'ReviewQuestionnaire'&lt;br /&gt;
    expect(helper.questionnaire_factory(questionnaire_type)).to be_an_instance_of(ReviewQuestionnaire)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Given an Invalid Type: This context tests how the method handles an invalid questionnaire type. It expects that calling questionnaire_factory with an invalid type sets an error flash message indicating that the questionnaire type is undefined.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when given an invalid type' do&lt;br /&gt;
  it 'sets an error flash message' do&lt;br /&gt;
    questionnaire_type = 'UnknownQuestionnaire'&lt;br /&gt;
    expect { helper.questionnaire_factory(questionnaire_type) }.to change { flash[:error] }.from(nil).to('Error: Undefined Questionnaire')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use expect and change statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
=== update_questionnaire_questions ===&lt;br /&gt;
We did not implement test cases for this method because it was already covered last semester by a previous team.&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Develop code testing scenarios for review_bids_helper ==&lt;br /&gt;
* Single Responsibility Principle (SRP):&lt;br /&gt;
Each method within the ReviewBidsHelper class seems to adhere to the SRP. For example, get_intelligent_topic_row_review_bids is responsible for generating HTML code for topic rows, while get_topic_bg_color_review_bids determines the background color for a topic. This adherence ensures that each method has a clear and distinct purpose, enhancing maintainability and readability.&lt;br /&gt;
&lt;br /&gt;
* Factory Method Pattern:&lt;br /&gt;
Although not explicitly labeled as a factory method, get_intelligent_topic_row_review_bids can be interpreted as following a similar pattern. It dynamically generates HTML code based on different scenarios, akin to a factory producing instances of objects. This promotes flexibility and extensibility in generating HTML representations of topics.&lt;br /&gt;
&lt;br /&gt;
After reviewing the ReviewBidsHelper module, we identified two methods that require testing:&lt;br /&gt;
&lt;br /&gt;
=== get_intelligent_topic_row_review_bids ===&lt;br /&gt;
==== Objective ====&lt;br /&gt;
Verify that the method get_intelligent_topic_row_review_bids correctly renders the topic row for the topics table in review_bids/show.html.erb. The method generates HTML for topic rows whose appearance depends on whether each topic is selected and whether it is waitlisted.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a topic, a selected_topic, the number of participants (num_participants), and a review_bid.&lt;br /&gt;
Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system. For example, selected_topic is set with a topic ID of 1 and is not waitlisted initially.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Selected Topics are Present: This context tests the behavior of the method when there are selected topics. It sets up a selected topic that is not waitlisted and expects the generated HTML to include a table row with a yellow background.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are present' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    selected_topic = instance_double('SelectedTopic')&lt;br /&gt;
    allow(selected_topic).to receive(:topic_id).and_return(1)&lt;br /&gt;
    allow(selected_topic).to receive(:is_waitlisted).and_return(false)&lt;br /&gt;
    selected_topics = [selected_topic]&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr bgcolor=&amp;quot;yellow&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topic is Waitlisted: This context tests the behavior when the selected topic is waitlisted. It sets up a selected topic that is waitlisted and expects the generated HTML to include a table row with a light gray background.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topic is waitlisted' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    selected_topic = instance_double('SelectedTopic')&lt;br /&gt;
    allow(selected_topic).to receive(:topic_id).and_return(1)&lt;br /&gt;
    allow(selected_topic).to receive(:is_waitlisted).and_return(true)&lt;br /&gt;
    selected_topics = [selected_topic]&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr bgcolor=&amp;quot;lightgray&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topics are Not Present: This context checks what happens when there are no selected topics. It mocks a method (get_topic_bg_color_review_bids) to return a specific background color and expects the generated HTML to include a table row with that background color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are not present' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    allow(helper).to receive(:get_topic_bg_color_review_bids).and_return('rgb(255,255,255)')&lt;br /&gt;
    selected_topics = []&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr id=&amp;quot;topic_1&amp;quot; style=&amp;quot;background-color:rgb(255,255,255)&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topics are Nil: This context tests how the method handles cases where the selected topics parameter is nil. It expects the generated HTML to include a table row with a specific background color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are nil' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    selected_topics = nil&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr id=&amp;quot;topic_1&amp;quot; style=&amp;quot;background-color:rgb(47,352,0)&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each context contains an it block with an expectation. These expectations use expect statements to verify that the HTML generated by the method meets certain criteria, such as containing specific table row elements with appropriate background colors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== get_topic_bg_color_review_bids ===&lt;br /&gt;
&lt;br /&gt;
==== Objective ====&lt;br /&gt;
Verify that the method get_topic_bg_color_review_bids correctly calculates the background color for a topic in the review_bids/show.html.erb template. The color reflects how heavily a topic has been bid on relative to the number of participants.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a topic, a review_bid, and the number of participants (num_participants). Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* Returns RGB Color Code: This test checks that the method returns an RGB color code for the topic background color, expecting the returned string to match the pattern rgb(\d+,\d+,\d+).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when bids exist for the topic' do&lt;br /&gt;
  it 'returns an RGB color code' do&lt;br /&gt;
    expect(helper.get_topic_bg_color_review_bids(topic, num_participants)).to match(/rgb\(\d+,\d+,\d+\)/)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When There Are No Review Bids: This context tests the behavior when there are no review bids associated with the topic. It expects the method to return the default color code rgb(47,352,0) (note that 352 falls outside the usual 0-255 RGB range).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when there are no review bids' do&lt;br /&gt;
  it 'returns default RGB color code' do&lt;br /&gt;
    allow(ReviewBid).to receive(:where).with(signuptopic_id: topic.id).and_return([])&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_topic_bg_color_review_bids(topic, num_participants)).to eq('rgb(47,352,0)')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use expect statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
== Coverage Results ==&lt;br /&gt;
=== Questionnaire_helper ===&lt;br /&gt;
* Previous coverage: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb&lt;br /&gt;
* Current coverage:&lt;br /&gt;
&lt;br /&gt;
[[File:Questionnaire helper spec coverage.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Review bids helper ===&lt;br /&gt;
* Previous coverage: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb&lt;br /&gt;
* Current coverage: &lt;br /&gt;
&lt;br /&gt;
[[File:Review bid helper spec coverage.jpg]]&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
This document outlines the plan we followed to enhance testing and code coverage for the questionnaire_helper and review_bids_helper files in Expertiza. Guided by the objectives above, we developed detailed test plans and scenarios to address the coverage gaps. The review_bids_helper.rb file saw the largest gain, with comprehensive test plans substantially increasing its code coverage. The questionnaire_helper.rb file saw only a marginal improvement, primarily because its update_questionnaire_questions method already had full coverage. Together these tests improve the reliability of critical functionality across both helper files.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
* Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
* Link to Testing video: [https://youtu.be/jDfmPUgDDXA here]&lt;br /&gt;
&lt;br /&gt;
* Link to pull request: [https://github.com/expertiza/expertiza/pull/2799 here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza)&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/)&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=156793</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=156793"/>
		<updated>2024-04-24T03:49:47Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
Our project involves writing test cases for the `questionnaire_helper` and `review_bids_helper` files in Expertiza, an open-source assignment/project management portal built on the Ruby on Rails framework. This platform facilitates collaborative learning and feedback among both instructors and students. Instructors can create and customize assignments, define topics for student sign-ups, and manage teams for various projects. Students, on the other hand, can sign up for topics, form teams, and participate in peer reviews to enhance each other's learning experiences. Our goal is to develop comprehensive test plans and increase code coverage for these helper files to ensure their reliability and effectiveness in the Expertiza platform.&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods to assist with managing questionnaires in expertiza. QuestionnaireHelper provides methods for adjusting advice size, updating questionnaire questions, and creating questionnaire instances based on types. It also defines constants to facilitate these functionalities. These methods are likely used within the application to handle questionnaire-related tasks efficiently. Let's break down the class and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
   &lt;br /&gt;
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
&lt;br /&gt;
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
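A hedged sketch of this factory lookup, runnable outside Rails: `ReviewQuestionnaire` is a stand-in class and `flash` is a plain hash here, whereas the real helper uses the Rails flash.&lt;br /&gt;

```ruby
# Sketch of questionnaire_factory: look up the class for the given type,
# flag an error for unknown types, otherwise instantiate the class.
class ReviewQuestionnaire; end

QUESTIONNAIRE_MAP = { 'ReviewQuestionnaire' => ReviewQuestionnaire }.freeze

def questionnaire_factory(type, flash)
  klass = QUESTIONNAIRE_MAP[type]
  if klass.nil?
    flash[:error] = 'Error: Undefined Questionnaire'
    nil
  else
    klass.new
  end
end

flash = {}
questionnaire_factory('ReviewQuestionnaire', flash)   # => a ReviewQuestionnaire instance
questionnaire_factory('UnknownQuestionnaire', flash)  # sets flash[:error], returns nil
```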
&lt;br /&gt;
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
This Ruby module, `ReviewBidsHelper`, serves as a helper for views related to reviewing bids in Expertiza. It provides methods for rendering topic rows and for determining a topic's background color based on its bid status and the number of participants. These methods are used in the views associated with reviewing bids. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates the HTML markup for a row in the table that displays topics available for review bids.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on the conditions (whether the topic is selected and not waitlisted, or selected and waitlisted), it generates a specific row HTML.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
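Ignoring the HTML wrapping, the branching above can be sketched with a hypothetical `row_color_for_review_bids` that returns only the background color chosen for the row; the real helper builds full table-row markup around this choice.&lt;br /&gt;

```ruby
# Sketch of the row-selection branching: yellow for a selected topic,
# lightgray for a waitlisted one, otherwise a computed default color.
def row_color_for_review_bids(topic_id, selected_topics, default_color)
  color = nil
  (selected_topics || []).each do |selection|
    next unless selection[:topic_id] == topic_id
    color = selection[:is_waitlisted] ? 'lightgray' : 'yellow'
  end
  color || default_color
end

row_color_for_review_bids(1, [{ topic_id: 1, is_waitlisted: false }], 'rgb(47,352,0)')
# => "yellow"
row_color_for_review_bids(1, nil, 'rgb(47,352,0)')
# => "rgb(47,352,0)"
```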
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
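The proportional color computation can be illustrated with the sketch below. The exact formula in Expertiza differs (its specs expect values such as 'rgb(47,352,0)'); this only shows the general idea of shifting the color as the share of bids grows.&lt;br /&gt;

```ruby
# Illustrative only: scale red up and green down with the bid fraction.
def topic_bg_color(num_bids, num_participants)
  fraction = num_participants.zero? ? 0.0 : num_bids.to_f / num_participants
  red   = (fraction * 255).round        # more bids: redder
  green = ((1 - fraction) * 255).round  # fewer bids: greener
  "rgb(#{red},#{green},0)"
end

topic_bg_color(0, 10)   # => "rgb(0,255,0)"
topic_bg_color(10, 10)  # => "rgb(255,0,0)"
```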
&lt;br /&gt;
== Objective 1: Develop code testing scenarios for questionnaire_helper ==&lt;br /&gt;
* Single Responsibility Principle (SRP):&lt;br /&gt;
Each method within the QuestionnaireHelper module appears to adhere to the SRP by focusing on a single task or responsibility. For example, the adjust_advice_size method is responsible for adjusting the size of advice based on questionnaire scores, while the questionnaire_factory method is responsible for creating instances of questionnaire types based on the given type parameter. This adherence ensures that each method has a clear and distinct purpose, promoting code maintainability and readability.&lt;br /&gt;
&lt;br /&gt;
* Open/Closed Principle (OCP):&lt;br /&gt;
While not explicitly evident in the provided snippets, the design allows for extension without modification, which aligns with the OCP. For instance, new types of questionnaires can be added without altering existing code by simply extending the questionnaire_factory method to accommodate the new types.&lt;br /&gt;
&lt;br /&gt;
* Dependency Inversion Principle (DIP):&lt;br /&gt;
The methods in the QuestionnaireHelper module accept their collaborators (e.g., questionnaire, scored_question) as parameters rather than constructing them internally, a form of dependency injection that supports the DIP. Receiving dependencies from external sources makes these methods more flexible and easier to test.&lt;br /&gt;
&lt;br /&gt;
* Factory Method Pattern:&lt;br /&gt;
The questionnaire_factory method can be seen as exhibiting characteristics of the factory method pattern. It dynamically creates instances of different questionnaire types based on the given type parameter, promoting flexibility and extensibility.&lt;br /&gt;
&lt;br /&gt;
After reviewing the QuestionnaireHelper module, we identified three methods that require testing:&lt;br /&gt;
&lt;br /&gt;
* adjust_advice_size&lt;br /&gt;
* questionnaire_factory&lt;br /&gt;
* update_questionnaire_questions&lt;br /&gt;
&lt;br /&gt;
=== adjust_advice_size ===&lt;br /&gt;
==== Method Description ====&lt;br /&gt;
The .adjust_advice_size method adjusts the number of advice records associated with a question so that each score in the questionnaire's score range has exactly one advice entry.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a questionnaire, a scored_question, a non_scored_question, and a question_advice.&lt;br /&gt;
Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Question is a ScoredQuestion: This context tests the behavior when the question is a scored question. It sets up expectations related to the adjustment of advice size based on the questionnaire scores.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when question is a ScoredQuestion' do&lt;br /&gt;
  it 'adjusts advice size based on questionnaire scores' do&lt;br /&gt;
    allow(QuestionAdvice).to receive(:where).and_return([])&lt;br /&gt;
    allow(QuestionAdvice).to receive(:new).and_return(double('QuestionAdvice', save: true))&lt;br /&gt;
    allow(scored_question).to receive(:is_a?).with(ScoredQuestion).and_return(true)&lt;br /&gt;
    described_class.adjust_advice_size(questionnaire, scored_question)&lt;br /&gt;
    expect(QuestionAdvice).to have_received(:where).exactly(10).times&lt;br /&gt;
    expect(scored_question.question_advices.size).to eq(10)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Question is Not a ScoredQuestion: This context tests the behavior when the question is not a scored question. It verifies that in this case, the advice size is not adjusted.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when question is not a ScoredQuestion' do&lt;br /&gt;
  it 'does not adjust advice size' do&lt;br /&gt;
    allow(QuestionAdvice).to receive(:where)&lt;br /&gt;
    allow(QuestionAdvice).to receive(:new)&lt;br /&gt;
    allow(scored_question).to receive(:is_a?).with(ScoredQuestion).and_return(false)&lt;br /&gt;
    described_class.adjust_advice_size(questionnaire, non_scored_question)&lt;br /&gt;
    expect(QuestionAdvice).not_to have_received(:where)&lt;br /&gt;
    expect(QuestionAdvice).not_to have_received(:new)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use allow and expect statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
=== questionnaire_factory ===&lt;br /&gt;
&lt;br /&gt;
==== Method Description ====&lt;br /&gt;
The .questionnaire_factory method is a factory method responsible for creating instances of different questionnaire types based on the given type parameter.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
This spec needs little setup: it calls the method through the Rails helper object with a questionnaire type string and then inspects the returned object or the flash hash.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Given a Valid Type: This context verifies the behavior when a valid questionnaire type is provided. It expects that calling questionnaire_factory with a valid type results in an instance of the specified questionnaire type (ReviewQuestionnaire).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when given a valid type' do&lt;br /&gt;
  it 'returns an instance of the specified questionnaire type' do&lt;br /&gt;
    questionnaire_type = 'ReviewQuestionnaire'&lt;br /&gt;
    expect(helper.questionnaire_factory(questionnaire_type)).to be_an_instance_of(ReviewQuestionnaire)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Given an Invalid Type: This context tests how the method handles an invalid questionnaire type. It expects that calling questionnaire_factory with an invalid type sets an error flash message indicating that the questionnaire type is undefined.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when given an invalid type' do&lt;br /&gt;
  it 'sets an error flash message' do&lt;br /&gt;
    questionnaire_type = 'UnknownQuestionnaire'&lt;br /&gt;
    expect { helper.questionnaire_factory(questionnaire_type) }.to change { flash[:error] }.from(nil).to('Error: Undefined Questionnaire')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use expect and change statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
=== update_questionnaire_questions ===&lt;br /&gt;
We did not implement test cases for this method because it was already fully covered last semester by a previous team.&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Develop code testing scenarios for review_bids_helper ==&lt;br /&gt;
* Single Responsibility Principle (SRP):&lt;br /&gt;
Each method within the ReviewBidsHelper module adheres to the SRP. For example, get_intelligent_topic_row_review_bids is responsible for generating HTML code for topic rows, while get_topic_bg_color_review_bids determines the background color for a topic. This adherence ensures that each method has a clear and distinct purpose, enhancing maintainability and readability.&lt;br /&gt;
&lt;br /&gt;
* Factory Method Pattern:&lt;br /&gt;
Although not explicitly labeled as a factory method, get_intelligent_topic_row_review_bids can be interpreted as following a similar pattern. It dynamically generates HTML code based on different scenarios, akin to a factory producing instances of objects. This promotes flexibility and extensibility in generating HTML representations of topics.&lt;br /&gt;
&lt;br /&gt;
After reviewing the ReviewBidsHelper module, we identified two methods that require testing: get_intelligent_topic_row_review_bids and get_topic_bg_color_review_bids.&lt;br /&gt;
&lt;br /&gt;
=== get_intelligent_topic_row_review_bids ===&lt;br /&gt;
==== Objective ====&lt;br /&gt;
Verify that the method get_intelligent_topic_row_review_bids correctly renders the topic row for the topics table in review_bids/show.html.erb. This method generates the HTML markup for topic rows based on each topic's selection and waitlist status.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a topic, a selected_topic, the number of participants (num_participants), and a review_bid.&lt;br /&gt;
Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system. For example, selected_topic is set with a topic ID of 1 and is not waitlisted initially.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Selected Topics are Present: This context tests the behavior of the method when there are selected topics. It sets up a selected topic that is not waitlisted and expects the generated HTML to include a table row with a yellow background.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are present' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    selected_topic = instance_double('SelectedTopic')&lt;br /&gt;
    allow(selected_topic).to receive(:topic_id).and_return(1)&lt;br /&gt;
    allow(selected_topic).to receive(:is_waitlisted).and_return(false)&lt;br /&gt;
    selected_topics = [selected_topic]&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr bgcolor=&amp;quot;yellow&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topic is Waitlisted: This context tests the behavior when the selected topic is waitlisted. It sets up a selected topic that is waitlisted and expects the generated HTML to include a table row with a light gray background.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topic is waitlisted' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    selected_topic = instance_double('SelectedTopic')&lt;br /&gt;
    allow(selected_topic).to receive(:topic_id).and_return(1)&lt;br /&gt;
    allow(selected_topic).to receive(:is_waitlisted).and_return(true)&lt;br /&gt;
    selected_topics = [selected_topic]&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr bgcolor=&amp;quot;lightgray&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topics are Not Present: This context checks what happens when there are no selected topics. It mocks a method (get_topic_bg_color_review_bids) to return a specific background color and expects the generated HTML to include a table row with that background color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are not present' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    allow(helper).to receive(:get_topic_bg_color_review_bids).and_return('rgb(255,255,255)')&lt;br /&gt;
    selected_topics = []&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr id=&amp;quot;topic_1&amp;quot; style=&amp;quot;background-color:rgb(255,255,255)&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topics are Nil: This context tests how the method handles cases where the selected topics parameter is nil. It expects the generated HTML to include a table row with a specific background color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are nil' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    selected_topics = nil&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr id=&amp;quot;topic_1&amp;quot; style=&amp;quot;background-color:rgb(47,352,0)&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each context contains an it block with an expectation. These expectations use expect statements to verify that the HTML generated by the method meets certain criteria, such as containing specific table row elements with appropriate background colors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== get_topic_bg_color_review_bids ===&lt;br /&gt;
&lt;br /&gt;
==== Objective ====&lt;br /&gt;
Verify that the method get_topic_bg_color_review_bids correctly calculates the background color for a topic in the review_bids/show.html.erb template. This method determines the color from the proportion of bids on the topic relative to the number of participants.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a topic, a review_bid, and the number of participants (num_participants). Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When There Are No Review Bids: This context tests the behavior when no review bids are associated with the topic. It expects the method to return the default RGB color code ('rgb(47,352,0)'), which also confirms that the return value follows the rgb(r,g,b) pattern.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when there are no review bids' do&lt;br /&gt;
  it 'returns default RGB color code' do&lt;br /&gt;
    allow(ReviewBid).to receive(:where).with(signuptopic_id: topic.id).and_return([])&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_topic_bg_color_review_bids(topic, num_participants)).to eq('rgb(47,352,0)')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use expect statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
== Coverage Results ==&lt;br /&gt;
=== Questionnaire_helper ===&lt;br /&gt;
* Previous coverage: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb&lt;br /&gt;
* Current coverage:&lt;br /&gt;
&lt;br /&gt;
[[File:Questionnaire helper spec coverage.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Review bids helper ===&lt;br /&gt;
* Previous coverage: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb&lt;br /&gt;
* Current coverage: &lt;br /&gt;
&lt;br /&gt;
[[File:Review bid helper spec coverage.jpg]]&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
This document outlines a thorough plan to enhance testing and code coverage for the questionnaire_helper and review_bids_helper files in Expertiza. With defined objectives, including developing detailed test plans and scenarios, the project aims to address current code coverage gaps. Notably, significant improvements were achieved in the review_bids_helper.rb file, with comprehensive test plans substantially increasing code coverage. Conversely, the questionnaire_helper.rb file saw marginal improvements, primarily due to existing full coverage in the update_questionnaire_questions method. Moving forward, the project will focus on implementing the outlined test plans, ensuring comprehensive testing and reliability for critical functionality across both helper files.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
Link to testing video: [https://youtu.be/jDfmPUgDDXA here]&lt;br /&gt;
Link to pull request: [https://github.com/expertiza/expertiza/pull/2799 here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=156785</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=156785"/>
		<updated>2024-04-24T03:46:55Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Objective 1: Develop code testing scenarios for questionnaire_helper */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
Our project involves writing test cases for the `questionnaire_helper` and `review_bids_helper` files in Expertiza, an open-source assignment/project management portal built on the Ruby on Rails framework. This platform facilitates collaborative learning and feedback among both instructors and students. Instructors can create and customize assignments, define topics for student sign-ups, and manage teams for various projects. Students, on the other hand, can sign up for topics, form teams, and participate in peer reviews to enhance each other's learning experiences. Our goal is to develop comprehensive test plans and increase code coverage for these helper files to ensure their reliability and effectiveness in the Expertiza platform.&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods that assist with managing questionnaires in Expertiza: adjusting the number of advice records per question, updating questionnaire questions from form data, and creating questionnaire instances of the appropriate type. It also defines constants that support these tasks. Let's break down the module, its constants, and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
   &lt;br /&gt;
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
&lt;br /&gt;
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
&lt;br /&gt;
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
This Ruby module, `ReviewBidsHelper`, serves as a helper for views related to reviewing bids in Expertiza. It provides methods for rendering topic rows and for determining a topic's background color based on its bid status and the number of participants. These methods are used in the views associated with reviewing bids. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates the HTML markup for a row in the table that displays topics available for review bids.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on the conditions (whether the topic is selected and not waitlisted, or selected and waitlisted), it generates a specific row HTML.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios for questionnaire_helper ==&lt;br /&gt;
* Single Responsibility Principle (SRP):&lt;br /&gt;
Each method within the QuestionnaireHelper class appears to adhere to the SRP by focusing on a single task or responsibility. For example, the adjust_advice_size method is responsible for adjusting the size of advice based on questionnaire scores, while the questionnaire_factory method is responsible for creating instances of questionnaire types based on the given type parameter. This adherence ensures that each method has a clear and distinct purpose, promoting code maintainability and readability.&lt;br /&gt;
&lt;br /&gt;
* Open/Closed Principle (OCP):&lt;br /&gt;
While not explicitly evident in the provided snippets, the design allows for extension without modification, which aligns with the OCP. For instance, new types of questionnaires can be added without altering existing code by simply extending the questionnaire_factory method to accommodate the new types.&lt;br /&gt;
&lt;br /&gt;
* Dependency Inversion Principle (DIP):&lt;br /&gt;
The methods in the QuestionnaireHelper module accept their collaborators (e.g., questionnaire, scored_question) as parameters rather than constructing them internally, a form of dependency injection that supports the DIP. Receiving dependencies from external sources makes these methods more flexible and easier to test.&lt;br /&gt;
&lt;br /&gt;
* Factory Method Pattern:&lt;br /&gt;
The questionnaire_factory method can be seen as exhibiting characteristics of the factory method pattern. It dynamically creates instances of different questionnaire types based on the given type parameter, promoting flexibility and extensibility.&lt;br /&gt;
&lt;br /&gt;
After reviewing the QuestionnaireHelper module, we identified three methods that require testing:&lt;br /&gt;
&lt;br /&gt;
* adjust_advice_size&lt;br /&gt;
* questionnaire_factory&lt;br /&gt;
* update_questionnaire_questions&lt;br /&gt;
&lt;br /&gt;
=== adjust_advice_size ===&lt;br /&gt;
==== Method Description ====&lt;br /&gt;
The .adjust_advice_size method adjusts the number of advice records associated with a question so that each score in the questionnaire's score range has exactly one advice entry.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a questionnaire, a scored_question, a non_scored_question, and a question_advice.&lt;br /&gt;
Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Question is a ScoredQuestion: This context tests the behavior when the question is a scored question. It sets up expectations related to the adjustment of advice size based on the questionnaire scores.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when question is a ScoredQuestion' do&lt;br /&gt;
  it 'adjusts advice size based on questionnaire scores' do&lt;br /&gt;
    allow(QuestionAdvice).to receive(:where).and_return([])&lt;br /&gt;
    allow(QuestionAdvice).to receive(:new).and_return(double('QuestionAdvice', save: true))&lt;br /&gt;
    allow(scored_question).to receive(:is_a?).with(ScoredQuestion).and_return(true)&lt;br /&gt;
    described_class.adjust_advice_size(questionnaire, scored_question)&lt;br /&gt;
    expect(QuestionAdvice).to have_received(:where).exactly(10).times&lt;br /&gt;
    expect(scored_question.question_advices.size).to eq(10)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Question is Not a ScoredQuestion: This context tests the behavior when the question is not a scored question. It verifies that in this case, the advice size is not adjusted.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when question is not a ScoredQuestion' do&lt;br /&gt;
  it 'does not adjust advice size' do&lt;br /&gt;
    allow(QuestionAdvice).to receive(:where)&lt;br /&gt;
    allow(QuestionAdvice).to receive(:new)&lt;br /&gt;
    allow(scored_question).to receive(:is_a?).with(ScoredQuestion).and_return(false)&lt;br /&gt;
    described_class.adjust_advice_size(questionnaire, non_scored_question)&lt;br /&gt;
    expect(QuestionAdvice).not_to have_received(:where)&lt;br /&gt;
    expect(QuestionAdvice).not_to have_received(:new)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block that stubs collaborators with allow statements and then verifies the resulting behavior with expect and have_received matchers.&lt;br /&gt;
&lt;br /&gt;
=== questionnaire_factory ===&lt;br /&gt;
&lt;br /&gt;
==== Method Description ====&lt;br /&gt;
The questionnaire_factory method is a factory that instantiates the appropriate questionnaire class for a given type string, looking the class up in the QUESTIONNAIRE_MAP constant and reporting an error for unknown types.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
This method needs little scaffolding: the specs call the helper directly with a questionnaire type string and, for the error case, inspect the flash hash. No model mocks are required.&lt;br /&gt;
&lt;br /&gt;
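As a sketch of how helper-module methods can be exercised at all (DemoHelper below is a hypothetical stand-in, not the real QuestionnaireHelper), mixing the module into a bare class lets its methods be called and asserted on without booting Rails:&lt;br /&gt;

```ruby
# DemoHelper is a hypothetical stand-in for QuestionnaireHelper; the point
# is the pattern: include the helper module into an anonymous class and
# call its methods directly.
module DemoHelper
  def label_for(type)
    "#{type} questionnaire"
  end
end

harness = Class.new { include DemoHelper }.new
puts harness.label_for('Review')  # prints Review questionnaire
```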
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Given a Valid Type: This context verifies the behavior when a valid questionnaire type is provided. It expects that calling questionnaire_factory with a valid type results in an instance of the specified questionnaire type (ReviewQuestionnaire).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when given a valid type' do&lt;br /&gt;
  it 'returns an instance of the specified questionnaire type' do&lt;br /&gt;
    questionnaire_type = 'ReviewQuestionnaire'&lt;br /&gt;
    expect(helper.questionnaire_factory(questionnaire_type)).to be_an_instance_of(ReviewQuestionnaire)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Given an Invalid Type: This context tests how the method handles an invalid questionnaire type. It expects that calling questionnaire_factory with an invalid type sets an error flash message indicating that the questionnaire type is undefined.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when given an invalid type' do&lt;br /&gt;
  it 'sets an error flash message' do&lt;br /&gt;
    questionnaire_type = 'UnknownQuestionnaire'&lt;br /&gt;
    expect { helper.questionnaire_factory(questionnaire_type) }.to change { flash[:error] }.from(nil).to('Error: Undefined Questionnaire')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use expect and change statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
=== update_questionnaire_questions ===&lt;br /&gt;
We did not implement test cases for this method because it was already fully covered by a previous team last semester.&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Develop code testing scenarios for review_bids_helper ==&lt;br /&gt;
* Single Responsibility Principle (SRP):&lt;br /&gt;
Each method within the ReviewBidsHelper module adheres to the SRP: get_intelligent_topic_row_review_bids is responsible for generating HTML code for topic rows, while get_topic_bg_color_review_bids determines the background color for a topic. This separation gives each method a clear, distinct purpose, enhancing maintainability and readability.&lt;br /&gt;
&lt;br /&gt;
* Factory Method Pattern:&lt;br /&gt;
Although not explicitly labeled as a factory method, get_intelligent_topic_row_review_bids can be interpreted as following a similar pattern. It dynamically generates HTML code based on different scenarios, akin to a factory producing instances of objects. This promotes flexibility and extensibility in generating HTML representations of topics.&lt;br /&gt;
&lt;br /&gt;
After reviewing the ReviewBidsHelper module, we identified two methods that require testing:&lt;br /&gt;
&lt;br /&gt;
=== get_intelligent_topic_row_review_bids ===&lt;br /&gt;
==== Objective ====&lt;br /&gt;
Verify that get_intelligent_topic_row_review_bids correctly renders the topic row for the topics table in review_bids/show.html.erb, producing the appropriate HTML for each topic's selection and waitlist status.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a topic, a selected_topic, the number of participants (num_participants), and a review_bid.&lt;br /&gt;
Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system. For example, selected_topic is set with a topic ID of 1 and is not waitlisted initially.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Selected Topics are Present: This context tests the behavior of the method when there are selected topics. It sets up a selected topic that is not waitlisted and expects the generated HTML to include a table row with a yellow background.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are present' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    selected_topic = instance_double('SelectedTopic')&lt;br /&gt;
    allow(selected_topic).to receive(:topic_id).and_return(1)&lt;br /&gt;
    allow(selected_topic).to receive(:is_waitlisted).and_return(false)&lt;br /&gt;
    selected_topics = [selected_topic]&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr bgcolor=&amp;quot;yellow&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topic is Waitlisted: This context tests the behavior when the selected topic is waitlisted. It sets up a selected topic that is waitlisted and expects the generated HTML to include a table row with a light gray background.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topic is waitlisted' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    selected_topic = instance_double('SelectedTopic')&lt;br /&gt;
    allow(selected_topic).to receive(:topic_id).and_return(1)&lt;br /&gt;
    allow(selected_topic).to receive(:is_waitlisted).and_return(true)&lt;br /&gt;
    selected_topics = [selected_topic]&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr bgcolor=&amp;quot;lightgray&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topics are Not Present: This context checks what happens when there are no selected topics. It mocks a method (get_topic_bg_color_review_bids) to return a specific background color and expects the generated HTML to include a table row with that background color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are not present' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    allow(helper).to receive(:get_topic_bg_color_review_bids).and_return('rgb(255,255,255)')&lt;br /&gt;
    selected_topics = []&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr id=&amp;quot;topic_1&amp;quot; style=&amp;quot;background-color:rgb(255,255,255)&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topics are Nil: This context tests how the method handles cases where the selected topics parameter is nil. It expects the generated HTML to include a table row with a specific background color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are nil' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    selected_topics = nil&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr id=&amp;quot;topic_1&amp;quot; style=&amp;quot;background-color:rgb(47,352,0)&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each context contains an it block with an expectation. These expectations use expect statements to verify that the HTML generated by the method meets certain criteria, such as containing specific table row elements with appropriate background colors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== get_topic_bg_color_review_bids ===&lt;br /&gt;
&lt;br /&gt;
==== Objective ====&lt;br /&gt;
Verify that get_topic_bg_color_review_bids correctly calculates the background color for a topic in the review_bids/show.html.erb template, based on the number of bids for the topic relative to the number of participants.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a topic, a review_bid, and the number of participants (num_participants). Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* Returns RGB Color Code: This test checks that the method returns a color in RGB format. It expects the returned value to match the pattern rgb(\d+,\d+,\d+).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when calculating the background color' do&lt;br /&gt;
  it 'returns an RGB color code' do&lt;br /&gt;
    expect(helper.get_topic_bg_color_review_bids(topic, num_participants)).to match(/rgb\(\d+,\d+,\d+\)/)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When There Are No Review Bids: This context tests the behavior when there are no review bids associated with the topic. It expects the method to return the default color code rgb(47,352,0), likely indicating a green color (note that a green channel of 352 exceeds the usual 0-255 range, a quirk of the helper's formula).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when there are no review bids' do&lt;br /&gt;
  it 'returns default RGB color code' do&lt;br /&gt;
    allow(ReviewBid).to receive(:where).with(signuptopic_id: topic.id).and_return([])&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_topic_bg_color_review_bids(topic, num_participants)).to eq('rgb(47,352,0)')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use expect statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
== Coverage Results ==&lt;br /&gt;
=== Questionnaire_helper ===&lt;br /&gt;
* Previous coverage: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb&lt;br /&gt;
* Current coverage:&lt;br /&gt;
&lt;br /&gt;
[[File:Questionnaire helper spec coverage.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Review bids helper ===&lt;br /&gt;
* Previous coverage: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb&lt;br /&gt;
* Current coverage: &lt;br /&gt;
&lt;br /&gt;
[[File:Review bid helper spec coverage.jpg]]&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
This document describes our work to enhance testing and code coverage for the questionnaire_helper and review_bids_helper files in Expertiza. The review_bids_helper.rb file saw the largest gain, with comprehensive test plans substantially increasing its code coverage. The questionnaire_helper.rb file improved only marginally, primarily because its update_questionnaire_questions method was already fully covered. Together, these tests make the critical functionality in both helper files more reliable and easier to maintain.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
Link to Testing video: [https://drive.google.com/file/d/1qkfiUrc_NGDQWj7bTXBXNyIqyJAKQO2S/view?usp=sharing here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=156783</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=156783"/>
		<updated>2024-04-24T03:45:54Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Objective 2: Develop code testing scenarios for review_bids_helper */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
Our project involves writing test cases for the `questionnaire_helper` and `review_bids_helper` files in Expertiza, an open-source assignment/project management portal built on the Ruby on Rails framework. This platform facilitates collaborative learning and feedback among both instructors and students. Instructors can create and customize assignments, define topics for student sign-ups, and manage teams for various projects. Students, on the other hand, can sign up for topics, form teams, and participate in peer reviews to enhance each other's learning experiences. Our goal is to develop comprehensive test plans and increase code coverage for these helper files to ensure their reliability and effectiveness in the Expertiza platform.&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods to assist with managing questionnaires in Expertiza. It provides methods for adjusting advice size, updating questionnaire questions, and creating questionnaire instances by type, along with constants that support these tasks. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
&lt;br /&gt;
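A hypothetical sketch of how such index constants are typically used when parsing a CSV row (the constant values here are assumptions for illustration, not Expertiza's actual definitions):&lt;br /&gt;

```ruby
require 'csv'

# Assumed column positions for illustration only.
CSV_QUESTION = 0
CSV_TYPE     = 1
CSV_PARAM    = 2
CSV_WEIGHT   = 3

# Named indices make row access self-documenting.
row = CSV.parse_line('Explain your design,Criterion,txt,1')
puts row[CSV_QUESTION]  # prints Explain your design
puts row[CSV_WEIGHT]    # prints 1
```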
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
   &lt;br /&gt;
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
&lt;br /&gt;
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
&lt;br /&gt;
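The trimming behavior of method 1 (adjust_advice_size) and the change-only updates of method 2 (update_questionnaire_questions) can be sketched in plain Ruby, with arrays and hashes standing in for QuestionAdvice records and form params. This is an illustrative approximation under those assumptions, not the actual implementation:&lt;br /&gt;

```ruby
# Sketch of method 1: keep exactly one advice entry per score in range.
# advice_scores is a bare array of score numbers standing in for records.
def adjust_advice_sizes(advice_scores, min_score, max_score)
  # Drop advice outside the score range, then deduplicate.
  kept = advice_scores.select { |s| s.between?(min_score, max_score) }.uniq
  # Array#| adds any score in the range that has no advice yet.
  (min_score..max_score).to_a | kept
end

# Sketch of method 2: merge only the attributes whose values changed.
def apply_changes(question, new_attrs)
  changed = new_attrs.reject { |attr, value| question[attr] == value }
  question.merge(changed)
end

puts adjust_advice_sizes([0, 2, 2, 7], 1, 5).inspect  # prints [1, 2, 3, 4, 5]
puts apply_changes({ txt: 'Old prompt', weight: 1 }, { txt: 'New prompt', weight: 1 })[:txt]
# prints New prompt
```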
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
This Ruby module, `ReviewBidsHelper`, serves as a helper for views related to reviewing bids in Expertiza. It provides methods for rendering topic rows and for determining a topic's background color based on its bid status and the number of participants. These methods are used in the views associated with reviewing bids. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates the HTML markup for a row in the table that displays topics for review bids.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on the conditions (whether the topic is selected and not waitlisted, or selected and waitlisted), it generates a specific row HTML.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
&lt;br /&gt;
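The branching of method 1 and the ratio-to-color idea of method 2 can be sketched in plain Ruby. The color names match the specs later on this page, but the interpolation formula below is an assumption for illustration; the real helper emits full table-row markup and uses a different formula (it yields rgb(47,352,0) when there are no bids):&lt;br /&gt;

```ruby
# Sketch of method 1's branching: which background a topic row gets.
def row_color(selected, waitlisted)
  return 'yellow'    if selected && !waitlisted  # selected, not waitlisted
  return 'lightgray' if selected && waitlisted   # selected and waitlisted
  'default'                                      # not selected at all
end

# Sketch of method 2's idea: map the bid/participant ratio onto an RGB
# string (assumed linear green-to-red scale, for illustration only).
def bg_color(num_bids, num_participants)
  ratio = num_participants.zero? ? 0.0 : num_bids.to_f / num_participants
  format('rgb(%d,%d,0)', (255 * ratio).round, (255 * (1 - ratio)).round)
end

puts row_color(true, false)  # prints yellow
puts bg_color(0, 4)          # prints rgb(0,255,0)
puts bg_color(4, 4)          # prints rgb(255,0,0)
```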
== Objective 1: Develop code testing scenarios for questionnaire_helper ==&lt;br /&gt;
After reviewing the QuestionnaireHelper module, we identified three methods that require testing:&lt;br /&gt;
&lt;br /&gt;
* adjust_advice_size&lt;br /&gt;
* questionnaire_factory&lt;br /&gt;
* update_questionnaire_questions&lt;br /&gt;
&lt;br /&gt;
=== adjust_advice_size ===&lt;br /&gt;
==== Method Description ====&lt;br /&gt;
The adjust_advice_size method keeps a scored question's advice records in sync with its score range: it removes advice outside the range, ensures each score has an associated advice, and deletes duplicates.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a questionnaire, a scored_question, a non_scored_question, and a question_advice.&lt;br /&gt;
Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Question is a ScoredQuestion: This context tests the behavior when the question is a scored question. It sets up expectations related to the adjustment of advice size based on the questionnaire scores.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when question is a ScoredQuestion' do&lt;br /&gt;
  it 'adjusts advice size based on questionnaire scores' do&lt;br /&gt;
    allow(QuestionAdvice).to receive(:where).and_return([])&lt;br /&gt;
    allow(QuestionAdvice).to receive(:new).and_return(double('QuestionAdvice', save: true))&lt;br /&gt;
    allow(scored_question).to receive(:is_a?).with(ScoredQuestion).and_return(true)&lt;br /&gt;
    described_class.adjust_advice_size(questionnaire, scored_question)&lt;br /&gt;
    expect(QuestionAdvice).to have_received(:where).exactly(10).times&lt;br /&gt;
    expect(scored_question.question_advices.size).to eq(10)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Question is Not a ScoredQuestion: This context tests the behavior when the question is not a scored question. It verifies that in this case, the advice size is not adjusted.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when question is not a ScoredQuestion' do&lt;br /&gt;
  it 'does not adjust advice size' do&lt;br /&gt;
    allow(QuestionAdvice).to receive(:where)&lt;br /&gt;
    allow(QuestionAdvice).to receive(:new)&lt;br /&gt;
    allow(scored_question).to receive(:is_a?).with(ScoredQuestion).and_return(false)&lt;br /&gt;
    described_class.adjust_advice_size(questionnaire, non_scored_question)&lt;br /&gt;
    expect(QuestionAdvice).not_to have_received(:where)&lt;br /&gt;
    expect(QuestionAdvice).not_to have_received(:new)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block that stubs collaborators with allow statements and then verifies the resulting behavior with expect and have_received matchers.&lt;br /&gt;
&lt;br /&gt;
=== questionnaire_factory ===&lt;br /&gt;
&lt;br /&gt;
==== Method Description ====&lt;br /&gt;
The questionnaire_factory method is a factory that instantiates the appropriate questionnaire class for a given type string, looking the class up in the QUESTIONNAIRE_MAP constant and reporting an error for unknown types.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
This method needs little scaffolding: the specs call the helper directly with a questionnaire type string and, for the error case, inspect the flash hash. No model mocks are required.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Given a Valid Type: This context verifies the behavior when a valid questionnaire type is provided. It expects that calling questionnaire_factory with a valid type results in an instance of the specified questionnaire type (ReviewQuestionnaire).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when given a valid type' do&lt;br /&gt;
  it 'returns an instance of the specified questionnaire type' do&lt;br /&gt;
    questionnaire_type = 'ReviewQuestionnaire'&lt;br /&gt;
    expect(helper.questionnaire_factory(questionnaire_type)).to be_an_instance_of(ReviewQuestionnaire)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Given an Invalid Type: This context tests how the method handles an invalid questionnaire type. It expects that calling questionnaire_factory with an invalid type sets an error flash message indicating that the questionnaire type is undefined.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when given an invalid type' do&lt;br /&gt;
  it 'sets an error flash message' do&lt;br /&gt;
    questionnaire_type = 'UnknownQuestionnaire'&lt;br /&gt;
    expect { helper.questionnaire_factory(questionnaire_type) }.to change { flash[:error] }.from(nil).to('Error: Undefined Questionnaire')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use expect and change statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
=== update_questionnaire_questions ===&lt;br /&gt;
We did not implement test cases for this method because it was already fully covered by a previous team last semester.&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Develop code testing scenarios for review_bids_helper ==&lt;br /&gt;
* Single Responsibility Principle (SRP):&lt;br /&gt;
Each method within the ReviewBidsHelper module adheres to the SRP: get_intelligent_topic_row_review_bids is responsible for generating HTML code for topic rows, while get_topic_bg_color_review_bids determines the background color for a topic. This separation gives each method a clear, distinct purpose, enhancing maintainability and readability.&lt;br /&gt;
&lt;br /&gt;
* Factory Method Pattern:&lt;br /&gt;
Although not explicitly labeled as a factory method, get_intelligent_topic_row_review_bids can be interpreted as following a similar pattern. It dynamically generates HTML code based on different scenarios, akin to a factory producing instances of objects. This promotes flexibility and extensibility in generating HTML representations of topics.&lt;br /&gt;
&lt;br /&gt;
After reviewing the ReviewBidsHelper module, we identified two methods that require testing:&lt;br /&gt;
&lt;br /&gt;
=== get_intelligent_topic_row_review_bids ===&lt;br /&gt;
==== Objective ====&lt;br /&gt;
Verify that get_intelligent_topic_row_review_bids correctly renders the topic row for the topics table in review_bids/show.html.erb, producing the appropriate HTML for each topic's selection and waitlist status.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a topic, a selected_topic, the number of participants (num_participants), and a review_bid.&lt;br /&gt;
Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system. For example, selected_topic is set with a topic ID of 1 and is not waitlisted initially.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Selected Topics are Present: This context tests the behavior of the method when there are selected topics. It sets up a selected topic that is not waitlisted and expects the generated HTML to include a table row with a yellow background.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are present' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    selected_topic = instance_double('SelectedTopic')&lt;br /&gt;
    allow(selected_topic).to receive(:topic_id).and_return(1)&lt;br /&gt;
    allow(selected_topic).to receive(:is_waitlisted).and_return(false)&lt;br /&gt;
    selected_topics = [selected_topic]&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr bgcolor=&amp;quot;yellow&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topic is Waitlisted: This context tests the behavior when the selected topic is waitlisted. It sets up a selected topic that is waitlisted and expects the generated HTML to include a table row with a light gray background.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topic is waitlisted' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    selected_topic = instance_double('SelectedTopic')&lt;br /&gt;
    allow(selected_topic).to receive(:topic_id).and_return(1)&lt;br /&gt;
    allow(selected_topic).to receive(:is_waitlisted).and_return(true)&lt;br /&gt;
    selected_topics = [selected_topic]&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr bgcolor=&amp;quot;lightgray&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topics are Not Present: This context checks what happens when there are no selected topics. It mocks a method (get_topic_bg_color_review_bids) to return a specific background color and expects the generated HTML to include a table row with that background color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are not present' do&lt;br /&gt;
      it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
        allow(helper).to receive(:get_topic_bg_color_review_bids).and_return('rgb(255,255,255)')&lt;br /&gt;
        selected_topics = []&lt;br /&gt;
&lt;br /&gt;
        expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr id=&amp;quot;topic_1&amp;quot; style=&amp;quot;background-color:rgb(255,255,255)&amp;quot;&amp;gt;')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topics are Nil: This context tests how the method handles cases where the selected topics parameter is nil. It expects the generated HTML to include a table row with a specific background color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are nil' do&lt;br /&gt;
      it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
        selected_topics = nil&lt;br /&gt;
&lt;br /&gt;
        expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr id=&amp;quot;topic_1&amp;quot; style=&amp;quot;background-color:rgb(47,352,0)&amp;quot;&amp;gt;')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each context contains an it block with an expectation. These expectations use expect statements to verify that the HTML generated by the method meets certain criteria, such as containing specific table row elements with appropriate background colors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== get_topic_bg_color_review_bids ===&lt;br /&gt;
&lt;br /&gt;
==== Objective ====&lt;br /&gt;
Verify that the method get_topic_bg_color_review_bids correctly calculates the background color for a topic in the review_bids/show.html.erb template. The method derives the color from the topic's bid activity relative to the number of participants.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a topic, a review_bid, and the number of participants (num_participants). Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* Returns RGB Color Code: This test checks that the method returns an RGB color code for the topic background color. It expects the returned value to match the pattern rgb(\d+,\d+,\d+), confirming it is in the correct format.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 context 'when calculating the topic background color' do&lt;br /&gt;
      it 'returns an RGB color code' do&lt;br /&gt;
        expect(helper.get_topic_bg_color_review_bids(topic, num_participants)).to match(/rgb\(\d+,\d+,\d+\)/)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When There Are No Review Bids: This context tests the behavior when no review bids are associated with the topic. It expects the method to return the default color code rgb(47,352,0); because browsers clamp channel values to the 0-255 range, this renders as green.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 context 'when there are no review bids' do&lt;br /&gt;
      it 'returns default RGB color code' do&lt;br /&gt;
        allow(ReviewBid).to receive(:where).with(signuptopic_id: topic.id).and_return([])&lt;br /&gt;
&lt;br /&gt;
        expect(helper.get_topic_bg_color_review_bids(topic, num_participants)).to eq('rgb(47,352,0)')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
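To make the idea of a bid-proportion-based color concrete, here is a hypothetical standalone sketch; the real formula lives in review_bids_helper.rb and differs (it can emit out-of-range values such as rgb(47,352,0)), so the function below only illustrates the general approach of scaling color channels by how heavily a topic has been bid on.&lt;br /&gt;

```ruby
# Hypothetical sketch of a bid-proportion -> background-color mapping.
# NOT the actual Expertiza formula; for illustration only.
def topic_bg_color(num_bids, num_participants)
  ratio = num_participants.zero? ? 0.0 : num_bids.to_f / num_participants
  red   = (ratio * 255).round        # more bids  -> redder (busier topic)
  green = ((1 - ratio) * 255).round  # fewer bids -> greener (more available)
  "rgb(#{red},#{green},0)"
end

puts topic_bg_color(0, 10)   # rgb(0,255,0)  -- no bids, fully green
puts topic_bg_color(10, 10)  # rgb(255,0,0)  -- fully bid, red
```

A spec can then assert either an exact value for a known bid count or, more loosely, that the result matches the rgb(\d+,\d+,\d+) pattern, as the tests above do.&lt;br /&gt;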
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use expect statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
== Coverage Results ==&lt;br /&gt;
=== Questionnaire_helper ===&lt;br /&gt;
* Previous coverage: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb&lt;br /&gt;
* Current coverage:&lt;br /&gt;
&lt;br /&gt;
[[File:Questionnaire helper spec coverage.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Review bids helper ===&lt;br /&gt;
* Previous coverage: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb&lt;br /&gt;
* Current coverage: &lt;br /&gt;
&lt;br /&gt;
[[File:Review bid helper spec coverage.jpg]]&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
This document outlines a plan to enhance testing and code coverage for the questionnaire_helper and review_bids_helper files in Expertiza. With defined objectives, including developing detailed test plans and scenarios, the project addresses the current code coverage gaps. Coverage of review_bids_helper.rb improved substantially thanks to the comprehensive test plans. Improvements to questionnaire_helper.rb were marginal, primarily because its update_questionnaire_questions method already had full coverage. Moving forward, the project will focus on implementing the outlined test plans, ensuring comprehensive testing and reliability for critical functionality across both helper files.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
Link to Testing video: [https://drive.google.com/file/d/1qkfiUrc_NGDQWj7bTXBXNyIqyJAKQO2S/view?usp=sharing here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=156757</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=156757"/>
		<updated>2024-04-24T03:34:50Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Objective 1: Develop code testing scenarios for questionnaire_helper */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
Our project involves writing test cases for the `questionnaire_helper` and `review_bids_helper` files in Expertiza, an open-source assignment/project management portal built on the Ruby on Rails framework. This platform facilitates collaborative learning and feedback among both instructors and students. Instructors can create and customize assignments, define topics for student sign-ups, and manage teams for various projects. Students, on the other hand, can sign up for topics, form teams, and participate in peer reviews to enhance each other's learning experiences. Our goal is to develop comprehensive test plans and increase code coverage for these helper files to ensure their reliability and effectiveness in the Expertiza platform.&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods that assist with managing questionnaires in Expertiza. It provides methods for adjusting advice size, updating questionnaire questions, and creating questionnaire instances based on their type, and it defines constants to support these tasks. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
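The trim-and-fill logic above can be sketched in plain Ruby. This is a minimal illustration using hashes as hypothetical stand-ins for the ActiveRecord models; the real method operates on QuestionAdvice records.&lt;br /&gt;

```ruby
# Sketch of the adjust-advice-size idea, using plain hashes instead of
# ActiveRecord models. Assumes a questionnaire with min_question_score and
# max_question_score, and advice records keyed by score.
def adjust_advice_size(questionnaire, advices)
  range = (questionnaire[:min_question_score]..questionnaire[:max_question_score])
  # Drop advice outside the score range, then drop duplicate scores.
  kept = advices.select { |a| range.cover?(a[:score]) }.uniq { |a| a[:score] }
  # Ensure every score in the range has an associated advice record.
  range.each do |score|
    kept << { score: score, advice: '' } unless kept.any? { |a| a[:score] == score }
  end
  kept.sort_by { |a| a[:score] }
end
```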
   &lt;br /&gt;
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
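The compare-then-update step can be sketched as a small plain-Ruby helper, a hypothetical stand-in for the params-driven ActiveRecord update:&lt;br /&gt;

```ruby
# Sketch of "update only the attributes that changed": compare each submitted
# value with the current one and keep only the real changes.
def changed_attributes(current, submitted)
  submitted.select { |attr, new_value| current[attr] != new_value }
end
```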
&lt;br /&gt;
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
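A minimal sketch of this map-based factory, with a hypothetical two-entry map standing in for QUESTIONNAIRE_MAP (the real constant maps many more questionnaire types, and the real helper sets flash[:error] for unknown types):&lt;br /&gt;

```ruby
# Hypothetical questionnaire classes for illustration only.
class ReviewQuestionnaire; end
class SurveyQuestionnaire; end

# Maps type strings to questionnaire classes, mirroring QUESTIONNAIRE_MAP.
QUESTIONNAIRE_MAP = {
  'ReviewQuestionnaire' => ReviewQuestionnaire,
  'SurveyQuestionnaire' => SurveyQuestionnaire
}.freeze

def questionnaire_factory(type)
  klass = QUESTIONNAIRE_MAP[type]
  return nil unless klass # the real helper sets an error flash message here
  klass.new
end
```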
&lt;br /&gt;
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
This Ruby module, `ReviewBidsHelper`, serves as a helper for views related to reviewing bids in Expertiza. It provides methods for rendering topic rows and for determining a topic's background color based on its bid status and the number of participants. These methods are used in the views associated with review bidding. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates the HTML markup for a row in the table that displays topics available for review bids.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on the conditions (whether the topic is selected and not waitlisted, or selected and waitlisted), it generates a specific row HTML.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
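The branching described above can be sketched as a small selector. This is a hypothetical simplification of the helper's conditionals, not the helper itself:&lt;br /&gt;

```ruby
# Choose a row color from selection/waitlist state; nil means the caller
# should fall through to the computed background color for unselected topics.
def topic_row_color(selected, waitlisted)
  return 'yellow'    if selected && !waitlisted
  return 'lightgray' if selected && waitlisted
  nil
end
```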
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
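One way to sketch a ratio-to-color mapping like the one described (the exact formula Expertiza uses is not reproduced here; this only illustrates the bids-to-participants proportion driving the color):&lt;br /&gt;

```ruby
# Illustrative mapping of a bid/participant ratio to an RGB string:
# greener when a topic has few bids, redder as bids approach the
# number of participants. Not Expertiza's actual formula.
def topic_bg_color(num_bids, num_participants)
  ratio = num_participants.zero? ? 0.0 : [num_bids.to_f / num_participants, 1.0].min
  red   = (255 * ratio).round
  green = (255 * (1 - ratio)).round
  format('rgb(%d,%d,%d)', red, green, 0)
end
```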
&lt;br /&gt;
== Objective 1: Develop code testing scenarios for questionnaire_helper ==&lt;br /&gt;
After reviewing the QuestionnaireHelper class, we identified three methods in the class that require testing. These methods are as follows:&lt;br /&gt;
&lt;br /&gt;
* adjust_advice_size&lt;br /&gt;
* questionnaire_factory&lt;br /&gt;
* update_questionnaire_questions&lt;br /&gt;
&lt;br /&gt;
=== adjust_advice_size ===&lt;br /&gt;
==== Method Description ====&lt;br /&gt;
The adjust_advice_size method adjusts the advice records associated with a question so that each score in the questionnaire's score range has exactly one advice, deleting advice outside the range and any duplicates.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a questionnaire, a scored_question, a non_scored_question, and a question_advice.&lt;br /&gt;
Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Question is a ScoredQuestion: This context tests the behavior when the question is a scored question. It sets up expectations related to the adjustment of advice size based on the questionnaire scores.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when question is a ScoredQuestion' do&lt;br /&gt;
  it 'adjusts advice size based on questionnaire scores' do&lt;br /&gt;
    allow(QuestionAdvice).to receive(:where).and_return([])&lt;br /&gt;
    allow(QuestionAdvice).to receive(:new).and_return(double('QuestionAdvice', save: true))&lt;br /&gt;
    allow(scored_question).to receive(:is_a?).with(ScoredQuestion).and_return(true)&lt;br /&gt;
    described_class.adjust_advice_size(questionnaire, scored_question)&lt;br /&gt;
    expect(QuestionAdvice).to have_received(:where).exactly(10).times&lt;br /&gt;
    expect(scored_question.question_advices.size).to eq(10)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Question is Not a ScoredQuestion: This context tests the behavior when the question is not a scored question. It verifies that in this case, the advice size is not adjusted.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when question is not a ScoredQuestion' do&lt;br /&gt;
  it 'does not adjust advice size' do&lt;br /&gt;
    allow(QuestionAdvice).to receive(:where)&lt;br /&gt;
    allow(QuestionAdvice).to receive(:new)&lt;br /&gt;
    allow(non_scored_question).to receive(:is_a?).with(ScoredQuestion).and_return(false)&lt;br /&gt;
    described_class.adjust_advice_size(questionnaire, non_scored_question)&lt;br /&gt;
    expect(QuestionAdvice).not_to have_received(:where)&lt;br /&gt;
    expect(QuestionAdvice).not_to have_received(:new)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use allow and expect statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
=== questionnaire_factory ===&lt;br /&gt;
&lt;br /&gt;
==== Method Description ====&lt;br /&gt;
The questionnaire_factory method is a factory that creates an instance of the appropriate questionnaire class for the given type parameter.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
The tests call the helper directly with a questionnaire type string; the flash hash is used to capture and verify the error message produced for invalid types.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Given a Valid Type: This context verifies the behavior when a valid questionnaire type is provided. It expects that calling questionnaire_factory with a valid type results in an instance of the specified questionnaire type (ReviewQuestionnaire).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when given a valid type' do&lt;br /&gt;
  it 'returns an instance of the specified questionnaire type' do&lt;br /&gt;
    questionnaire_type = 'ReviewQuestionnaire'&lt;br /&gt;
    expect(helper.questionnaire_factory(questionnaire_type)).to be_an_instance_of(ReviewQuestionnaire)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Given an Invalid Type: This context tests how the method handles an invalid questionnaire type. It expects that calling questionnaire_factory with an invalid type sets an error flash message indicating that the questionnaire type is undefined.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when given an invalid type' do&lt;br /&gt;
  it 'sets an error flash message' do&lt;br /&gt;
    questionnaire_type = 'UnknownQuestionnaire'&lt;br /&gt;
    expect { helper.questionnaire_factory(questionnaire_type) }.to change { flash[:error] }.from(nil).to('Error: Undefined Questionnaire')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use expect and change statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
=== update_questionnaire_questions ===&lt;br /&gt;
We did not implement test cases for this method because it was already fully covered last semester by a previous team.&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Develop code testing scenarios for review_bids_helper ==&lt;br /&gt;
After reviewing the ReviewBidsHelper module, we identified two methods that require testing:&lt;br /&gt;
&lt;br /&gt;
=== get_intelligent_topic_row_review_bids ===&lt;br /&gt;
==== Objective ====&lt;br /&gt;
Verify that get_intelligent_topic_row_review_bids correctly renders the topic rows for the topics table in review_bids/show.html.erb, generating the appropriate HTML for each topic based on its selection and waitlist status.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a topic, a selected_topic, the number of participants (num_participants), and a review_bid.&lt;br /&gt;
Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system. For example, selected_topic is set with a topic ID of 1 and is not waitlisted initially.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Selected Topics are Present: This context tests the behavior of the method when there are selected topics. It sets up a selected topic that is not waitlisted and expects the generated HTML to include a table row with a yellow background.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are present' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    selected_topic = instance_double('SelectedTopic')&lt;br /&gt;
    allow(selected_topic).to receive(:topic_id).and_return(1)&lt;br /&gt;
    allow(selected_topic).to receive(:is_waitlisted).and_return(false)&lt;br /&gt;
    selected_topics = [selected_topic]&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr bgcolor=&amp;quot;yellow&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topic is Waitlisted: This context tests the behavior when the selected topic is waitlisted. It sets up a selected topic that is waitlisted and expects the generated HTML to include a table row with a light gray background.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topic is waitlisted' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    selected_topic = instance_double('SelectedTopic')&lt;br /&gt;
    allow(selected_topic).to receive(:topic_id).and_return(1)&lt;br /&gt;
    allow(selected_topic).to receive(:is_waitlisted).and_return(true)&lt;br /&gt;
    selected_topics = [selected_topic]&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr bgcolor=&amp;quot;lightgray&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topics are Not Present: This context checks what happens when there are no selected topics. It mocks a method (get_topic_bg_color_review_bids) to return a specific background color and expects the generated HTML to include a table row with that background color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are not present' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    allow(helper).to receive(:get_topic_bg_color_review_bids).and_return('rgb(255,255,255)')&lt;br /&gt;
    selected_topics = []&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr id=&amp;quot;topic_1&amp;quot; style=&amp;quot;background-color:rgb(255,255,255)&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topics are Nil: This context tests how the method handles cases where the selected topics parameter is nil. It expects the generated HTML to include a table row with a specific background color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are nil' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    selected_topics = nil&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr id=&amp;quot;topic_1&amp;quot; style=&amp;quot;background-color:rgb(47,352,0)&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each context contains an it block with an expectation. These expectations use expect statements to verify that the HTML generated by the method meets certain criteria, such as containing specific table row elements with appropriate background colors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== get_topic_bg_color_review_bids ===&lt;br /&gt;
&lt;br /&gt;
==== Objective ====&lt;br /&gt;
Verify that get_topic_bg_color_review_bids correctly calculates the background color for a topic in the review_bids/show.html.erb template, based on the number of bids relative to the number of participants.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a topic, a review_bid, and the number of participants (num_participants). Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When There Are No Review Bids: This context tests the behavior when no review bids are associated with the topic. It expects the method to return the default RGB color code (rgb(47,352,0)), indicating a green color, and thereby also confirms that the return value is in the rgb(r,g,b) format.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when there are no review bids' do&lt;br /&gt;
  it 'returns default RGB color code' do&lt;br /&gt;
    allow(ReviewBid).to receive(:where).with(signuptopic_id: topic.id).and_return([])&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_topic_bg_color_review_bids(topic, num_participants)).to eq('rgb(47,352,0)')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use expect statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
== Coverage Results ==&lt;br /&gt;
=== Questionnaire_helper ===&lt;br /&gt;
* Previous coverage: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb&lt;br /&gt;
* Current coverage:&lt;br /&gt;
&lt;br /&gt;
[[File:Questionnaire helper spec coverage.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Review bids helper ===&lt;br /&gt;
* Previous coverage: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb&lt;br /&gt;
* Current coverage: &lt;br /&gt;
&lt;br /&gt;
[[File:Review bid helper spec coverage.jpg]]&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
This document outlines a thorough plan to enhance testing and code coverage for the questionnaire_helper and review_bids_helper files in Expertiza. With defined objectives, including developing detailed test plans and scenarios, the project aims to address current code coverage gaps. Notably, significant improvements were achieved in the review_bids_helper.rb file, with comprehensive test plans substantially increasing code coverage. Conversely, the questionnaire_helper.rb file saw marginal improvements, primarily due to existing full coverage in the update_questionnaire_questions method. Moving forward, the project will focus on implementing the outlined test plans, ensuring comprehensive testing and reliability for critical functionality across both helper files.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=156746</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=156746"/>
		<updated>2024-04-24T03:29:09Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Coverage Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
Our project involves writing test cases for the `questionnaire_helper` and `review_bids_helper` files in Expertiza, an open-source assignment/project management portal built on the Ruby on Rails framework. This platform facilitates collaborative learning and feedback among both instructors and students. Instructors can create and customize assignments, define topics for student sign-ups, and manage teams for various projects. Students, on the other hand, can sign up for topics, form teams, and participate in peer reviews to enhance each other's learning experiences. Our goal is to develop comprehensive test plans and increase code coverage for these helper files to ensure their reliability and effectiveness in the Expertiza platform.&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods that assist with managing questionnaires in Expertiza. It provides methods for adjusting advice size, updating questionnaire questions, and creating questionnaire instances based on their type, and it defines constants to support these tasks. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
   &lt;br /&gt;
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
&lt;br /&gt;
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
&lt;br /&gt;
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
This Ruby module, `ReviewBidsHelper`, serves as a helper for views related to reviewing bids in Expertiza. It provides methods for rendering topic rows and for determining a topic's background color based on its bid status and the number of participants. These methods are used in the views associated with review bidding. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates the HTML markup for a row in the table that displays topics available for review bids.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on the conditions (whether the topic is selected and not waitlisted, or selected and waitlisted), it generates a specific row HTML.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios for questionnaire_helper ==&lt;br /&gt;
After reviewing the QuestionnaireHelper class, we identified three methods in the class that require testing. These methods are as follows:&lt;br /&gt;
&lt;br /&gt;
* adjust_advice_size&lt;br /&gt;
* questionnaire_factory&lt;br /&gt;
* update_questionnaire_questions&lt;br /&gt;
&lt;br /&gt;
=== adjust_advice_size ===&lt;br /&gt;
==== Method Description ====&lt;br /&gt;
The adjust_advice_size method adjusts the advice records associated with a question so that each score in the questionnaire's score range has exactly one advice, deleting advice outside the range and any duplicates.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a questionnaire, a scored_question, a non_scored_question, and a question_advice.&lt;br /&gt;
Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Question is a ScoredQuestion: This context tests the behavior when the question is a scored question. It sets up expectations related to the adjustment of advice size based on the questionnaire scores.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when question is a ScoredQuestion' do&lt;br /&gt;
  it 'adjusts advice size based on questionnaire scores' do&lt;br /&gt;
    allow(QuestionAdvice).to receive(:where).and_return([])&lt;br /&gt;
    allow(QuestionAdvice).to receive(:new).and_return(double('QuestionAdvice', save: true))&lt;br /&gt;
    allow(scored_question).to receive(:is_a?).with(ScoredQuestion).and_return(true)&lt;br /&gt;
    described_class.adjust_advice_size(questionnaire, scored_question)&lt;br /&gt;
    expect(QuestionAdvice).to have_received(:where).exactly(10).times&lt;br /&gt;
    expect(scored_question.question_advices.size).to eq(10)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Question is Not a ScoredQuestion: This context tests the behavior when the question is not a scored question. It verifies that in this case, the advice size is not adjusted.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when question is not a ScoredQuestion' do&lt;br /&gt;
  it 'does not adjust advice size' do&lt;br /&gt;
    allow(QuestionAdvice).to receive(:where)&lt;br /&gt;
    allow(QuestionAdvice).to receive(:new)&lt;br /&gt;
    allow(non_scored_question).to receive(:is_a?).with(ScoredQuestion).and_return(false)&lt;br /&gt;
    described_class.adjust_advice_size(questionnaire, non_scored_question)&lt;br /&gt;
    expect(QuestionAdvice).not_to have_received(:where)&lt;br /&gt;
    expect(QuestionAdvice).not_to have_received(:new)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use allow and expect statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
=== questionnaire_factory ===&lt;br /&gt;
&lt;br /&gt;
==== Method Description ====&lt;br /&gt;
The .questionnaire_factory method is a factory method that creates an instance of the appropriate questionnaire class for the given type parameter.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Little mocking is required for this method: the tests call helper.questionnaire_factory directly and inspect the returned object and flash[:error].&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Given a Valid Type: This context verifies the behavior when a valid questionnaire type is provided. It expects that calling questionnaire_factory with a valid type results in an instance of the specified questionnaire type (ReviewQuestionnaire).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when given a valid type' do&lt;br /&gt;
  it 'returns an instance of the specified questionnaire type' do&lt;br /&gt;
    questionnaire_type = 'ReviewQuestionnaire'&lt;br /&gt;
    expect(helper.questionnaire_factory(questionnaire_type)).to be_an_instance_of(ReviewQuestionnaire)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Given an Invalid Type: This context tests how the method handles an invalid questionnaire type. It expects that calling questionnaire_factory with an invalid type sets an error flash message indicating that the questionnaire type is undefined.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when given an invalid type' do&lt;br /&gt;
  it 'sets an error flash message' do&lt;br /&gt;
    questionnaire_type = 'UnknownQuestionnaire'&lt;br /&gt;
    expect { helper.questionnaire_factory(questionnaire_type) }.to change { flash[:error] }.from(nil).to('Error: Undefined Questionnaire')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use expect and change statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Develop code testing scenarios for review_bids_helper ==&lt;br /&gt;
After reviewing the ReviewBidsHelper module, we identified two methods that require testing:&lt;br /&gt;
&lt;br /&gt;
=== get_intelligent_topic_row_review_bids ===&lt;br /&gt;
==== Objective ====&lt;br /&gt;
Verify that get_intelligent_topic_row_review_bids correctly renders the topic row for the topics table in review_bids/show.html.erb. The method generates the HTML for each topic row, choosing the markup based on whether the topic has been selected and whether it is waitlisted.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a topic, a selected_topic, the number of participants (num_participants), and a review_bid.&lt;br /&gt;
Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system. For example, selected_topic is set with a topic ID of 1 and is not waitlisted initially.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Selected Topics are Present: This context tests the behavior of the method when there are selected topics. It sets up a selected topic that is not waitlisted and expects the generated HTML to include a table row with a yellow background.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are present' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    selected_topic = instance_double('SelectedTopic')&lt;br /&gt;
    allow(selected_topic).to receive(:topic_id).and_return(1)&lt;br /&gt;
    allow(selected_topic).to receive(:is_waitlisted).and_return(false)&lt;br /&gt;
    selected_topics = [selected_topic]&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr bgcolor=&amp;quot;yellow&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topic is Waitlisted: This context tests the behavior when the selected topic is waitlisted. It sets up a selected topic that is waitlisted and expects the generated HTML to include a table row with a light gray background.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topic is waitlisted' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    selected_topic = instance_double('SelectedTopic')&lt;br /&gt;
    allow(selected_topic).to receive(:topic_id).and_return(1)&lt;br /&gt;
    allow(selected_topic).to receive(:is_waitlisted).and_return(true)&lt;br /&gt;
    selected_topics = [selected_topic]&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr bgcolor=&amp;quot;lightgray&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topics are Not Present: This context checks what happens when there are no selected topics. It mocks a method (get_topic_bg_color_review_bids) to return a specific background color and expects the generated HTML to include a table row with that background color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are not present' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    allow(helper).to receive(:get_topic_bg_color_review_bids).and_return('rgb(255,255,255)')&lt;br /&gt;
    selected_topics = []&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr id=&amp;quot;topic_1&amp;quot; style=&amp;quot;background-color:rgb(255,255,255)&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topics are Nil: This context tests how the method handles cases where the selected topics parameter is nil. It expects the generated HTML to include a table row with a specific background color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are nil' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    selected_topics = nil&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr id=&amp;quot;topic_1&amp;quot; style=&amp;quot;background-color:rgb(47,352,0)&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each context contains an it block with an expectation. These expectations use expect statements to verify that the HTML generated by the method meets certain criteria, such as containing specific table row elements with appropriate background colors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== get_topic_bg_color_review_bids ===&lt;br /&gt;
&lt;br /&gt;
==== Objective ====&lt;br /&gt;
Verify that get_topic_bg_color_review_bids correctly calculates the background color for a topic in the review_bids/show.html.erb template. The method determines the color from the number of bids a topic has received relative to the number of participants.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a topic, a review_bid, and the number of participants (num_participants). Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* Returns RGB Color Code: This test checks if the method returns an RGB color code for the topic background color. It sets up an expectation that the returned color code matches the pattern rgb(\d+,\d+,\d+), indicating it's in the correct format.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
it 'returns an RGB color code string' do&lt;br /&gt;
  expect(helper.get_topic_bg_color_review_bids(topic, num_participants)).to match(/rgb\(\d+,\d+,\d+\)/)&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When There Are No Review Bids: This context tests the behavior when there are no review bids associated with the topic. It sets up an expectation that the method returns a default RGB color code (rgb(47,352,0)), likely indicating a green color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when there are no review bids' do&lt;br /&gt;
  it 'returns default RGB color code' do&lt;br /&gt;
    allow(ReviewBid).to receive(:where).with(signuptopic_id: topic.id).and_return([])&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_topic_bg_color_review_bids(topic, num_participants)).to eq('rgb(47,352,0)')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use expect statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
== Coverage Results ==&lt;br /&gt;
=== Questionnaire_helper ===&lt;br /&gt;
* Previous coverage: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb&lt;br /&gt;
* Current coverage:&lt;br /&gt;
&lt;br /&gt;
[[File:Questionnaire helper spec coverage.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Review bids helper ===&lt;br /&gt;
* Previous coverage: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb&lt;br /&gt;
* Current coverage: &lt;br /&gt;
&lt;br /&gt;
[[File:Review bid helper spec coverage.jpg]]&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
This document outlines a thorough plan to enhance testing and code coverage for the questionnaire_helper and review_bids_helper files in Expertiza. With defined objectives, including developing detailed test plans and scenarios, the project aims to address current code coverage gaps. Notably, significant improvements were achieved in the review_bids_helper.rb file, with comprehensive test plans substantially increasing code coverage. Conversely, the questionnaire_helper.rb file saw marginal improvements, primarily due to existing full coverage in the update_questionnaire_questions method. Moving forward, the project will focus on implementing the outlined test plans, ensuring comprehensive testing and reliability for critical functionality across both helper files.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to the Expertiza repository: [https://github.com/expertiza/expertiza Expertiza on GitHub]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=156735</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=156735"/>
		<updated>2024-04-24T03:27:24Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Questionnaire_helper */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
Our project involves writing test cases for the `questionnaire_helper` and `review_bids_helper` files in Expertiza, an open-source assignment/project management portal built on the Ruby on Rails framework. This platform facilitates collaborative learning and feedback among both instructors and students. Instructors can create and customize assignments, define topics for student sign-ups, and manage teams for various projects. Students, on the other hand, can sign up for topics, form teams, and participate in peer reviews to enhance each other's learning experiences. Our goal is to develop comprehensive test plans and increase code coverage for these helper files to ensure their reliability and effectiveness in the Expertiza platform.&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains methods for managing questionnaires in Expertiza: it adjusts the number of advice records for a question, updates questionnaire questions from form data, and creates questionnaire instances of the appropriate type. It also defines constants that support these tasks. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
   &lt;br /&gt;
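The delete-then-backfill flow above can be sketched in plain Ruby. This is an illustrative model, not the Expertiza implementation: the Advice struct and Question class below are hypothetical stand-ins for the ActiveRecord models.&lt;br /&gt;

```ruby
# Hypothetical stand-ins for the QuestionAdvice and Question models.
Advice = Struct.new(:score, :advice)

class Question
  attr_reader :advices

  def initialize(score_range, advices = [])
    @score_range = score_range
    @advices = advices
  end

  # Keep exactly one advice record per score in the range.
  def adjust_advice_size
    # 1. Drop advice whose score falls outside the range.
    @advices.select! { |a| @score_range.cover?(a.score) }
    # 2. Ensure every score in the range has an associated advice.
    @score_range.each do |s|
      @advices.push(Advice.new(s, '')) if @advices.none? { |a| a.score == s }
    end
    # 3. Remove duplicate records for the same score.
    @advices.uniq! { |a| a.score }
  end
end

q = Question.new(1..5, [Advice.new(0, 'out of range'), Advice.new(3, 'ok'), Advice.new(3, 'dup')])
q.adjust_advice_size
q.advices.map { |a| a.score }.sort  # => [1, 2, 3, 4, 5]
```

The real helper performs the same three steps against QuestionAdvice records in the database.&lt;br /&gt;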
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
&lt;br /&gt;
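The same update-only-when-changed flow can be sketched with plain hashes standing in for Rails params and ActiveRecord models; everything below is illustrative, not the helper's actual code.&lt;br /&gt;

```ruby
# questions: id => attribute hash (stand-in for persisted Question records).
# params: stand-in for the Rails params hash described above.
def update_questionnaire_questions(questions, params)
  return if params[:question].nil?
  params[:question].each do |id, attributes|
    question = questions[id]
    next unless question
    attributes.each do |name, value|
      # Only write attributes whose value actually changed.
      question[name] = value unless question[name] == value
    end
    # A real implementation would call question.save here.
  end
end

questions = { 1 => { txt: 'Old prompt', weight: 1 } }
params = { question: { 1 => { txt: 'New prompt', weight: 1 } } }
update_questionnaire_questions(questions, params)
questions[1][:txt]  # => "New prompt"
```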
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
&lt;br /&gt;
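A minimal sketch of this lookup-based factory, assuming an abbreviated map: the two classes and the QUESTIONNAIRE_MAP entries below are illustrative, not the full set defined in questionnaire_helper.rb.&lt;br /&gt;

```ruby
# Illustrative questionnaire classes.
class ReviewQuestionnaire; end
class AuthorFeedbackQuestionnaire; end

# Abbreviated stand-in for the helper's QUESTIONNAIRE_MAP constant.
QUESTIONNAIRE_MAP = {
  'ReviewQuestionnaire' => ReviewQuestionnaire,
  'AuthorFeedbackQuestionnaire' => AuthorFeedbackQuestionnaire
}.freeze

def questionnaire_factory(type)
  klass = QUESTIONNAIRE_MAP[type]
  if klass.nil?
    # In the helper this sets flash[:error] = 'Error: Undefined Questionnaire'.
    return nil
  end
  klass.new
end

questionnaire_factory('ReviewQuestionnaire').class  # => ReviewQuestionnaire
questionnaire_factory('UnknownQuestionnaire')       # => nil
```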
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
The `ReviewBidsHelper` module serves as a helper for views related to reviewing bids in Expertiza. It provides methods for rendering topic rows and for determining a topic's background color based on its bid status and the number of participants. These methods are used in the review_bids views. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates the HTML markup for a row in the table of topics available for review bids.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on the conditions (whether the topic is selected and not waitlisted, or selected and waitlisted), it generates a specific row HTML.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
&lt;br /&gt;
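The proportional color calculation can be illustrated as follows: the fuller a topic's bid list is relative to the number of participants, the warmer the color. The formula below is a hypothetical sketch of the shape of the computation, not the exact arithmetic in review_bids_helper.rb.&lt;br /&gt;

```ruby
# Map the bids-to-participants ratio onto a red/green RGB string.
def topic_bg_color(num_bids, num_participants)
  ratio = num_participants.zero? ? 0.0 : num_bids.to_f / num_participants
  ratio = 1.0 if ratio > 1.0
  red   = (255 * ratio).round
  green = (255 * (1.0 - ratio)).round
  "rgb(#{red},#{green},0)"
end

topic_bg_color(0, 10)   # => "rgb(0,255,0)"  (no bids: green)
topic_bg_color(10, 10)  # => "rgb(255,0,0)"  (fully bid: red)
```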
== Objective 1: Develop code testing scenarios for questionnaire_helper ==&lt;br /&gt;
After reviewing the QuestionnaireHelper class, we identified three methods in the class that require testing. These methods are as follows:&lt;br /&gt;
&lt;br /&gt;
* adjust_advice_size&lt;br /&gt;
* questionnaire_factory&lt;br /&gt;
* update_questionnaire_questions&lt;br /&gt;
&lt;br /&gt;
=== adjust_advice_size ===&lt;br /&gt;
==== Method Description ====&lt;br /&gt;
The .adjust_advice_size method keeps a question's advice records in step with the questionnaire's score range, so that each score in the range ends up with exactly one associated advice entry.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a questionnaire, a scored_question, a non_scored_question, and a question_advice.&lt;br /&gt;
Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Question is a ScoredQuestion: This context tests the behavior when the question is a scored question. It sets up expectations related to the adjustment of advice size based on the questionnaire scores.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when question is a ScoredQuestion' do&lt;br /&gt;
  it 'adjusts advice size based on questionnaire scores' do&lt;br /&gt;
    allow(QuestionAdvice).to receive(:where).and_return([])&lt;br /&gt;
    allow(QuestionAdvice).to receive(:new).and_return(double('QuestionAdvice', save: true))&lt;br /&gt;
    allow(scored_question).to receive(:is_a?).with(ScoredQuestion).and_return(true)&lt;br /&gt;
    described_class.adjust_advice_size(questionnaire, scored_question)&lt;br /&gt;
    expect(QuestionAdvice).to have_received(:where).exactly(10).times&lt;br /&gt;
    expect(scored_question.question_advices.size).to eq(10)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Question is Not a ScoredQuestion: This context tests the behavior when the question is not a scored question. It verifies that in this case, the advice size is not adjusted.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when question is not a ScoredQuestion' do&lt;br /&gt;
  it 'does not adjust advice size' do&lt;br /&gt;
    allow(QuestionAdvice).to receive(:where)&lt;br /&gt;
    allow(QuestionAdvice).to receive(:new)&lt;br /&gt;
    allow(non_scored_question).to receive(:is_a?).with(ScoredQuestion).and_return(false)&lt;br /&gt;
    described_class.adjust_advice_size(questionnaire, non_scored_question)&lt;br /&gt;
    expect(QuestionAdvice).not_to have_received(:where)&lt;br /&gt;
    expect(QuestionAdvice).not_to have_received(:new)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use allow and expect statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
=== questionnaire_factory ===&lt;br /&gt;
&lt;br /&gt;
==== Method Description ====&lt;br /&gt;
The .questionnaire_factory method is a factory method that creates an instance of the appropriate questionnaire class for the given type parameter.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Little mocking is required for this method: the tests call helper.questionnaire_factory directly and inspect the returned object and flash[:error].&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Given a Valid Type: This context verifies the behavior when a valid questionnaire type is provided. It expects that calling questionnaire_factory with a valid type results in an instance of the specified questionnaire type (ReviewQuestionnaire).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when given a valid type' do&lt;br /&gt;
  it 'returns an instance of the specified questionnaire type' do&lt;br /&gt;
    questionnaire_type = 'ReviewQuestionnaire'&lt;br /&gt;
    expect(helper.questionnaire_factory(questionnaire_type)).to be_an_instance_of(ReviewQuestionnaire)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Given an Invalid Type: This context tests how the method handles an invalid questionnaire type. It expects that calling questionnaire_factory with an invalid type sets an error flash message indicating that the questionnaire type is undefined.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when given an invalid type' do&lt;br /&gt;
  it 'sets an error flash message' do&lt;br /&gt;
    questionnaire_type = 'UnknownQuestionnaire'&lt;br /&gt;
    expect { helper.questionnaire_factory(questionnaire_type) }.to change { flash[:error] }.from(nil).to('Error: Undefined Questionnaire')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use expect and change statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Develop code testing scenarios for review_bids_helper ==&lt;br /&gt;
After reviewing the ReviewBidsHelper module, we identified two methods that require testing:&lt;br /&gt;
&lt;br /&gt;
=== get_intelligent_topic_row_review_bids ===&lt;br /&gt;
==== Objective ====&lt;br /&gt;
Verify that get_intelligent_topic_row_review_bids correctly renders the topic row for the topics table in review_bids/show.html.erb. The method generates the HTML for each topic row, choosing the markup based on whether the topic has been selected and whether it is waitlisted.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a topic, a selected_topic, the number of participants (num_participants), and a review_bid.&lt;br /&gt;
Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system. For example, selected_topic is set with a topic ID of 1 and is not waitlisted initially.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Selected Topics are Present: This context tests the behavior of the method when there are selected topics. It sets up a selected topic that is not waitlisted and expects the generated HTML to include a table row with a yellow background.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are present' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    selected_topic = instance_double('SelectedTopic')&lt;br /&gt;
    allow(selected_topic).to receive(:topic_id).and_return(1)&lt;br /&gt;
    allow(selected_topic).to receive(:is_waitlisted).and_return(false)&lt;br /&gt;
    selected_topics = [selected_topic]&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr bgcolor=&amp;quot;yellow&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topic is Waitlisted: This context tests the behavior when the selected topic is waitlisted. It sets up a selected topic that is waitlisted and expects the generated HTML to include a table row with a light gray background.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topic is waitlisted' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    selected_topic = instance_double('SelectedTopic')&lt;br /&gt;
    allow(selected_topic).to receive(:topic_id).and_return(1)&lt;br /&gt;
    allow(selected_topic).to receive(:is_waitlisted).and_return(true)&lt;br /&gt;
    selected_topics = [selected_topic]&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr bgcolor=&amp;quot;lightgray&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topics are Not Present: This context checks what happens when there are no selected topics. It mocks a method (get_topic_bg_color_review_bids) to return a specific background color and expects the generated HTML to include a table row with that background color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are not present' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    allow(helper).to receive(:get_topic_bg_color_review_bids).and_return('rgb(255,255,255)')&lt;br /&gt;
    selected_topics = []&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr id=&amp;quot;topic_1&amp;quot; style=&amp;quot;background-color:rgb(255,255,255)&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topics are Nil: This context tests how the method handles cases where the selected topics parameter is nil. It expects the generated HTML to include a table row with a specific background color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are nil' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    selected_topics = nil&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr id=&amp;quot;topic_1&amp;quot; style=&amp;quot;background-color:rgb(47,352,0)&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each context contains an it block with an expectation. These expectations use expect statements to verify that the HTML generated by the method meets certain criteria, such as containing specific table row elements with appropriate background colors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== get_topic_bg_color_review_bids ===&lt;br /&gt;
&lt;br /&gt;
==== Objective ====&lt;br /&gt;
Verify that get_topic_bg_color_review_bids correctly calculates the background color for a topic in the review_bids/show.html.erb template. The method determines the color from the number of bids a topic has received relative to the number of participants.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a topic, a review_bid, and the number of participants (num_participants). Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* Returns RGB Color Code: This test checks that the method returns an RGB color code for the topic background color. It sets up an expectation that the returned color code matches the pattern rgb(\d+,\d+,\d+), indicating it is in the correct format.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 context 'when calculating the topic background color' do&lt;br /&gt;
      it 'returns an RGB color code' do&lt;br /&gt;
        expect(helper.get_topic_bg_color_review_bids(topic, num_participants)).to match(/rgb\(\d+,\d+,\d+\)/)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When There Are No Review Bids: This context tests the behavior when no review bids are associated with the topic. It sets up an expectation that the method returns the default RGB color code, rgb(47,352,0). (Note that the green component exceeds the valid 0-255 range; browsers clamp it to 255, so the row renders as bright green.)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 context 'when there are no review bids' do&lt;br /&gt;
      it 'returns default RGB color code' do&lt;br /&gt;
        allow(ReviewBid).to receive(:where).with(signuptopic_id: topic.id).and_return([])&lt;br /&gt;
&lt;br /&gt;
        expect(helper.get_topic_bg_color_review_bids(topic, num_participants)).to eq('rgb(47,352,0)')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use expect statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
== Coverage Results ==&lt;br /&gt;
=== Questionnaire_helper ===&lt;br /&gt;
Previous coverage: &lt;br /&gt;
[[File:Questionnaire helper spec coverage.jpg]]&lt;br /&gt;
=== Review bids helper ===&lt;br /&gt;
Previous coverage:&lt;br /&gt;
[[File:Review bid helper spec coverage.jpg]]&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
This document outlines a thorough plan to enhance testing and code coverage for the questionnaire_helper and review_bids_helper files in Expertiza. With defined objectives, including developing detailed test plans and scenarios, the project aims to address current code coverage gaps. Notably, significant improvements were achieved in the review_bids_helper.rb file, with comprehensive test plans substantially increasing code coverage. Conversely, the questionnaire_helper.rb file saw marginal improvements, primarily due to existing full coverage in the update_questionnaire_questions method. Moving forward, the project will focus on implementing the outlined test plans, ensuring comprehensive testing and reliability for critical functionality across both helper files.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=156729</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=156729"/>
		<updated>2024-04-24T03:26:07Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Coverage Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
Our project involves writing test cases for the `questionnaire_helper` and `review_bids_helper` files in Expertiza, an open-source assignment/project management portal built on the Ruby on Rails framework. This platform facilitates collaborative learning and feedback among both instructors and students. Instructors can create and customize assignments, define topics for student sign-ups, and manage teams for various projects. Students, on the other hand, can sign up for topics, form teams, and participate in peer reviews to enhance each other's learning experiences. Our goal is to develop comprehensive test plans and increase code coverage for these helper files to ensure their reliability and effectiveness in the Expertiza platform.&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods that assist with managing questionnaires in Expertiza: adjusting advice size, updating questionnaire questions, and creating questionnaire instances based on their type. It also defines constants that support these tasks. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
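&lt;br /&gt;
As a rough illustration, the CSV constants can be pictured as named indices into a parsed row. This is a hedged sketch: the column order and the example row are assumptions, not the actual layout in Expertiza's questionnaire_helper.rb.&lt;br /&gt;

```ruby
# Hypothetical column layout for one row of questionnaire CSV data;
# the real order in questionnaire_helper.rb may differ.
CSV_QUESTION = 0
CSV_TYPE     = 1
CSV_PARAM    = 2
CSV_WEIGHT   = 3

# Example (made-up) row: question text, type, parameter, weight.
row = ['How clear is the writing?', 'Criterion', 'size:50', '1']

question_text = row[CSV_QUESTION]     # the question prompt
weight        = row[CSV_WEIGHT].to_i  # weight parsed as an integer
```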
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
   &lt;br /&gt;
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
&lt;br /&gt;
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
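&lt;br /&gt;
The factory flow described above can be sketched without Rails as follows. The two map entries and the flash hash are simplified stand-ins (the real QUESTIONNAIRE_MAP covers many more questionnaire types, and flash comes from the Rails controller), so treat this as an illustration of the pattern, not the exact Expertiza code.&lt;br /&gt;

```ruby
# Stand-in questionnaire classes so this sketch runs without Rails.
class ReviewQuestionnaire; end
class SurveyQuestionnaire; end

# Maps a questionnaire type string to the class to instantiate.
QUESTIONNAIRE_MAP = {
  'ReviewQuestionnaire' => ReviewQuestionnaire,
  'SurveyQuestionnaire' => SurveyQuestionnaire
}.freeze

def questionnaire_factory(type, flash = {})
  klass = QUESTIONNAIRE_MAP[type]
  if klass.nil?
    # Unknown type: record an error message and return nothing.
    flash[:error] = 'Error: Undefined Questionnaire'
    return nil
  end
  klass.new
end

flash = {}
questionnaire_factory('ReviewQuestionnaire', flash)  # returns a ReviewQuestionnaire
questionnaire_factory('UnknownQuestionnaire', flash) # sets flash[:error], returns nil
```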
&lt;br /&gt;
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
The `ReviewBidsHelper` module provides helper methods for views related to review bidding in Expertiza: it renders topic rows and determines the background color for topics based on their bid status and the number of participants. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates the HTML markup for a row in the table that displays topics available for review bids.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on the conditions (whether the topic is selected and not waitlisted, or selected and waitlisted), it generates a specific row HTML.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
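&lt;br /&gt;
The proportional color idea can be illustrated with the sketch below. This is not Expertiza's exact formula (the real helper can emit out-of-range values such as rgb(47,352,0)); it simply shows a bid fraction being mapped onto a red-to-green gradient.&lt;br /&gt;

```ruby
# Illustrative only: interpolate from red (no bids) to green (fully
# bid), clamping each channel to the valid 0-255 range. Expertiza's
# actual formula differs.
def topic_bg_color(num_bids, num_participants)
  fraction = num_participants.zero? ? 0.0 : num_bids.to_f / num_participants
  red   = ((1 - fraction) * 255).round.clamp(0, 255)
  green = (fraction * 255).round.clamp(0, 255)
  "rgb(#{red},#{green},0)"
end

topic_bg_color(0, 10)   # => "rgb(255,0,0)"   no bids yet: red
topic_bg_color(10, 10)  # => "rgb(0,255,0)"   fully bid: green
```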
&lt;br /&gt;
== Objective 1: Develop code testing scenarios for questionnaire_helper ==&lt;br /&gt;
After reviewing the QuestionnaireHelper class, we identified three methods in the class that require testing. These methods are as follows:&lt;br /&gt;
&lt;br /&gt;
* adjust_advice_size&lt;br /&gt;
* questionnaire_factory&lt;br /&gt;
* update_questionnaire_questions&lt;br /&gt;
&lt;br /&gt;
=== adjust_advice_size ===&lt;br /&gt;
==== Method Description ====&lt;br /&gt;
The .adjust_advice_size method adjusts the number of advice records attached to a question so that every score in the questionnaire's range has exactly one piece of advice.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a questionnaire, a scored_question, a non_scored_question, and a question_advice.&lt;br /&gt;
Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Question is a ScoredQuestion: This context tests the behavior when the question is a scored question. It sets up expectations related to the adjustment of advice size based on the questionnaire scores.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 context 'when question is a ScoredQuestion' do&lt;br /&gt;
      it 'adjusts advice size based on questionnaire scores' do&lt;br /&gt;
        allow(QuestionAdvice).to receive(:where).and_return([])&lt;br /&gt;
        allow(QuestionAdvice).to receive(:new).and_return(double('QuestionAdvice', save: true))&lt;br /&gt;
        allow(scored_question).to receive(:is_a?).with(ScoredQuestion).and_return(true)&lt;br /&gt;
        described_class.adjust_advice_size(questionnaire, scored_question)&lt;br /&gt;
        expect(QuestionAdvice).to have_received(:where).exactly(10).times&lt;br /&gt;
        expect(scored_question.question_advices.size).to eq(10)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Question is Not a ScoredQuestion: This context tests the behavior when the question is not a scored question. It verifies that in this case, the advice size is not adjusted.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when question is not a ScoredQuestion' do&lt;br /&gt;
      it 'does not adjust advice size' do&lt;br /&gt;
        allow(QuestionAdvice).to receive(:where)&lt;br /&gt;
        allow(QuestionAdvice).to receive(:new)&lt;br /&gt;
        allow(scored_question).to receive(:is_a?).with(ScoredQuestion).and_return(false)&lt;br /&gt;
        described_class.adjust_advice_size(questionnaire, non_scored_question)&lt;br /&gt;
        expect(QuestionAdvice).not_to have_received(:where)&lt;br /&gt;
        expect(QuestionAdvice).not_to have_received(:new)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use allow and expect statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
=== questionnaire_factory ===&lt;br /&gt;
&lt;br /&gt;
==== Method Description ====&lt;br /&gt;
The .questionnaire_factory method is described. It is a factory method responsible for creating instances of different questionnaire types based on the given type parameter.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
These tests exercise the helper directly: each example passes a questionnaire type string (such as 'ReviewQuestionnaire' or 'UnknownQuestionnaire') to questionnaire_factory and inspects the result, so no elaborate mock objects are required.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Given a Valid Type: This context verifies the behavior when a valid questionnaire type is provided. It expects that calling questionnaire_factory with a valid type results in an instance of the specified questionnaire type (ReviewQuestionnaire).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 context 'when given a valid type' do&lt;br /&gt;
      it 'returns an instance of the specified questionnaire type' do&lt;br /&gt;
        questionnaire_type = 'ReviewQuestionnaire'&lt;br /&gt;
        expect(helper.questionnaire_factory(questionnaire_type)).to be_an_instance_of(ReviewQuestionnaire)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Given an Invalid Type: This context tests how the method handles an invalid questionnaire type. It expects that calling questionnaire_factory with an invalid type sets an error flash message indicating that the questionnaire type is undefined.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when given an invalid type' do&lt;br /&gt;
      it 'sets an error flash message' do&lt;br /&gt;
        questionnaire_type = 'UnknownQuestionnaire'&lt;br /&gt;
        expect { helper.questionnaire_factory(questionnaire_type) }.to change { flash[:error] }.from(nil).to('Error: Undefined Questionnaire')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use expect and change statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Develop code testing scenarios for review_bids_helper ==&lt;br /&gt;
After reviewing the ReviewBidsHelper module, we identified two methods that require testing:&lt;br /&gt;
&lt;br /&gt;
=== get_intelligent_topic_row_review_bids ===&lt;br /&gt;
==== Objective ====&lt;br /&gt;
Verify that the method get_intelligent_topic_row_review_bids correctly renders the topic row for the topics table in review_bids/show.html.erb. The #get_intelligent_topic_row_review_bids method generates the HTML for each topic row, varying the background color with the topic's selection and waitlist status.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a topic, a selected_topic, the number of participants (num_participants), and a review_bid.&lt;br /&gt;
Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system. For example, selected_topic is set with a topic ID of 1 and is not waitlisted initially.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Selected Topics are Present: This context tests the behavior of the method when there are selected topics. It sets up a selected topic that is not waitlisted and expects the generated HTML to include a table row with a yellow background.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are present' do&lt;br /&gt;
      it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
        selected_topic = instance_double('SelectedTopic')&lt;br /&gt;
        allow(selected_topic).to receive(:topic_id).and_return(1)&lt;br /&gt;
        allow(selected_topic).to receive(:is_waitlisted).and_return(false)&lt;br /&gt;
        selected_topics = [selected_topic]&lt;br /&gt;
&lt;br /&gt;
        expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr bgcolor=&amp;quot;yellow&amp;quot;&amp;gt;')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topic is Waitlisted: This context tests the behavior when the selected topic is waitlisted. It sets up a selected topic that is waitlisted and expects the generated HTML to include a table row with a light gray background.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topic is waitlisted' do&lt;br /&gt;
      it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
        selected_topic = instance_double('SelectedTopic')&lt;br /&gt;
        allow(selected_topic).to receive(:topic_id).and_return(1)&lt;br /&gt;
        allow(selected_topic).to receive(:is_waitlisted).and_return(true)&lt;br /&gt;
        selected_topics = [selected_topic]&lt;br /&gt;
&lt;br /&gt;
        expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr bgcolor=&amp;quot;lightgray&amp;quot;&amp;gt;')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topics are Not Present: This context checks what happens when there are no selected topics. It mocks a method (get_topic_bg_color_review_bids) to return a specific background color and expects the generated HTML to include a table row with that background color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are not present' do&lt;br /&gt;
      it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
        allow(helper).to receive(:get_topic_bg_color_review_bids).and_return('rgb(255,255,255)')&lt;br /&gt;
        selected_topics = []&lt;br /&gt;
&lt;br /&gt;
        expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr id=&amp;quot;topic_1&amp;quot; style=&amp;quot;background-color:rgb(255,255,255)&amp;quot;&amp;gt;')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topics are Nil: This context tests how the method handles cases where the selected topics parameter is nil. It expects the generated HTML to include a table row with a specific background color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are nil' do&lt;br /&gt;
      it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
        selected_topics = nil&lt;br /&gt;
&lt;br /&gt;
        expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr id=&amp;quot;topic_1&amp;quot; style=&amp;quot;background-color:rgb(47,352,0)&amp;quot;&amp;gt;')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each context contains an it block with an expectation. These expectations use expect statements to verify that the HTML generated by the method meets certain criteria, such as containing specific table row elements with appropriate background colors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== get_topic_bg_color_review_bids ===&lt;br /&gt;
&lt;br /&gt;
==== Objective ====&lt;br /&gt;
Verify that the method get_topic_bg_color_review_bids correctly calculates the background color for a topic in the review_bids/show.html.erb template. The #get_topic_bg_color_review_bids method determines the background color for a topic from the number of bids it has received relative to the number of participants.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a topic, a review_bid, and the number of participants (num_participants). Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* Returns RGB Color Code: This test checks that the method returns an RGB color code for the topic background color. It sets up an expectation that the returned color code matches the pattern rgb(\d+,\d+,\d+), indicating it is in the correct format.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 context 'when calculating the topic background color' do&lt;br /&gt;
      it 'returns an RGB color code' do&lt;br /&gt;
        expect(helper.get_topic_bg_color_review_bids(topic, num_participants)).to match(/rgb\(\d+,\d+,\d+\)/)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When There Are No Review Bids: This context tests the behavior when no review bids are associated with the topic. It sets up an expectation that the method returns the default RGB color code, rgb(47,352,0). (Note that the green component exceeds the valid 0-255 range; browsers clamp it to 255, so the row renders as bright green.)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 context 'when there are no review bids' do&lt;br /&gt;
      it 'returns default RGB color code' do&lt;br /&gt;
        allow(ReviewBid).to receive(:where).with(signuptopic_id: topic.id).and_return([])&lt;br /&gt;
&lt;br /&gt;
        expect(helper.get_topic_bg_color_review_bids(topic, num_participants)).to eq('rgb(47,352,0)')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use expect statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
== Coverage Results ==&lt;br /&gt;
=== Questionnaire_helper ===&lt;br /&gt;
[[File:Questionnaire helper spec coverage.jpg|thumb|left|Coverage results for questionnaire_helper]]&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
This document outlines a thorough plan to enhance testing and code coverage for the questionnaire_helper and review_bids_helper files in Expertiza. With defined objectives, including developing detailed test plans and scenarios, the project aims to address current code coverage gaps. Notably, significant improvements were achieved in the review_bids_helper.rb file, with comprehensive test plans substantially increasing code coverage. Conversely, the questionnaire_helper.rb file saw marginal improvements, primarily due to existing full coverage in the update_questionnaire_questions method. Moving forward, the project will focus on implementing the outlined test plans, ensuring comprehensive testing and reliability for critical functionality across both helper files.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Review_bid_helper_spec_coverage.jpg&amp;diff=156724</id>
		<title>File:Review bid helper spec coverage.jpg</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Review_bid_helper_spec_coverage.jpg&amp;diff=156724"/>
		<updated>2024-04-24T03:24:49Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Questionnaire_helper_spec_coverage.jpg&amp;diff=156722</id>
		<title>File:Questionnaire helper spec coverage.jpg</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Questionnaire_helper_spec_coverage.jpg&amp;diff=156722"/>
		<updated>2024-04-24T03:24:12Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=156709</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=156709"/>
		<updated>2024-04-24T03:21:16Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
Our project involves writing test cases for the `questionnaire_helper` and `review_bids_helper` files in Expertiza, an open-source assignment/project management portal built on the Ruby on Rails framework. This platform facilitates collaborative learning and feedback among both instructors and students. Instructors can create and customize assignments, define topics for student sign-ups, and manage teams for various projects. Students, on the other hand, can sign up for topics, form teams, and participate in peer reviews to enhance each other's learning experiences. Our goal is to develop comprehensive test plans and increase code coverage for these helper files to ensure their reliability and effectiveness in the Expertiza platform.&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods that assist with managing questionnaires in Expertiza: adjusting advice size, updating questionnaire questions, and creating questionnaire instances based on their type. It also defines constants that support these tasks. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
   &lt;br /&gt;
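The steps above can be sketched in plain Ruby as follows (a simplified, hypothetical model using hashes in place of QuestionAdvice records; the real helper operates on ActiveRecord objects):&lt;br /&gt;

```ruby
# Minimal sketch of the advice-resizing logic: keep exactly one advice
# entry per score in [min_score, max_score], dropping entries outside
# the range as well as duplicates.
def adjust_advice_size(min_score, max_score, advices)
  # Drop advice entries outside the score range.
  in_range = advices.select { |a| a[:score].between?(min_score, max_score) }
  # Drop duplicate entries for the same score, keeping the first.
  deduped = in_range.uniq { |a| a[:score] }
  # Ensure every score in the range has exactly one advice entry.
  (min_score..max_score).map do |s|
    deduped.find { |a| a[:score] == s } || { score: s, advice: '' }
  end
end

advices = [{ score: 0, advice: 'too low' },
           { score: 2, advice: 'ok' },
           { score: 2, advice: 'duplicate' }]
p adjust_advice_size(1, 3, advices).map { |a| a[:score] }  # [1, 2, 3]
```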
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
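A minimal sketch of this change-only update loop, assuming plain hashes in place of Rails' params object and Question models:&lt;br /&gt;

```ruby
# Sketch only: the real helper reads params[:question] and calls save on
# each Question model; here questions and question_params are plain hashes
# keyed by question id.
def update_questions(questions, question_params)
  question_params.each do |id, attributes|
    question = questions[id]
    next unless question
    attributes.each do |name, value|
      # Assign only when the value actually changed, mirroring the
      # helper's "skip unchanged attributes" behavior.
      question[name] = value if question[name] != value
    end
  end
  questions
end

questions = { 1 => { txt: 'Old text', weight: 1 } }
updated   = update_questions(questions, { 1 => { txt: 'New text', weight: 1 } })
puts updated[1][:txt]  # New text
```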
&lt;br /&gt;
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
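The factory logic can be sketched as follows (a stand-in class and a plain hash for the flash; the error message matches the one the helper's tests expect):&lt;br /&gt;

```ruby
class ReviewQuestionnaire; end  # stand-in for the real model

QUESTIONNAIRE_MAP = { 'ReviewQuestionnaire' => ReviewQuestionnaire }.freeze

# Look the type up in the map and instantiate it; report an error for
# unknown types. In Rails the error goes into flash[:error]; here a plain
# hash stands in for flash.
def questionnaire_factory(type, flash = {})
  klass = QUESTIONNAIRE_MAP[type]
  if klass.nil?
    flash[:error] = 'Error: Undefined Questionnaire'
    return nil
  end
  klass.new
end

flash = {}
puts questionnaire_factory('ReviewQuestionnaire', flash).class  # ReviewQuestionnaire
questionnaire_factory('Unknown', flash)
puts flash[:error]  # Error: Undefined Questionnaire
```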
&lt;br /&gt;
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
The `ReviewBidsHelper` module serves as a helper for views related to reviewing bids in Expertiza. It provides methods for rendering topic rows and for determining the background color of a topic based on its bid status and the number of participants. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates the HTML markup for a row in the table of topics available for review bids.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on the conditions (whether the topic is selected and not waitlisted, or selected and waitlisted), it generates a specific row HTML.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
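As an illustration of proportional color scaling, here is a sketch of the general idea; this is not the helper's exact formula (which produces values such as rgb(47,352,0)), only a hypothetical version of the same technique:&lt;br /&gt;

```ruby
# Scale color channels by the bid-to-participant ratio: topics with few
# bids render greener, heavily bid topics render redder. Illustrative
# formula only; the production helper computes different channel values.
def topic_bg_color(num_bids, num_participants)
  ratio = num_participants.zero? ? 0.0 : num_bids.to_f / num_participants
  red   = (255 * ratio).round        # more bids  -> redder
  green = (255 * (1 - ratio)).round  # fewer bids -> greener
  format('rgb(%d,%d,0)', red, green)
end

puts topic_bg_color(0, 10)   # rgb(0,255,0)
puts topic_bg_color(10, 10)  # rgb(255,0,0)
```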
&lt;br /&gt;
== Objective 1: Develop code testing scenarios for questionnaire_helper ==&lt;br /&gt;
After reviewing the QuestionnaireHelper class, we identified three methods in the class that require testing. These methods are as follows:&lt;br /&gt;
&lt;br /&gt;
* adjust_advice_size&lt;br /&gt;
* questionnaire_factory&lt;br /&gt;
* update_questionnaire_questions&lt;br /&gt;
&lt;br /&gt;
=== adjust_advice_size ===&lt;br /&gt;
==== Method Description ====&lt;br /&gt;
The .adjust_advice_size method adjusts the advice records associated with a question so that each score in the questionnaire's score range has exactly one advice entry.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a questionnaire, a scored_question, a non_scored_question, and a question_advice.&lt;br /&gt;
Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Question is a ScoredQuestion: This context tests the behavior when the question is a scored question. It sets up expectations related to the adjustment of advice size based on the questionnaire scores.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when question is a ScoredQuestion' do&lt;br /&gt;
  it 'adjusts advice size based on questionnaire scores' do&lt;br /&gt;
    allow(QuestionAdvice).to receive(:where).and_return([])&lt;br /&gt;
    allow(QuestionAdvice).to receive(:new).and_return(double('QuestionAdvice', save: true))&lt;br /&gt;
    allow(scored_question).to receive(:is_a?).with(ScoredQuestion).and_return(true)&lt;br /&gt;
    described_class.adjust_advice_size(questionnaire, scored_question)&lt;br /&gt;
    expect(QuestionAdvice).to have_received(:where).exactly(10).times&lt;br /&gt;
    expect(scored_question.question_advices.size).to eq(10)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Question is Not a ScoredQuestion: This context tests the behavior when the question is not a scored question. It verifies that in this case, the advice size is not adjusted.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when question is not a ScoredQuestion' do&lt;br /&gt;
  it 'does not adjust advice size' do&lt;br /&gt;
    allow(QuestionAdvice).to receive(:where)&lt;br /&gt;
    allow(QuestionAdvice).to receive(:new)&lt;br /&gt;
    allow(scored_question).to receive(:is_a?).with(ScoredQuestion).and_return(false)&lt;br /&gt;
    described_class.adjust_advice_size(questionnaire, non_scored_question)&lt;br /&gt;
    expect(QuestionAdvice).not_to have_received(:where)&lt;br /&gt;
    expect(QuestionAdvice).not_to have_received(:new)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use allow and expect statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
=== questionnaire_factory ===&lt;br /&gt;
&lt;br /&gt;
==== Method Description ====&lt;br /&gt;
The .questionnaire_factory method is a factory responsible for creating an instance of the appropriate questionnaire class for the given type parameter.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
The tests call the helper directly with a questionnaire type string; the invalid-type case additionally relies on access to the flash hash to verify the error message.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Given a Valid Type: This context verifies the behavior when a valid questionnaire type is provided. It expects that calling questionnaire_factory with a valid type results in an instance of the specified questionnaire type (ReviewQuestionnaire).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when given a valid type' do&lt;br /&gt;
  it 'returns an instance of the specified questionnaire type' do&lt;br /&gt;
    questionnaire_type = 'ReviewQuestionnaire'&lt;br /&gt;
    expect(helper.questionnaire_factory(questionnaire_type)).to be_an_instance_of(ReviewQuestionnaire)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Given an Invalid Type: This context tests how the method handles an invalid questionnaire type. It expects that calling questionnaire_factory with an invalid type sets an error flash message indicating that the questionnaire type is undefined.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when given an invalid type' do&lt;br /&gt;
  it 'sets an error flash message' do&lt;br /&gt;
    questionnaire_type = 'UnknownQuestionnaire'&lt;br /&gt;
    expect { helper.questionnaire_factory(questionnaire_type) }.to change { flash[:error] }.from(nil).to('Error: Undefined Questionnaire')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use expect and change statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Develop code testing scenarios for review_bids_helper ==&lt;br /&gt;
After reviewing the ReviewBidsHelper module, we identified two methods that require testing:&lt;br /&gt;
&lt;br /&gt;
=== get_intelligent_topic_row_review_bids ===&lt;br /&gt;
==== Objective ====&lt;br /&gt;
Verify that the method get_intelligent_topic_row_review_bids correctly renders the topic row for the topics table in review_bids/show.html.erb, generating the appropriate HTML for each topic based on whether it is selected and whether it is waitlisted.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a topic, a selected_topic, the number of participants (num_participants), and a review_bid.&lt;br /&gt;
Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system. For example, selected_topic is set with a topic ID of 1 and is not waitlisted initially.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Selected Topics are Present: This context tests the behavior of the method when there are selected topics. It sets up a selected topic that is not waitlisted and expects the generated HTML to include a table row with a yellow background.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are present' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    selected_topic = instance_double('SelectedTopic')&lt;br /&gt;
    allow(selected_topic).to receive(:topic_id).and_return(1)&lt;br /&gt;
    allow(selected_topic).to receive(:is_waitlisted).and_return(false)&lt;br /&gt;
    selected_topics = [selected_topic]&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr bgcolor=&amp;quot;yellow&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topic is Waitlisted: This context tests the behavior when the selected topic is waitlisted. It sets up a selected topic that is waitlisted and expects the generated HTML to include a table row with a light gray background.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topic is waitlisted' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    selected_topic = instance_double('SelectedTopic')&lt;br /&gt;
    allow(selected_topic).to receive(:topic_id).and_return(1)&lt;br /&gt;
    allow(selected_topic).to receive(:is_waitlisted).and_return(true)&lt;br /&gt;
    selected_topics = [selected_topic]&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr bgcolor=&amp;quot;lightgray&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topics are Not Present: This context checks what happens when there are no selected topics. It mocks a method (get_topic_bg_color_review_bids) to return a specific background color and expects the generated HTML to include a table row with that background color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are not present' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    allow(helper).to receive(:get_topic_bg_color_review_bids).and_return('rgb(255,255,255)')&lt;br /&gt;
    selected_topics = []&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr id=&amp;quot;topic_1&amp;quot; style=&amp;quot;background-color:rgb(255,255,255)&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topics are Nil: This context tests how the method handles cases where the selected topics parameter is nil. It expects the generated HTML to include a table row with a specific background color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are nil' do&lt;br /&gt;
  it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
    selected_topics = nil&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr id=&amp;quot;topic_1&amp;quot; style=&amp;quot;background-color:rgb(47,352,0)&amp;quot;&amp;gt;')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each context contains an it block with an expectation. These expectations use expect statements to verify that the HTML generated by the method meets certain criteria, such as containing specific table row elements with appropriate background colors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== get_topic_bg_color_review_bids ===&lt;br /&gt;
&lt;br /&gt;
==== Objective ====&lt;br /&gt;
Verify that the method get_topic_bg_color_review_bids correctly calculates the background color for a topic in the review_bids/show.html.erb template, based on the topic's bids relative to the number of participants.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a topic, a review_bid, and the number of participants (num_participants). Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When There Are No Review Bids: This context tests the behavior when no review bids are associated with the topic. It stubs ReviewBid.where to return an empty collection and expects the method to return the default color code rgb(47,352,0); like every return value of this method, it follows the rgb(r,g,b) string format.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when there are no review bids' do&lt;br /&gt;
  it 'returns default RGB color code' do&lt;br /&gt;
    allow(ReviewBid).to receive(:where).with(signuptopic_id: topic.id).and_return([])&lt;br /&gt;
&lt;br /&gt;
    expect(helper.get_topic_bg_color_review_bids(topic, num_participants)).to eq('rgb(47,352,0)')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use expect statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
== Coverage Results ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
This document outlines our plan to improve testing and code coverage for the questionnaire_helper and review_bids_helper files in Expertiza. The largest gains came in review_bids_helper.rb, where comprehensive test plans substantially increased code coverage. questionnaire_helper.rb saw only marginal improvement, primarily because the update_questionnaire_questions method already had full coverage. Moving forward, the project will focus on implementing the outlined test plans to ensure reliable coverage of critical functionality in both helper files.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=156708</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=156708"/>
		<updated>2024-04-24T03:20:49Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
Our project involves writing test cases for the `questionnaire_helper` and `review_bids_helper` files in Expertiza, an open-source assignment/project management portal built on the Ruby on Rails framework. This platform facilitates collaborative learning and feedback among both instructors and students. Instructors can create and customize assignments, define topics for student sign-ups, and manage teams for various projects. Students, on the other hand, can sign up for topics, form teams, and participate in peer reviews to enhance each other's learning experiences. Our goal is to develop comprehensive test plans and increase code coverage for these helper files to ensure their reliability and effectiveness in the Expertiza platform.&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods that assist with managing questionnaires in Expertiza: adjusting advice size, updating questionnaire questions, and creating questionnaire instances based on type. It also defines constants that support these tasks. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
   &lt;br /&gt;
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
&lt;br /&gt;
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
&lt;br /&gt;
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
The `ReviewBidsHelper` module serves as a helper for views related to reviewing bids in Expertiza. It provides methods for rendering topic rows and for determining the background color of a topic based on its bid status and the number of participants. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates the HTML markup for a row in the table of topics available for review bids.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on the conditions (whether the topic is selected and not waitlisted, or selected and waitlisted), it generates a specific row HTML.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios for questionnaire_helper ==&lt;br /&gt;
After reviewing the QuestionnaireHelper class, we identified three methods in the class that require testing. These methods are as follows:&lt;br /&gt;
&lt;br /&gt;
* adjust_advice_size&lt;br /&gt;
* questionnaire_factory&lt;br /&gt;
* update_questionnaire_questions&lt;br /&gt;
&lt;br /&gt;
=== adjust_advice_size ===&lt;br /&gt;
==== Method Description ====&lt;br /&gt;
The .adjust_advice_size method adjusts the advice records associated with a question so that each score in the questionnaire's score range has exactly one advice entry.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a questionnaire, a scored_question, a non_scored_question, and a question_advice.&lt;br /&gt;
Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Question is a ScoredQuestion: This context tests the behavior when the question is a scored question. It sets up expectations related to the adjustment of advice size based on the questionnaire scores.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when question is a ScoredQuestion' do&lt;br /&gt;
  it 'adjusts advice size based on questionnaire scores' do&lt;br /&gt;
    allow(QuestionAdvice).to receive(:where).and_return([])&lt;br /&gt;
    allow(QuestionAdvice).to receive(:new).and_return(double('QuestionAdvice', save: true))&lt;br /&gt;
    allow(scored_question).to receive(:is_a?).with(ScoredQuestion).and_return(true)&lt;br /&gt;
    described_class.adjust_advice_size(questionnaire, scored_question)&lt;br /&gt;
    expect(QuestionAdvice).to have_received(:where).exactly(10).times&lt;br /&gt;
    expect(scored_question.question_advices.size).to eq(10)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Question is Not a ScoredQuestion: This context tests the behavior when the question is not a scored question. It verifies that in this case, the advice size is not adjusted.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when question is not a ScoredQuestion' do&lt;br /&gt;
  it 'does not adjust advice size' do&lt;br /&gt;
    allow(QuestionAdvice).to receive(:where)&lt;br /&gt;
    allow(QuestionAdvice).to receive(:new)&lt;br /&gt;
    allow(scored_question).to receive(:is_a?).with(ScoredQuestion).and_return(false)&lt;br /&gt;
    described_class.adjust_advice_size(questionnaire, non_scored_question)&lt;br /&gt;
    expect(QuestionAdvice).not_to have_received(:where)&lt;br /&gt;
    expect(QuestionAdvice).not_to have_received(:new)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use allow and expect statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
=== questionnaire_factory ===&lt;br /&gt;
&lt;br /&gt;
==== Method Description ====&lt;br /&gt;
The .questionnaire_factory method is a factory responsible for creating an instance of the appropriate questionnaire class for the given type parameter.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
The tests call the helper directly with a questionnaire type string; the invalid-type case additionally relies on access to the flash hash to verify the error message.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Given a Valid Type: This context verifies the behavior when a valid questionnaire type is provided. It expects that calling questionnaire_factory with a valid type results in an instance of the specified questionnaire type (ReviewQuestionnaire).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when given a valid type' do&lt;br /&gt;
  it 'returns an instance of the specified questionnaire type' do&lt;br /&gt;
    questionnaire_type = 'ReviewQuestionnaire'&lt;br /&gt;
    expect(helper.questionnaire_factory(questionnaire_type)).to be_an_instance_of(ReviewQuestionnaire)&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Given an Invalid Type: This context tests how the method handles an invalid questionnaire type. It expects that calling questionnaire_factory with an invalid type sets an error flash message indicating that the questionnaire type is undefined.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when given an invalid type' do&lt;br /&gt;
  it 'sets an error flash message' do&lt;br /&gt;
    questionnaire_type = 'UnknownQuestionnaire'&lt;br /&gt;
    expect { helper.questionnaire_factory(questionnaire_type) }.to change { flash[:error] }.from(nil).to('Error: Undefined Questionnaire')&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use expect and change statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Develop code testing scenarios for review_bids_helper ==&lt;br /&gt;
After reviewing the ReviewBidsHelper module, we identified two methods that require testing:&lt;br /&gt;
&lt;br /&gt;
=== get_intelligent_topic_row_review_bids ===&lt;br /&gt;
==== Objective ====&lt;br /&gt;
Verify that the method get_intelligent_topic_row_review_bids correctly renders the topic row for the topics table in review_bids/show.html.erb. The method generates the HTML for each topic row, varying the markup with the topic's selection and waitlist status.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a topic, a selected_topic, the number of participants (num_participants), and a review_bid.&lt;br /&gt;
Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system. For example, selected_topic is set with a topic ID of 1 and is not waitlisted initially.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Selected Topics are Present: This context tests the behavior of the method when there are selected topics. It sets up a selected topic that is not waitlisted and expects the generated HTML to include a table row with a yellow background.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are present' do&lt;br /&gt;
      it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
        selected_topic = instance_double('SelectedTopic')&lt;br /&gt;
        allow(selected_topic).to receive(:topic_id).and_return(1)&lt;br /&gt;
        allow(selected_topic).to receive(:is_waitlisted).and_return(false)&lt;br /&gt;
        selected_topics = [selected_topic]&lt;br /&gt;
&lt;br /&gt;
        expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr bgcolor=&amp;quot;yellow&amp;quot;&amp;gt;')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topic is Waitlisted: This context tests the behavior when the selected topic is waitlisted. It sets up a selected topic that is waitlisted and expects the generated HTML to include a table row with a light gray background.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topic is waitlisted' do&lt;br /&gt;
      it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
        selected_topic = instance_double('SelectedTopic')&lt;br /&gt;
        allow(selected_topic).to receive(:topic_id).and_return(1)&lt;br /&gt;
        allow(selected_topic).to receive(:is_waitlisted).and_return(true)&lt;br /&gt;
        selected_topics = [selected_topic]&lt;br /&gt;
&lt;br /&gt;
        expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr bgcolor=&amp;quot;lightgray&amp;quot;&amp;gt;')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topics are Not Present: This context checks what happens when there are no selected topics. It mocks a method (get_topic_bg_color_review_bids) to return a specific background color and expects the generated HTML to include a table row with that background color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are not present' do&lt;br /&gt;
      it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
        allow(helper).to receive(:get_topic_bg_color_review_bids).and_return('rgb(255,255,255)')&lt;br /&gt;
        selected_topics = []&lt;br /&gt;
&lt;br /&gt;
        expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr id=&amp;quot;topic_1&amp;quot; style=&amp;quot;background-color:rgb(255,255,255)&amp;quot;&amp;gt;')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topics are Nil: This context tests how the method handles cases where the selected topics parameter is nil. It expects the generated HTML to include a table row with a specific background color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are nil' do&lt;br /&gt;
      it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
        selected_topics = nil&lt;br /&gt;
&lt;br /&gt;
        expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr id=&amp;quot;topic_1&amp;quot; style=&amp;quot;background-color:rgb(47,352,0)&amp;quot;&amp;gt;')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each context contains an it block with an expectation. These expectations use expect statements to verify that the HTML generated by the method meets certain criteria, such as containing specific table row elements with appropriate background colors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== get_topic_bg_color_review_bids ===&lt;br /&gt;
&lt;br /&gt;
==== Objective ====&lt;br /&gt;
Verify that the method get_topic_bg_color_review_bids correctly calculates the background color for a topic in the review_bids/show.html.erb template. The method derives the color from the number of bids on the topic relative to the number of participants.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a topic, a review_bid, and the number of participants (num_participants). Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* Returns RGB Color Code: This test checks that the method returns an RGB color code for the topic background color. It expects the returned string to match the pattern rgb(\d+,\d+,\d+), confirming the format is correct.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 context 'when calculating the background color' do&lt;br /&gt;
      it 'returns an RGB color code' do&lt;br /&gt;
        expect(helper.get_topic_bg_color_review_bids(topic, num_participants)).to match(/rgb\(\d+,\d+,\d+\)/)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When There Are No Review Bids: This context tests the behavior when no review bids are associated with the topic. It expects the method to return the default color code rgb(47,352,0); browsers clamp the out-of-range 352 to 255, so the row renders green.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when there are no review bids' do&lt;br /&gt;
      it 'returns default RGB color code' do&lt;br /&gt;
        allow(ReviewBid).to receive(:where).with(signuptopic_id: topic.id).and_return([])&lt;br /&gt;
&lt;br /&gt;
        expect(helper.get_topic_bg_color_review_bids(topic, num_participants)).to eq('rgb(47,352,0)')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use expect statements to verify the behavior of the method under different conditions.&lt;br /&gt;
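&lt;br /&gt;
As a rough illustration of the color computation these tests target, the sketch below assumes a simple linear interpolation of the bid ratio from green (no bids) toward red (fully bid). This is not the exact Expertiza formula, whose arithmetic can produce components above 255, such as the rgb(47,352,0) default seen above.&lt;br /&gt;

```ruby
# Hedged sketch: interpolate from green (no bids) toward red (fully bid),
# assuming a linear mapping of the bid ratio. The real helper's arithmetic
# differs and can emit out-of-range components, e.g. rgb(47,352,0).
def topic_bg_color(num_bids, num_participants)
  ratio = num_participants.zero? ? 0.0 : num_bids.to_f / num_participants
  red   = (255 * ratio).round
  green = (255 * (1 - ratio)).round
  "rgb(#{red},#{green},0)"
end
```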
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
This document outlines a thorough plan to enhance testing and code coverage for the questionnaire_helper and review_bids_helper files in Expertiza. With defined objectives, including developing detailed test plans and scenarios, the project aims to address current code coverage gaps. Notably, significant improvements were achieved in the review_bids_helper.rb file, with comprehensive test plans substantially increasing code coverage. Conversely, the questionnaire_helper.rb file saw marginal improvements, primarily due to existing full coverage in the update_questionnaire_questions method. Moving forward, the project will focus on implementing the outlined test plans, ensuring comprehensive testing and reliability for critical functionality across both helper files.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=156701</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=156701"/>
		<updated>2024-04-24T03:19:29Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Objective 1: Develop code testing scenarios for questionnaire_helper */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
Our project involves writing test cases for the `questionnaire_helper` and `review_bids_helper` files in Expertiza, an open-source assignment/project management portal built on the Ruby on Rails framework. This platform facilitates collaborative learning and feedback among both instructors and students. Instructors can create and customize assignments, define topics for student sign-ups, and manage teams for various projects. Students, on the other hand, can sign up for topics, form teams, and participate in peer reviews to enhance each other's learning experiences. Our goal is to develop comprehensive test plans and increase code coverage for these helper files to ensure their reliability and effectiveness in the Expertiza platform.&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods to assist with managing questionnaires in Expertiza: adjusting advice size, updating questionnaire questions, and creating questionnaire instances based on type. It also defines constants that support these tasks. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
   &lt;br /&gt;
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
&lt;br /&gt;
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
&lt;br /&gt;
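The dirty-checking flow described for update_questionnaire_questions (item 2 above) can be sketched as follows. This is a simplified illustration: a Struct with a save counter stands in for the ActiveRecord Question model, and params is a plain hash rather than Rails request parameters.&lt;br /&gt;

```ruby
# Stand-in for the ActiveRecord Question model; save just counts calls.
Question = Struct.new(:id, :txt, :weight) do
  def saves
    @saves.to_i
  end

  def save
    @saves = saves + 1
  end
end

# Mirror of the described flow: for each question named in the params,
# copy over only the attributes whose values actually changed, then save.
def update_questionnaire_questions(questions, params)
  params.each do |id, attrs|
    question = questions.find { |q| q.id == id }
    next if question.nil?
    attrs.each do |name, value|
      question[name] = value unless question[name] == value
    end
    question.save
  end
end
```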
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
This Ruby module, `ReviewBidsHelper`, serves as a helper for views related to reviewing bids in Expertiza. It provides methods for rendering topic rows and for determining a topic's background color based on its bid status and the number of participants. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates the HTML markup for a row in the table of topics shown for review bidding.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on the conditions (whether the topic is selected and not waitlisted, or selected and waitlisted), it generates a specific row HTML.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
&lt;br /&gt;
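The branching that get_intelligent_topic_row_review_bids applies can be sketched without the HTML rendering. This illustration assumes the helper keys off topic_id and is_waitlisted, as the tests later on this page do; SelectedBid is a hypothetical stand-in for a selected-topic record.&lt;br /&gt;

```ruby
# Hypothetical stand-in for a selected-topic record.
SelectedBid = Struct.new(:topic_id, :is_waitlisted)

# Decide the row's background: yellow for a selected topic, light gray for a
# waitlisted one, nil when unselected (the helper then falls back to the
# color computed by get_topic_bg_color_review_bids).
def topic_row_color(topic_id, selected_topics)
  selected = Array(selected_topics).find { |s| s.topic_id == topic_id }
  return nil if selected.nil?
  selected.is_waitlisted ? 'lightgray' : 'yellow'
end
```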
== Objective 1: Develop code testing scenarios for questionnaire_helper ==&lt;br /&gt;
After reviewing the QuestionnaireHelper class, we identified three methods in the class that require testing. These methods are as follows:&lt;br /&gt;
&lt;br /&gt;
* adjust_advice_size&lt;br /&gt;
* questionnaire_factory&lt;br /&gt;
* update_questionnaire_questions&lt;br /&gt;
&lt;br /&gt;
=== adjust_advice_size ===&lt;br /&gt;
==== Method Description ====&lt;br /&gt;
The .adjust_advice_size method adjusts the advice records attached to a question: for a scored question it ensures there is one advice per score in the questionnaire's range, removing out-of-range and duplicate advice.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a questionnaire, a scored_question, a non_scored_question, and a question_advice.&lt;br /&gt;
Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Question is a ScoredQuestion: This context tests the behavior when the question is a scored question. It sets up expectations related to the adjustment of advice size based on the questionnaire scores.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 context 'when question is a ScoredQuestion' do&lt;br /&gt;
      it 'adjusts advice size based on questionnaire scores' do&lt;br /&gt;
        allow(QuestionAdvice).to receive(:where).and_return([])&lt;br /&gt;
        allow(QuestionAdvice).to receive(:new).and_return(double('QuestionAdvice', save: true))&lt;br /&gt;
        allow(scored_question).to receive(:is_a?).with(ScoredQuestion).and_return(true)&lt;br /&gt;
        described_class.adjust_advice_size(questionnaire, scored_question)&lt;br /&gt;
        expect(QuestionAdvice).to have_received(:where).exactly(10).times&lt;br /&gt;
        expect(scored_question.question_advices.size).to eq(10)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Question is Not a ScoredQuestion: This context tests the behavior when the question is not a scored question. It verifies that in this case, the advice size is not adjusted.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when question is not a ScoredQuestion' do&lt;br /&gt;
      it 'does not adjust advice size' do&lt;br /&gt;
        allow(QuestionAdvice).to receive(:where)&lt;br /&gt;
        allow(QuestionAdvice).to receive(:new)&lt;br /&gt;
        allow(scored_question).to receive(:is_a?).with(ScoredQuestion).and_return(false)&lt;br /&gt;
        described_class.adjust_advice_size(questionnaire, non_scored_question)&lt;br /&gt;
        expect(QuestionAdvice).not_to have_received(:where)&lt;br /&gt;
        expect(QuestionAdvice).not_to have_received(:new)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use allow and expect statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
=== questionnaire_factory ===&lt;br /&gt;
&lt;br /&gt;
==== Method Description ====&lt;br /&gt;
The .questionnaire_factory method is a factory responsible for creating instances of different questionnaire types based on the given type parameter.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
These tests require minimal setup: each example passes a questionnaire type string directly to questionnaire_factory and inspects the returned object or the resulting flash message.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Given a Valid Type: This context verifies the behavior when a valid questionnaire type is provided. It expects that calling questionnaire_factory with a valid type results in an instance of the specified questionnaire type (ReviewQuestionnaire).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 context 'when given a valid type' do&lt;br /&gt;
      it 'returns an instance of the specified questionnaire type' do&lt;br /&gt;
        questionnaire_type = 'ReviewQuestionnaire'&lt;br /&gt;
        expect(helper.questionnaire_factory(questionnaire_type)).to be_an_instance_of(ReviewQuestionnaire)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Given an Invalid Type: This context tests how the method handles an invalid questionnaire type. It expects that calling questionnaire_factory with an invalid type sets an error flash message indicating that the questionnaire type is undefined.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when given an invalid type' do&lt;br /&gt;
      it 'sets an error flash message' do&lt;br /&gt;
        questionnaire_type = 'UnknownQuestionnaire'&lt;br /&gt;
        expect { helper.questionnaire_factory(questionnaire_type) }.to change { flash[:error] }.from(nil).to('Error: Undefined Questionnaire')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use expect and change statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Develop code testing scenarios for review_bids_helper ==&lt;br /&gt;
After reviewing the ReviewBidsHelper module, we identified two methods that require testing:&lt;br /&gt;
&lt;br /&gt;
=== get_intelligent_topic_row_review_bids ===&lt;br /&gt;
==== Objective ====&lt;br /&gt;
Verify that the method get_intelligent_topic_row_review_bids correctly renders the topic row for the topics table in review_bids/show.html.erb. The method generates the HTML for each topic row, varying the markup with the topic's selection and waitlist status.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a topic, a selected_topic, the number of participants (num_participants), and a review_bid.&lt;br /&gt;
Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system. For example, selected_topic is set with a topic ID of 1 and is not waitlisted initially.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Selected Topics are Present: This context tests the behavior of the method when there are selected topics. It sets up a selected topic that is not waitlisted and expects the generated HTML to include a table row with a yellow background.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are present' do&lt;br /&gt;
      it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
        selected_topic = instance_double('SelectedTopic')&lt;br /&gt;
        allow(selected_topic).to receive(:topic_id).and_return(1)&lt;br /&gt;
        allow(selected_topic).to receive(:is_waitlisted).and_return(false)&lt;br /&gt;
        selected_topics = [selected_topic]&lt;br /&gt;
&lt;br /&gt;
        expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr bgcolor=&amp;quot;yellow&amp;quot;&amp;gt;')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topic is Waitlisted: This context tests the behavior when the selected topic is waitlisted. It sets up a selected topic that is waitlisted and expects the generated HTML to include a table row with a light gray background.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topic is waitlisted' do&lt;br /&gt;
      it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
        selected_topic = instance_double('SelectedTopic')&lt;br /&gt;
        allow(selected_topic).to receive(:topic_id).and_return(1)&lt;br /&gt;
        allow(selected_topic).to receive(:is_waitlisted).and_return(true)&lt;br /&gt;
        selected_topics = [selected_topic]&lt;br /&gt;
&lt;br /&gt;
        expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr bgcolor=&amp;quot;lightgray&amp;quot;&amp;gt;')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topics are Not Present: This context checks what happens when there are no selected topics. It mocks a method (get_topic_bg_color_review_bids) to return a specific background color and expects the generated HTML to include a table row with that background color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are not present' do&lt;br /&gt;
      it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
        allow(helper).to receive(:get_topic_bg_color_review_bids).and_return('rgb(255,255,255)')&lt;br /&gt;
        selected_topics = []&lt;br /&gt;
&lt;br /&gt;
        expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr id=&amp;quot;topic_1&amp;quot; style=&amp;quot;background-color:rgb(255,255,255)&amp;quot;&amp;gt;')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topics are Nil: This context tests how the method handles cases where the selected topics parameter is nil. It expects the generated HTML to include a table row with a specific background color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are nil' do&lt;br /&gt;
      it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
        selected_topics = nil&lt;br /&gt;
&lt;br /&gt;
        expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr id=&amp;quot;topic_1&amp;quot; style=&amp;quot;background-color:rgb(47,352,0)&amp;quot;&amp;gt;')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each context contains an it block with an expectation. These expectations use expect statements to verify that the HTML generated by the method meets certain criteria, such as containing specific table row elements with appropriate background colors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== get_topic_bg_color_review_bids ===&lt;br /&gt;
&lt;br /&gt;
==== Objective ====&lt;br /&gt;
Verify that the method get_topic_bg_color_review_bids correctly calculates the background color for a topic in the review_bids/show.html.erb template. The method derives the color from the number of bids on the topic relative to the number of participants.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a topic, a review_bid, and the number of participants (num_participants). Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* Returns RGB Color Code: This test checks that the method returns an RGB color code for the topic background color. It expects the returned string to match the pattern rgb(\d+,\d+,\d+), confirming the format is correct.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 context 'when calculating the background color' do&lt;br /&gt;
      it 'returns an RGB color code' do&lt;br /&gt;
        expect(helper.get_topic_bg_color_review_bids(topic, num_participants)).to match(/rgb\(\d+,\d+,\d+\)/)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When There Are No Review Bids: This context tests the behavior when no review bids are associated with the topic. It expects the method to return the default color code rgb(47,352,0); browsers clamp the out-of-range 352 to 255, so the row renders green.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when there are no review bids' do&lt;br /&gt;
      it 'returns default RGB color code' do&lt;br /&gt;
        allow(ReviewBid).to receive(:where).with(signuptopic_id: topic.id).and_return([])&lt;br /&gt;
&lt;br /&gt;
        expect(helper.get_topic_bg_color_review_bids(topic, num_participants)).to eq('rgb(47,352,0)')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use expect statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
The design document outlines a comprehensive plan for the upcoming project focused on enhancing the testing and code coverage for the `questionnaire_helper` and `review_bids_helper` files in Expertiza. With a clear problem statement and objectives defined, the project aims to address the current code coverage gaps by developing thorough test plans and scenarios for both helpers. The document provides an overview of the classes and methods involved. Key objectives include developing code testing scenarios, improving code coverage, and ensuring the reliability and effectiveness of the helpers. Moving forward, the project will involve the implementation of the outlined test plans, execution of test scenarios, and iterative refinement of the codebase to achieve the desired objectives.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=156673</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=156673"/>
		<updated>2024-04-24T03:11:25Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Objective 2: Develop code testing scenarios for review_bids_helper */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
Our project involves writing test cases for the `questionnaire_helper` and `review_bids_helper` files in Expertiza, an open-source assignment/project management portal built on the Ruby on Rails framework. This platform facilitates collaborative learning and feedback among both instructors and students. Instructors can create and customize assignments, define topics for student sign-ups, and manage teams for various projects. Students, on the other hand, can sign up for topics, form teams, and participate in peer reviews to enhance each other's learning experiences. Our goal is to develop comprehensive test plans and increase code coverage for these helper files to ensure their reliability and effectiveness in the Expertiza platform.&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods to assist with managing questionnaires in Expertiza. It provides methods for adjusting advice size, updating questionnaire questions, and creating questionnaire instances based on type, along with constants that support these tasks. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
   &lt;br /&gt;
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
&lt;br /&gt;
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
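The lookup-or-error flow described above can be sketched in plain Ruby. The class names, map keys, and error wording below are illustrative assumptions, not the actual Expertiza code:

```ruby
# Hypothetical stand-ins for the real questionnaire classes (assumption).
class ReviewQuestionnaire; end
class AuthorFeedbackQuestionnaire; end

# Illustrative map from type string to class, mirroring QUESTIONNAIRE_MAP.
QUESTIONNAIRE_MAP = {
  'ReviewQuestionnaire'         => ReviewQuestionnaire,
  'AuthorFeedbackQuestionnaire' => AuthorFeedbackQuestionnaire
}.freeze

# Returns a new questionnaire of the requested type, or nil (recording an
# error message in the supplied flash hash) when the type is unknown.
def questionnaire_factory(type, flash = {})
  klass = QUESTIONNAIRE_MAP[type]
  if klass.nil?
    flash[:error] = "No questionnaire of type #{type} exists."  # wording assumed
    return nil
  end
  klass.new
end
```

Passing the flash hash in explicitly keeps the sketch self-contained; in the real helper, `flash` is provided by the Rails controller context.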
&lt;br /&gt;
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
This Ruby module, `ReviewBidsHelper`, serves as a helper for views related to reviewing bids in Expertiza. It provides methods for rendering topic rows and for determining a topic's background color based on its bid status and the number of participants. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates the HTML markup for a row in the table that displays topics for review bids.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on the conditions (whether the topic is selected and not waitlisted, or selected and waitlisted), it generates a specific row HTML.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
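As a rough sketch of this kind of calculation (the exact formula in Expertiza may differ; the linear interpolation below is an assumption for illustration only):

```ruby
# Hypothetical sketch: shade the topic from green (no bids) toward red as the
# share of bids among participants grows. The real helper's formula may differ.
def topic_bg_color(num_bids, num_participants)
  ratio = num_participants.zero? ? 0.0 : num_bids.to_f / num_participants
  red   = (255 * ratio).round
  green = (255 * (1 - ratio)).round
  "rgb(#{red},#{green},0)"
end
```

For example, under these assumptions a topic with no bids would render as `rgb(0,255,0)` and a fully bid topic as `rgb(255,0,0)`.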
&lt;br /&gt;
== Objective 1: Develop code testing scenarios for questionnaire_helper ==&lt;br /&gt;
We follow the single responsibility principle, so each test exercises exactly one behavior of the method under test.&lt;br /&gt;
After reviewing the QuestionnaireHelper module, we identified three methods that require testing:&lt;br /&gt;
&lt;br /&gt;
* adjust_advice_size&lt;br /&gt;
* update_questionnaire_questions&lt;br /&gt;
* questionnaire_factory&lt;br /&gt;
&lt;br /&gt;
For testing purposes, we mock the following items:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
let(:questionnaire) { double('Questionnaire', max_question_score: 5, min_question_score: 1) }&lt;br /&gt;
let(:question) { double('ScoredQuestion', id: 1) }&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== adjust_advice_size ===&lt;br /&gt;
To test this method, we mock a questionnaire and a scored question. We verify that the method correctly adjusts the size of advice for a scored question. We identify the following test cases:&lt;br /&gt;
&lt;br /&gt;
* When the question is a ScoredQuestion: Verify that the method adjusts the advice size by ensuring the correct number of advice records is created, records outside the score range are deleted, and duplicates are removed. Test scenarios in which the question's score is within, above, and below the defined range.&lt;br /&gt;
* When the question is not a ScoredQuestion: Verify that the method does not adjust advice size for non-scored questions.&lt;br /&gt;
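The pruning-and-filling behavior these cases target can be sketched with plain data structures (a hypothetical simplification; the real method operates on ActiveRecord advice records):

```ruby
# Hypothetical sketch of the adjust_advice_size logic using an array of
# {score:, advice:} hashes in place of ActiveRecord records.
def adjust_advice_list(advices, min_score, max_score)
  # Drop advice whose score falls outside the allowed range.
  kept = advices.select { |a| (min_score..max_score).cover?(a[:score]) }
  # Remove duplicates, keeping the first advice per score.
  kept = kept.uniq { |a| a[:score] }
  # Ensure every score in the range has an associated advice entry.
  (min_score..max_score).each do |s|
    kept << { score: s, advice: '' } unless kept.any? { |a| a[:score] == s }
  end
  kept.sort_by { |a| a[:score] }
end
```

The test cases above correspond directly to the three steps in this sketch: out-of-range deletion, gap filling, and deduplication.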
&lt;br /&gt;
=== update_questionnaire_questions ===&lt;br /&gt;
This method updates attributes of questionnaire questions based on form data. To test this method, we mock a question and its associated form data. We identify the following test cases:&lt;br /&gt;
&lt;br /&gt;
* When params contain questions: Verify that the method updates question attributes when parameters contain questions. Test scenarios for updating various attributes of a question.&lt;br /&gt;
* When params do not contain questions: Verify that the method does not update any question attributes when parameters do not contain questions.&lt;br /&gt;
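A minimal sketch of the compare-then-update behavior under test, using a hash as a stand-in for the real ActiveRecord question (names are illustrative):

```ruby
# Hypothetical sketch: apply only attribute values that actually changed,
# and report whether anything was modified (so an unnecessary save can be skipped).
def apply_changed_attributes(question, new_attrs)
  changed = false
  new_attrs.each do |attr, value|
    next if question[attr] == value  # skip unchanged attributes
    question[attr] = value
    changed = true
  end
  changed
end
```

Tests for the two cases above would then assert that attributes change only when the incoming parameters differ from the stored values.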
&lt;br /&gt;
=== questionnaire_factory ===&lt;br /&gt;
This method creates a questionnaire instance based on its type. To test this method, we provide various valid and invalid types. We identify the following test cases:&lt;br /&gt;
&lt;br /&gt;
* When a valid type is provided: Verify that the method creates a questionnaire instance for a valid type. Test scenarios for each valid questionnaire type.&lt;br /&gt;
* When an undefined type is provided: Verify that the method sets a flash error message when an undefined questionnaire type is provided.&lt;br /&gt;
&lt;br /&gt;
We have structured the test plan to cover each method comprehensively and ensure that they function correctly under various scenarios.&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Develop code testing scenarios for review_bids_helper ==&lt;br /&gt;
After reviewing the ReviewBidsHelper module, we identified two methods that require testing:&lt;br /&gt;
&lt;br /&gt;
=== get_intelligent_topic_row_review_bids ===&lt;br /&gt;
==== Objective ====&lt;br /&gt;
Verify that get_intelligent_topic_row_review_bids correctly renders a topic row for the topics table in review_bids/show.html.erb, generating the row's HTML based on the topic's selection and waitlist status.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a topic, a selected_topic, the number of participants (num_participants), and a review_bid.&lt;br /&gt;
Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system. For example, selected_topic is set with a topic ID of 1 and is not waitlisted initially.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* When Selected Topics are Present: This context tests the behavior of the method when there are selected topics. It sets up a selected topic that is not waitlisted and expects the generated HTML to include a table row with a yellow background.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are present' do&lt;br /&gt;
      it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
        selected_topic = instance_double('SelectedTopic')&lt;br /&gt;
        allow(selected_topic).to receive(:topic_id).and_return(1)&lt;br /&gt;
        allow(selected_topic).to receive(:is_waitlisted).and_return(false)&lt;br /&gt;
        selected_topics = [selected_topic]&lt;br /&gt;
&lt;br /&gt;
        expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr bgcolor=&amp;quot;yellow&amp;quot;&amp;gt;')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topic is Waitlisted: This context tests the behavior when the selected topic is waitlisted. It sets up a selected topic that is waitlisted and expects the generated HTML to include a table row with a light gray background.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topic is waitlisted' do&lt;br /&gt;
      it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
        selected_topic = instance_double('SelectedTopic')&lt;br /&gt;
        allow(selected_topic).to receive(:topic_id).and_return(1)&lt;br /&gt;
        allow(selected_topic).to receive(:is_waitlisted).and_return(true)&lt;br /&gt;
        selected_topics = [selected_topic]&lt;br /&gt;
&lt;br /&gt;
        expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr bgcolor=&amp;quot;lightgray&amp;quot;&amp;gt;')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topics are Not Present: This context checks what happens when there are no selected topics. It mocks a method (get_topic_bg_color_review_bids) to return a specific background color and expects the generated HTML to include a table row with that background color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are not present' do&lt;br /&gt;
      it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
        allow(helper).to receive(:get_topic_bg_color_review_bids).and_return('rgb(255,255,255)')&lt;br /&gt;
        selected_topics = []&lt;br /&gt;
&lt;br /&gt;
        expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr id=&amp;quot;topic_1&amp;quot; style=&amp;quot;background-color:rgb(255,255,255)&amp;quot;&amp;gt;')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When Selected Topics are Nil: This context tests how the method handles cases where the selected topics parameter is nil. It expects the generated HTML to include a table row with a specific background color.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when selected topics are nil' do&lt;br /&gt;
      it 'returns HTML code for topic row with appropriate background color' do&lt;br /&gt;
        selected_topics = nil&lt;br /&gt;
&lt;br /&gt;
        expect(helper.get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)).to include('&amp;lt;tr id=&amp;quot;topic_1&amp;quot; style=&amp;quot;background-color:rgb(47,352,0)&amp;quot;&amp;gt;')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each context contains an it block with an expectation. These expectations use expect statements to verify that the HTML generated by the method meets certain criteria, such as containing specific table row elements with appropriate background colors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== get_topic_bg_color_review_bids ===&lt;br /&gt;
&lt;br /&gt;
==== Objective ====&lt;br /&gt;
Verify that get_topic_bg_color_review_bids correctly calculates the background color for a topic in the review_bids/show.html.erb template, based on the topic's bid count and the number of participants.&lt;br /&gt;
&lt;br /&gt;
==== Test Setup ==== &lt;br /&gt;
Mock objects are set up using let statements. These include a topic, a review_bid, and the number of participants (num_participants). Each of these mock objects is prepared with predefined attributes that mimic the behavior of actual objects in the system.&lt;br /&gt;
&lt;br /&gt;
==== Test Contexts ====&lt;br /&gt;
&lt;br /&gt;
* Returns RGB Color Code: This test checks if the method returns an RGB color code for the topic background color. It sets up an expectation that the returned color code matches the pattern rgb(\d+,\d+,\d+), indicating it's in the correct format.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 context 'when called with a topic and the number of participants' do&lt;br /&gt;
      it 'returns an RGB color code' do&lt;br /&gt;
        expect(helper.get_topic_bg_color_review_bids(topic, num_participants)).to match(/rgb\(\d+,\d+,\d+\)/)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When There Are No Review Bids: This context tests the behavior when there are no review bids associated with the topic. It sets up an expectation that the method returns the default RGB color code, rgb(47,352,0). (Note that 352 exceeds the valid 0 to 255 channel range; browsers clamp it to 255, so the color renders as a bright green.)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
context 'when there are no review bids' do&lt;br /&gt;
      it 'returns the default RGB color code' do&lt;br /&gt;
        allow(ReviewBid).to receive(:where).with(signuptopic_id: topic.id).and_return([])&lt;br /&gt;
&lt;br /&gt;
        expect(helper.get_topic_bg_color_review_bids(topic, num_participants)).to eq('rgb(47,352,0)')&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Expectations ====&lt;br /&gt;
Each test contains an it block with an expectation. These expectations use expect statements to verify the behavior of the method under different conditions.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
The design document outlines a comprehensive plan for the upcoming project focused on enhancing the testing and code coverage for the `questionnaire_helper` and `review_bids_helper` files in Expertiza. With a clear problem statement and objectives defined, the project aims to address the current code coverage gaps by developing thorough test plans and scenarios for both helpers. The document provides an overview of the classes and methods involved. Key objectives include developing code testing scenarios, improving code coverage, and ensuring the reliability and effectiveness of the helpers. Moving forward, the project will involve the implementation of the outlined test plans, execution of test scenarios, and iterative refinement of the codebase to achieve the desired objectives.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=155324</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=155324"/>
		<updated>2024-04-09T00:35:42Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Hamer value calculation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
Our project involves writing test cases for the `questionnaire_helper` and `review_bids_helper` files in Expertiza, an open-source assignment/project management portal built on the Ruby on Rails framework. This platform facilitates collaborative learning and feedback among both instructors and students. Instructors can create and customize assignments, define topics for student sign-ups, and manage teams for various projects. Students, on the other hand, can sign up for topics, form teams, and participate in peer reviews to enhance each other's learning experiences. Our goal is to develop comprehensive test plans and increase code coverage for these helper files to ensure their reliability and effectiveness in the Expertiza platform.&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods to assist with managing questionnaires in Expertiza. It provides methods for adjusting advice size, updating questionnaire questions, and creating questionnaire instances based on type, along with constants that support these tasks. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
   &lt;br /&gt;
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
&lt;br /&gt;
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
&lt;br /&gt;
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
This Ruby module, `ReviewBidsHelper`, serves as a helper for views related to reviewing bids in Expertiza. It provides methods for rendering topic rows and for determining a topic's background color based on its bid status and the number of participants. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates the HTML markup for a row in the table that displays topics for review bids.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on the conditions (whether the topic is selected and not waitlisted, or selected and waitlisted), it generates a specific row HTML.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios for questionnaire_helper ==&lt;br /&gt;
We follow the single responsibility principle, so each test exercises exactly one behavior of the method under test.&lt;br /&gt;
After reviewing the QuestionnaireHelper module, we identified three methods that require testing:&lt;br /&gt;
&lt;br /&gt;
* adjust_advice_size&lt;br /&gt;
* update_questionnaire_questions&lt;br /&gt;
* questionnaire_factory&lt;br /&gt;
&lt;br /&gt;
For testing purposes, we mock the following items:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
let(:questionnaire) { double('Questionnaire', max_question_score: 5, min_question_score: 1) }&lt;br /&gt;
let(:question) { double('ScoredQuestion', id: 1) }&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== adjust_advice_size ===&lt;br /&gt;
To test this method, we mock a questionnaire and a scored question. We verify that the method correctly adjusts the size of advice for a scored question. We identify the following test cases:&lt;br /&gt;
&lt;br /&gt;
* When the question is a ScoredQuestion: Verify that the method adjusts the advice size by ensuring the correct number of advice records is created, records outside the score range are deleted, and duplicates are removed. Test scenarios in which the question's score is within, above, and below the defined range.&lt;br /&gt;
* When the question is not a ScoredQuestion: Verify that the method does not adjust advice size for non-scored questions.&lt;br /&gt;
&lt;br /&gt;
=== update_questionnaire_questions ===&lt;br /&gt;
This method updates attributes of questionnaire questions based on form data. To test this method, we mock a question and its associated form data. We identify the following test cases:&lt;br /&gt;
&lt;br /&gt;
* When params contain questions: Verify that the method updates question attributes when parameters contain questions. Test scenarios for updating various attributes of a question.&lt;br /&gt;
* When params do not contain questions: Verify that the method does not update any question attributes when parameters do not contain questions.&lt;br /&gt;
&lt;br /&gt;
=== questionnaire_factory ===&lt;br /&gt;
This method creates a questionnaire instance based on its type. To test this method, we provide various valid and invalid types. We identify the following test cases:&lt;br /&gt;
&lt;br /&gt;
* When a valid type is provided: Verify that the method creates a questionnaire instance for a valid type. Test scenarios for each valid questionnaire type.&lt;br /&gt;
* When an undefined type is provided: Verify that the method sets a flash error message when an undefined questionnaire type is provided.&lt;br /&gt;
&lt;br /&gt;
We have structured the test plan to cover each method comprehensively and ensure that they function correctly under various scenarios.&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Develop code testing scenarios for review_bids_helper ==&lt;br /&gt;
After reviewing the ReviewBidsHelper module, we identified two methods that require testing:&lt;br /&gt;
&lt;br /&gt;
=== get_intelligent_topic_row_review_bids ===&lt;br /&gt;
* Objective: Verify that the method get_intelligent_topic_row_review_bids correctly renders the topic row for the topics table in review_bids/show.html.erb.&lt;br /&gt;
&lt;br /&gt;
* Test Scenario: When called with a topic, selected topics, and the number of participants, the method should generate HTML code for the topic row with appropriate background color based on its status.&lt;br /&gt;
&lt;br /&gt;
* Preconditions: Ensure that the topic, selected topics, and the number of participants are available for rendering. Set up a scenario with a topic and relevant data for selected topics and the number of participants.&lt;br /&gt;
* Test Steps: Provide a topic, selected topics, and the number of participants to the method. Call the get_intelligent_topic_row_review_bids method with the provided parameters.&lt;br /&gt;
* Expected Result: The method should return HTML code for the topic row with appropriate background color based on its status.&lt;br /&gt;
* Pass Criteria: The method returns the correct HTML code for the topic row. The background color of the row is determined correctly based on the topic's status and selection.&lt;br /&gt;
* Fail Criteria: The method returns incorrect HTML code for the topic row. The background color of the row is not determined correctly based on the topic's status and selection.&lt;br /&gt;
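The status branching this plan exercises can be sketched as follows (hypothetical method name and hash keys; the real helper emits full table-row markup):&lt;br /&gt;

```ruby
# Hypothetical simplification of the row-status decision; illustration only.
def topic_row_status(topic_id, selected_topics)
  sel = selected_topics.find { |t| t[:topic_id] == topic_id }
  return :not_selected if sel.nil?
  sel[:is_waitlisted] ? :waitlisted : :selected
end

bids = [{ topic_id: 1, is_waitlisted: false }, { topic_id: 2, is_waitlisted: true }]
raise unless topic_row_status(1, bids) == :selected
raise unless topic_row_status(2, bids) == :waitlisted
raise unless topic_row_status(3, bids) == :not_selected
```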
&lt;br /&gt;
=== get_topic_bg_color_review_bids ===&lt;br /&gt;
&lt;br /&gt;
* Objective: Verify that the method get_topic_bg_color_review_bids correctly calculates the background color for a topic in the review_bids/show.html.erb template.&lt;br /&gt;
* Test Scenario: When called with a topic and the number of participants, the method should calculate the background color based on the number of bids and the number of participants.&lt;br /&gt;
* Preconditions: Ensure that the topic and the number of participants are available for calculation. Set up a scenario with a topic and the number of participants.&lt;br /&gt;
* Test Steps: Provide a topic and the number of participants to the method. Call the get_topic_bg_color_review_bids method with the provided parameters.&lt;br /&gt;
* Expected Result: The method should return a string representing the RGB color code for the topic's background color.&lt;br /&gt;
* Pass Criteria: The method returns the correct RGB color code for the topic's background color. The background color is calculated accurately based on the number of bids and the number of participants.&lt;br /&gt;
* Fail Criteria: The method returns an incorrect RGB color code for the topic's background color. The background color is not calculated accurately based on the number of bids and the number of participants.&lt;br /&gt;
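The pass criteria above translate directly into assertions; a minimal sketch, assuming a linear red-to-green formula (the helper's actual constants may differ):&lt;br /&gt;

```ruby
# Hypothetical color function standing in for the helper under test.
def bg_color(num_bids, num_participants)
  ratio = num_participants.zero? ? 0.0 : num_bids.fdiv(num_participants).clamp(0.0, 1.0)
  "rgb(#{(255 * (1.0 - ratio)).round}, #{(255 * ratio).round}, 0)"
end

# Expected-result checks mirroring the pass criteria.
raise unless bg_color(0, 10)  == 'rgb(255, 0, 0)'
raise unless bg_color(10, 10) == 'rgb(0, 255, 0)'
raise unless bg_color(3, 0)   == 'rgb(255, 0, 0)'   # no participants: safe default
```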
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
The design document outlines a comprehensive plan for the upcoming project focused on enhancing the testing and code coverage for the `questionnaire_helper` and `review_bids_helper` files in Expertiza. With a clear problem statement and objectives defined, the project aims to address the current code coverage gaps by developing thorough test plans and scenarios for both helpers. The document provides an overview of the classes and methods involved. Key objectives include developing code testing scenarios, improving code coverage, and ensuring the reliability and effectiveness of the helpers. Moving forward, the project will involve the implementation of the outlined test plans, execution of test scenarios, and iterative refinement of the codebase to achieve the desired objectives.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=155320</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=155320"/>
		<updated>2024-04-09T00:34:36Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Objective 2: Develop code testing scenarios for review_bids_helper */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
Our project involves writing test cases for the `questionnaire_helper` and `review_bids_helper` files in Expertiza, an open-source assignment/project management portal built on the Ruby on Rails framework. This platform facilitates collaborative learning and feedback among both instructors and students. Instructors can create and customize assignments, define topics for student sign-ups, and manage teams for various projects. Students, on the other hand, can sign up for topics, form teams, and participate in peer reviews to enhance each other's learning experiences. Our goal is to develop comprehensive test plans and increase code coverage for these helper files to ensure their reliability and effectiveness in the Expertiza platform.&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods that assist with managing questionnaires in Expertiza. It provides methods for adjusting advice size, updating questionnaire questions, and creating questionnaire instances based on type, and defines constants that support these tasks. The module breaks down as follows:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
   &lt;br /&gt;
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
&lt;br /&gt;
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
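The factory behavior described above can be sketched in plain Ruby (hypothetical two-entry map with stand-in classes; the real QUESTIONNAIRE_MAP covers all Expertiza questionnaire types):&lt;br /&gt;

```ruby
# Stand-in classes; in Expertiza these are ActiveRecord questionnaire models.
class ReviewQuestionnaire; end
class AuthorFeedbackQuestionnaire; end

QUESTIONNAIRE_MAP = {
  'ReviewQuestionnaire'         => ReviewQuestionnaire,
  'AuthorFeedbackQuestionnaire' => AuthorFeedbackQuestionnaire
}.freeze

# Returns a new questionnaire of the requested type; for an unknown type
# the real helper sets flash[:error] and falls through to nil.
def questionnaire_factory(type)
  klass = QUESTIONNAIRE_MAP[type]
  klass && klass.new
end
```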
&lt;br /&gt;
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
The Ruby module `ReviewBidsHelper` serves as a helper for views related to reviewing bids in Expertiza. It provides methods for rendering topic rows and for determining a topic's background color based on its bid status and the number of participants. The module breaks down as follows:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates the HTML markup for a row in the table that displays topics for review bids.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on the conditions (whether the topic is selected and not waitlisted, or selected and waitlisted), it generates a specific row HTML.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
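Under the assumption of a linear red-to-green fade (the exact constants in the real helper may differ), the calculation can be sketched as:&lt;br /&gt;

```ruby
# Hypothetical ratio-to-color mapping; an illustration of the idea, not the
# exact Expertiza formula.
def get_topic_bg_color_review_bids(num_bids, num_participants)
  ratio = num_participants.zero? ? 0.0 : num_bids.fdiv(num_participants)
  ratio = ratio.clamp(0.0, 1.0)              # guard against over-bidding
  red   = (255 * (1.0 - ratio)).round        # fewer bids -> redder
  green = (255 * ratio).round                # more bids  -> greener
  "rgb(#{red}, #{green}, 0)"
end
```

For example, a topic with 5 bids among 10 participants yields "rgb(128, 128, 0)" under this sketch.&lt;br /&gt;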
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios for questionnaire_helper ==&lt;br /&gt;
We follow the single responsibility principle so that each test exercises only one behavior.&lt;br /&gt;
After reviewing the QuestionnaireHelper class, we identified three methods in the class that require testing. These methods are as follows:&lt;br /&gt;
&lt;br /&gt;
* adjust_advice_size&lt;br /&gt;
* update_questionnaire_questions&lt;br /&gt;
* questionnaire_factory&lt;br /&gt;
&lt;br /&gt;
For testing purposes, we mock the following items:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
let(:questionnaire) { double('Questionnaire', max_question_score: 5, min_question_score: 1) }&lt;br /&gt;
let(:question) { double('ScoredQuestion', id: 1) }&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== adjust_advice_size ===&lt;br /&gt;
To test this method, we mock a questionnaire and a scored question. We verify that the method correctly adjusts the size of advice for a scored question. We identify the following test cases:&lt;br /&gt;
&lt;br /&gt;
* When the question is a ScoredQuestion: Verify that the method adjusts advice size by ensuring that the correct number of advice records is created, advice outside the score range is deleted, and duplicate advice records are removed. Test scenarios in which the question's score is within the defined range, above the range, and below the range.&lt;br /&gt;
* When the question is not a ScoredQuestion: Verify that the method does not adjust advice size for non-scored questions.&lt;br /&gt;
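The adjustment rule these tests target can be sketched in plain Ruby (hypothetical integer model; the real helper works on QuestionAdvice records, not integers):&lt;br /&gt;

```ruby
# Simplified model: one advice per score in [min, max], none outside,
# no duplicates.
def adjust_scores(advice_scores, min_score, max_score)
  in_range = advice_scores.select { |s| s.between?(min_score, max_score) }.uniq
  in_range | (min_score..max_score).to_a   # fill in any missing scores
end

raise unless adjust_scores([0, 1, 1, 3, 7], 1, 5).sort == [1, 2, 3, 4, 5]
```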
&lt;br /&gt;
=== update_questionnaire_questions ===&lt;br /&gt;
This method updates attributes of questionnaire questions based on form data. To test this method, we mock a question and its associated form data. We identify the following test cases:&lt;br /&gt;
&lt;br /&gt;
* When params contain questions: Verify that the method updates question attributes when parameters contain questions. Test scenarios for updating various attributes of a question.&lt;br /&gt;
* When params do not contain questions: Verify that the method does not update any question attributes when parameters do not contain questions.&lt;br /&gt;
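The compare-then-update loop these cases cover can be sketched with a hash-based model (hypothetical; the real helper updates Question ActiveRecord objects from params[:question]):&lt;br /&gt;

```ruby
# Updates only attributes whose values differ, and reports whether
# anything changed.
def update_attributes_if_changed(question, new_attrs)
  changed = false
  new_attrs.each do |key, value|
    next if question[key] == value   # leave unchanged attributes alone
    question[key] = value
    changed = true
  end
  changed   # the real helper saves the question after this loop
end

q = { txt: 'Rate the design', weight: 1 }
raise unless update_attributes_if_changed(q, { txt: 'Rate the design', weight: 2 })
raise unless q[:weight] == 2
```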
&lt;br /&gt;
=== questionnaire_factory ===&lt;br /&gt;
This method creates a questionnaire instance based on its type. To test this method, we provide various valid and invalid types. We identify the following test cases:&lt;br /&gt;
&lt;br /&gt;
* When a valid type is provided: Verify that the method creates a questionnaire instance for a valid type. Test scenarios for each valid questionnaire type.&lt;br /&gt;
* When an undefined type is provided: Verify that the method sets a flash error message when an undefined questionnaire type is provided.&lt;br /&gt;
&lt;br /&gt;
We have structured the test plan to cover each method comprehensively and ensure that they function correctly under various scenarios.&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Develop code testing scenarios for review_bids_helper ==&lt;br /&gt;
After reviewing the ReviewBidsHelper module, we identified two methods that require testing:&lt;br /&gt;
&lt;br /&gt;
=== get_intelligent_topic_row_review_bids ===&lt;br /&gt;
* Objective: Verify that the method get_intelligent_topic_row_review_bids correctly renders the topic row for the topics table in review_bids/show.html.erb.&lt;br /&gt;
&lt;br /&gt;
* Test Scenario: When called with a topic, selected topics, and the number of participants, the method should generate HTML code for the topic row with appropriate background color based on its status.&lt;br /&gt;
&lt;br /&gt;
* Preconditions: Ensure that the topic, selected topics, and the number of participants are available for rendering. Set up a scenario with a topic and relevant data for selected topics and the number of participants.&lt;br /&gt;
* Test Steps: Provide a topic, selected topics, and the number of participants to the method. Call the get_intelligent_topic_row_review_bids method with the provided parameters.&lt;br /&gt;
* Expected Result: The method should return HTML code for the topic row with appropriate background color based on its status.&lt;br /&gt;
* Pass Criteria: The method returns the correct HTML code for the topic row. The background color of the row is determined correctly based on the topic's status and selection.&lt;br /&gt;
* Fail Criteria: The method returns incorrect HTML code for the topic row. The background color of the row is not determined correctly based on the topic's status and selection.&lt;br /&gt;
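The pass criterion that row color track selection status can be sketched as follows (hypothetical status-to-color table and hash keys; the real helper builds full HTML):&lt;br /&gt;

```ruby
# Hypothetical pass-criteria check: the row's color should follow the
# topic's selection status.
STATUS_COLOR = { selected: 'rgb(0, 255, 0)', waitlisted: 'rgb(255, 255, 0)',
                 not_selected: 'rgb(255, 0, 0)' }.freeze

def row_color(topic_id, selected_topics)
  sel = selected_topics.find { |t| t[:topic_id] == topic_id }
  status = if sel.nil? then :not_selected
           elsif sel[:is_waitlisted] then :waitlisted
           else :selected
           end
  STATUS_COLOR[status]
end

bids = [{ topic_id: 1, is_waitlisted: true }]
raise unless row_color(1, bids) == 'rgb(255, 255, 0)'
raise unless row_color(9, bids) == 'rgb(255, 0, 0)'
```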
&lt;br /&gt;
=== get_topic_bg_color_review_bids ===&lt;br /&gt;
&lt;br /&gt;
* Objective: Verify that the method get_topic_bg_color_review_bids correctly calculates the background color for a topic in the review_bids/show.html.erb template.&lt;br /&gt;
* Test Scenario: When called with a topic and the number of participants, the method should calculate the background color based on the number of bids and the number of participants.&lt;br /&gt;
* Preconditions: Ensure that the topic and the number of participants are available for calculation. Set up a scenario with a topic and the number of participants.&lt;br /&gt;
* Test Steps: Provide a topic and the number of participants to the method. Call the get_topic_bg_color_review_bids method with the provided parameters.&lt;br /&gt;
* Expected Result: The method should return a string representing the RGB color code for the topic's background color.&lt;br /&gt;
* Pass Criteria: The method returns the correct RGB color code for the topic's background color. The background color is calculated accurately based on the number of bids and the number of participants.&lt;br /&gt;
* Fail Criteria: The method returns an incorrect RGB color code for the topic's background color. The background color is not calculated accurately based on the number of bids and the number of participants.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
The design document outlines a comprehensive plan for the upcoming project focused on enhancing the testing and code coverage for the `questionnaire_helper` and `review_bids_helper` files in Expertiza. With a clear problem statement and objectives defined, the project aims to address the current code coverage gaps by developing thorough test plans and scenarios for both helpers. The document provides an overview of the classes and methods involved. Key objectives include developing code testing scenarios, improving code coverage, and ensuring the reliability and effectiveness of the helpers. Moving forward, the project will involve the implementation of the outlined test plans, execution of test scenarios, and iterative refinement of the codebase to achieve the desired objectives.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=155318</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=155318"/>
		<updated>2024-04-09T00:33:11Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Objective 2: Develop code testing scenarios for review_bids_helper */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
Our project involves writing test cases for the `questionnaire_helper` and `review_bids_helper` files in Expertiza, an open-source assignment/project management portal built on the Ruby on Rails framework. This platform facilitates collaborative learning and feedback among both instructors and students. Instructors can create and customize assignments, define topics for student sign-ups, and manage teams for various projects. Students, on the other hand, can sign up for topics, form teams, and participate in peer reviews to enhance each other's learning experiences. Our goal is to develop comprehensive test plans and increase code coverage for these helper files to ensure their reliability and effectiveness in the Expertiza platform.&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods that assist with managing questionnaires in Expertiza. It provides methods for adjusting advice size, updating questionnaire questions, and creating questionnaire instances based on type, and defines constants that support these tasks. The module breaks down as follows:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
   &lt;br /&gt;
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
&lt;br /&gt;
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
&lt;br /&gt;
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
The Ruby module `ReviewBidsHelper` serves as a helper for views related to reviewing bids in Expertiza. It provides methods for rendering topic rows and for determining a topic's background color based on its bid status and the number of participants. The module breaks down as follows:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates the HTML markup for a row in the table that displays topics for review bids.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on the conditions (whether the topic is selected and not waitlisted, or selected and waitlisted), it generates a specific row HTML.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios for questionnaire_helper ==&lt;br /&gt;
We follow the single responsibility principle so that each test exercises only one behavior.&lt;br /&gt;
After reviewing the QuestionnaireHelper class, we identified three methods in the class that require testing. These methods are as follows:&lt;br /&gt;
&lt;br /&gt;
* adjust_advice_size&lt;br /&gt;
* update_questionnaire_questions&lt;br /&gt;
* questionnaire_factory&lt;br /&gt;
&lt;br /&gt;
For testing purposes, we mock the following items:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
let(:questionnaire) { double('Questionnaire', max_question_score: 5, min_question_score: 1) }&lt;br /&gt;
let(:question) { double('ScoredQuestion', id: 1) }&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== adjust_advice_size ===&lt;br /&gt;
To test this method, we mock a questionnaire and a scored question. We verify that the method correctly adjusts the size of advice for a scored question. We identify the following test cases:&lt;br /&gt;
&lt;br /&gt;
* When the question is a ScoredQuestion: Verify that the method adjusts advice size by ensuring that the correct number of advice records is created, advice outside the score range is deleted, and duplicate advice records are removed. Test scenarios in which the question's score is within the defined range, above the range, and below the range.&lt;br /&gt;
* When the question is not a ScoredQuestion: Verify that the method does not adjust advice size for non-scored questions.&lt;br /&gt;
&lt;br /&gt;
=== update_questionnaire_questions ===&lt;br /&gt;
This method updates attributes of questionnaire questions based on form data. To test this method, we mock a question and its associated form data. We identify the following test cases:&lt;br /&gt;
&lt;br /&gt;
* When params contain questions: Verify that the method updates question attributes when parameters contain questions. Test scenarios for updating various attributes of a question.&lt;br /&gt;
* When params do not contain questions: Verify that the method does not update any question attributes when parameters do not contain questions.&lt;br /&gt;
&lt;br /&gt;
=== questionnaire_factory ===&lt;br /&gt;
This method creates a questionnaire instance based on its type. To test this method, we provide various valid and invalid types. We identify the following test cases:&lt;br /&gt;
&lt;br /&gt;
* When a valid type is provided: Verify that the method creates a questionnaire instance for a valid type. Test scenarios for each valid questionnaire type.&lt;br /&gt;
* When an undefined type is provided: Verify that the method sets a flash error message when an undefined questionnaire type is provided.&lt;br /&gt;
&lt;br /&gt;
We have structured the test plan to cover each method comprehensively and ensure that they function correctly under various scenarios.&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Develop code testing scenarios for review_bids_helper ==&lt;br /&gt;
After reviewing the ReviewBidsHelper module, we identified two methods that require testing:&lt;br /&gt;
&lt;br /&gt;
=== get_intelligent_topic_row_review_bids ===&lt;br /&gt;
* Objective: Verify that the method get_intelligent_topic_row_review_bids correctly renders the topic row for the topics table in review_bids/show.html.erb.&lt;br /&gt;
&lt;br /&gt;
* Test Scenario: When called with a topic, selected topics, and the number of participants, the method should generate HTML code for the topic row with appropriate background color based on its status.&lt;br /&gt;
&lt;br /&gt;
* Preconditions: Ensure that the topic, selected topics, and the number of participants are available for rendering. Set up a scenario with a topic and relevant data for selected topics and the number of participants.&lt;br /&gt;
* Test Steps:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Provide a topic, selected topics, and the number of participants to the method.&lt;br /&gt;
Call the get_intelligent_topic_row_review_bids method with the provided parameters.&lt;br /&gt;
Expected Result:&lt;br /&gt;
The method should return HTML code for the topic row with appropriate background color based on its status.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Pass Criteria: The method returns the correct HTML code for the topic row. The background color of the row is determined correctly based on the topic's status and selection.&lt;br /&gt;
* Fail Criteria: The method returns incorrect HTML code for the topic row. The background color of the row is not determined correctly based on the topic's status and selection.&lt;br /&gt;
&lt;br /&gt;
=== get_topic_bg_color_review_bids ===&lt;br /&gt;
&lt;br /&gt;
* Objective: Verify that the method get_topic_bg_color_review_bids correctly calculates the background color for a topic in the review_bids/show.html.erb template.&lt;br /&gt;
* Test Scenario: When called with a topic and the number of participants, the method should calculate the background color based on the number of bids and the number of participants.&lt;br /&gt;
* Preconditions: Ensure that the topic and the number of participants are available for calculation. Set up a scenario with a topic and the number of participants.&lt;br /&gt;
* Test Steps:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Provide a topic and the number of participants to the method.&lt;br /&gt;
Call the get_topic_bg_color_review_bids method with the provided parameters.&lt;br /&gt;
Expected Result:&lt;br /&gt;
The method should return a string representing the RGB color code for the topic's background color.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Pass Criteria: The method returns the correct RGB color code for the topic's background color. The background color is calculated accurately based on the number of bids and the number of participants.&lt;br /&gt;
* Fail Criteria: The method returns an incorrect RGB color code for the topic's background color. The background color is not calculated accurately based on the number of bids and the number of participants.&lt;br /&gt;
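The proportional shading the test plan checks can be illustrated with a small pure function. The formula below is an assumption for demonstration purposes (red fading to green as bids approach the number of participants); it is not the exact formula in review_bids_helper.rb:&lt;br /&gt;

```ruby
# Hedged sketch: shade a topic from red (no bids) toward green
# (bids close to num_participants) and return an RGB color string.
def topic_bg_color_sketch(bid_count, num_participants)
  return 'rgb(255, 0, 0)' if num_participants.zero?
  # Fraction of participants who have bid on this topic, capped at 1.0.
  ratio = [bid_count.to_f / num_participants, 1.0].min
  red   = ((1.0 - ratio) * 255).round
  green = (ratio * 255).round
  "rgb(#{red}, #{green}, 0)"
end
```

A spec for such a function would assert the boundary cases (zero bids, all participants bidding) and one intermediate value, which is exactly the pass criterion stated above.&lt;br /&gt;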
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
The design document outlines a comprehensive plan for the upcoming project focused on enhancing the testing and code coverage for the `questionnaire_helper` and `review_bids_helper` files in Expertiza. With a clear problem statement and objectives defined, the project aims to address the current code coverage gaps by developing thorough test plans and scenarios for both helpers. The document provides an overview of the classes and methods involved. Key objectives include developing code testing scenarios, improving code coverage, and ensuring the reliability and effectiveness of the helpers. Moving forward, the project will involve the implementation of the outlined test plans, execution of test scenarios, and iterative refinement of the codebase to achieve the desired objectives.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=155305</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=155305"/>
		<updated>2024-04-09T00:26:18Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* method 1: adjust_advice_size */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
Our project involves writing test cases for the `questionnaire_helper` and `review_bids_helper` files in Expertiza, an open-source assignment/project management portal built on the Ruby on Rails framework. This platform facilitates collaborative learning and feedback among both instructors and students. Instructors can create and customize assignments, define topics for student sign-ups, and manage teams for various projects. Students, on the other hand, can sign up for topics, form teams, and participate in peer reviews to enhance each other's learning experiences. Our goal is to develop comprehensive test plans and increase code coverage for these helper files to ensure their reliability and effectiveness in the Expertiza platform.&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods to assist with managing questionnaires in Expertiza. QuestionnaireHelper provides methods for adjusting advice size, updating questionnaire questions, and creating questionnaire instances based on type. It also defines constants to facilitate these functionalities. These methods are used within the application to handle questionnaire-related tasks efficiently. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
   &lt;br /&gt;
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
&lt;br /&gt;
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
&lt;br /&gt;
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
The Ruby module `ReviewBidsHelper` serves as a helper for views related to reviewing bids in Expertiza. `ReviewBidsHelper` provides helper methods for rendering topic rows and for determining the background color of topics based on their bid status and the number of participants. These methods are used in the views associated with reviewing bids in the application. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates HTML markup for a row in a table displaying topics for review bids.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on the conditions (whether the topic is selected and not waitlisted, or selected and waitlisted), it generates a specific row HTML.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios for questionnaire_helper ==&lt;br /&gt;
We follow the single responsibility principle so that each test exercises only one behavior.&lt;br /&gt;
After reviewing the QuestionnaireHelper module, we identified three methods that require testing. These methods are as follows:&lt;br /&gt;
&lt;br /&gt;
* adjust_advice_size&lt;br /&gt;
* update_questionnaire_questions&lt;br /&gt;
* questionnaire_factory&lt;br /&gt;
&lt;br /&gt;
For testing purposes, we mock the following items:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
let(:questionnaire) { double('Questionnaire', max_question_score: 5, min_question_score: 1) }&lt;br /&gt;
let(:question) { double('ScoredQuestion', id: 1) }&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== adjust_advice_size ===&lt;br /&gt;
To test this method, we mock a questionnaire and a scored question. We verify that the method correctly adjusts the size of advice for a scored question. We identify the following test cases:&lt;br /&gt;
&lt;br /&gt;
* When the question is a ScoredQuestion: Verify that the method adjusts advice size by ensuring that the correct number of advice records is created, advice outside the range is deleted, and duplicate advice records are removed. Test scenarios when the question's score is within the defined range, above the range, and below the range.&lt;br /&gt;
* When the question is not a ScoredQuestion: Verify that the method does not adjust advice size for non-scored questions.&lt;br /&gt;
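The create/delete behavior these test cases verify can be modeled on plain data, which is roughly what the specs assert after stubbing the ActiveRecord calls. The function below is a hypothetical stand-in written for illustration, not the actual adjust_advice_size implementation:&lt;br /&gt;

```ruby
# Hedged sketch: given the scores of existing advice records and the
# questionnaire's score range, report which scores need their advice
# deleted (out-of-range or duplicated) and which need advice created.
def advice_adjustment_sketch(existing_scores, min_score, max_score)
  wanted       = (min_score..max_score).to_a
  out_of_range = existing_scores.reject { |s| wanted.include?(s) }
  duplicates   = existing_scores.tally.select { |_s, n| n > 1 }.keys
  missing      = wanted - existing_scores
  { delete: (out_of_range + duplicates).uniq, create: missing }
end
```

A spec can then assert each part of the adjustment (out-of-range deletion, de-duplication, missing-advice creation) in its own example, keeping every test focused on one behavior.&lt;br /&gt;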
&lt;br /&gt;
=== update_questionnaire_questions ===&lt;br /&gt;
This method updates attributes of questionnaire questions based on form data. To test this method, we mock a question and its associated form data. We identify the following test cases:&lt;br /&gt;
&lt;br /&gt;
* When params contain questions: Verify that the method updates question attributes when parameters contain questions. Test scenarios for updating various attributes of a question.&lt;br /&gt;
* When params do not contain questions: Verify that the method does not update any question attributes when parameters do not contain questions.&lt;br /&gt;
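The diff-then-update loop these cases target can be illustrated with plain hashes standing in for an ActiveRecord model and the submitted params. This is an assumption-laden sketch of the comparison step only, not the helper's actual code:&lt;br /&gt;

```ruby
# Hedged sketch: return only the submitted attributes whose value
# differs from the current value, i.e. the attributes the helper
# would actually write before saving the question.
def changed_attributes_sketch(current, submitted)
  submitted.reject { |attr, value| current[attr] == value }
end
```

A spec would then verify that unchanged attributes are never written, and that the question is saved only when the changed set is applied.&lt;br /&gt;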
&lt;br /&gt;
=== questionnaire_factory ===&lt;br /&gt;
This method creates a questionnaire instance based on its type. To test this method, we provide various valid and invalid types. We identify the following test cases:&lt;br /&gt;
&lt;br /&gt;
* When a valid type is provided: Verify that the method creates a questionnaire instance for a valid type. Test scenarios for each valid questionnaire type.&lt;br /&gt;
* When an undefined type is provided: Verify that the method sets a flash error message when an undefined questionnaire type is provided.&lt;br /&gt;
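Both cases reduce to a lookup in a type-to-class map. The sketch below mirrors that pattern with illustrative stand-in classes and an error value in place of the flash message; the class names and error shape are assumptions, not Expertiza's QUESTIONNAIRE_MAP:&lt;br /&gt;

```ruby
# Hedged sketch of a questionnaire factory: map a type string to a
# class and instantiate it, or report an error for unknown types.
class ReviewQuestionnaire; end
class SurveyQuestionnaire; end

QUESTIONNAIRE_MAP_SKETCH = {
  'ReviewQuestionnaire' => ReviewQuestionnaire,
  'SurveyQuestionnaire' => SurveyQuestionnaire
}.freeze

def questionnaire_factory_sketch(type)
  klass = QUESTIONNAIRE_MAP_SKETCH[type]
  # Stand-in for setting flash[:error] when the type is undefined.
  return { error: "Questionnaire type not found: #{type}" } if klass.nil?
  klass.new
end
```

The specs then need one example per valid type (asserting the instance's class) and one example for an undefined type (asserting the error path).&lt;br /&gt;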
&lt;br /&gt;
We have structured the test plan to cover each method comprehensively and ensure that they function correctly under various scenarios.&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Develop code testing scenarios for review_bids_helper ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
The design document outlines a comprehensive plan for the upcoming project focused on enhancing the testing and code coverage for the `questionnaire_helper` and `review_bids_helper` files in Expertiza. With a clear problem statement and objectives defined, the project aims to address the current code coverage gaps by developing thorough test plans and scenarios for both helpers. The document provides an overview of the classes and methods involved. Key objectives include developing code testing scenarios, improving code coverage, and ensuring the reliability and effectiveness of the helpers. Moving forward, the project will involve the implementation of the outlined test plans, execution of test scenarios, and iterative refinement of the codebase to achieve the desired objectives.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=155258</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=155258"/>
		<updated>2024-04-09T00:05:48Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Objective 1: Develop code testing scenarios for questionnaire_helper */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
Our project involves writing test cases for the `questionnaire_helper` and `review_bids_helper` files in Expertiza, an open-source assignment/project management portal built on the Ruby on Rails framework. This platform facilitates collaborative learning and feedback among both instructors and students. Instructors can create and customize assignments, define topics for student sign-ups, and manage teams for various projects. Students, on the other hand, can sign up for topics, form teams, and participate in peer reviews to enhance each other's learning experiences. Our goal is to develop comprehensive test plans and increase code coverage for these helper files to ensure their reliability and effectiveness in the Expertiza platform.&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods to assist with managing questionnaires in Expertiza. QuestionnaireHelper provides methods for adjusting advice size, updating questionnaire questions, and creating questionnaire instances based on type. It also defines constants to facilitate these functionalities. These methods are used within the application to handle questionnaire-related tasks efficiently. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
   &lt;br /&gt;
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
&lt;br /&gt;
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
&lt;br /&gt;
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
The Ruby module `ReviewBidsHelper` serves as a helper for views related to reviewing bids in Expertiza. `ReviewBidsHelper` provides helper methods for rendering topic rows and for determining the background color of topics based on their bid status and the number of participants. These methods are used in the views associated with reviewing bids in the application. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates HTML markup for a row in a table displaying topics for review bids.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on the conditions (whether the topic is selected and not waitlisted, or selected and waitlisted), it generates a specific row HTML.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios for questionnaire_helper ==&lt;br /&gt;
We follow the single responsibility principle so that each test exercises only one behavior.&lt;br /&gt;
=== method 1: adjust_advice_size ===&lt;br /&gt;
* When the question is a ScoredQuestion: Tests that the method adjusts the size of advice for a ScoredQuestion by ensuring that the correct number of advice records is created, advice outside the range is deleted, and duplicate advice records are removed.&lt;br /&gt;
* When the question is not a ScoredQuestion: Tests that the method does not adjust advice size for non-scored questions.&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Develop code testing scenarios for review_bids_helper ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
The design document outlines a comprehensive plan for the upcoming project focused on enhancing the testing and code coverage for the `questionnaire_helper` and `review_bids_helper` files in Expertiza. With a clear problem statement and objectives defined, the project aims to address the current code coverage gaps by developing thorough test plans and scenarios for both helpers. The document provides an overview of the classes and methods involved. Key objectives include developing code testing scenarios, improving code coverage, and ensuring the reliability and effectiveness of the helpers. Moving forward, the project will involve the implementation of the outlined test plans, execution of test scenarios, and iterative refinement of the codebase to achieve the desired objectives.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=154996</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=154996"/>
		<updated>2024-04-08T17:04:51Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
Our project involves writing test cases for the `questionnaire_helper` and `review_bids_helper` files in Expertiza, an open-source assignment/project management portal built on the Ruby on Rails framework. This platform facilitates collaborative learning and feedback among both instructors and students. Instructors can create and customize assignments, define topics for student sign-ups, and manage teams for various projects. Students, on the other hand, can sign up for topics, form teams, and participate in peer reviews to enhance each other's learning experiences. Our goal is to develop comprehensive test plans and increase code coverage for these helper files to ensure their reliability and effectiveness in the Expertiza platform.&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods to assist with managing questionnaires in Expertiza. QuestionnaireHelper provides methods for adjusting advice size, updating questionnaire questions, and creating questionnaire instances based on type. It also defines constants to facilitate these functionalities. These methods are used within the application to handle questionnaire-related tasks efficiently. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
   &lt;br /&gt;
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
&lt;br /&gt;
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
&lt;br /&gt;
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
The Ruby module `ReviewBidsHelper` serves as a helper for views related to reviewing bids in Expertiza. `ReviewBidsHelper` provides helper methods for rendering topic rows and for determining the background color of topics based on their bid status and the number of participants. These methods are used in the views associated with reviewing bids in the application. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates HTML markup for a row in a table displaying topics for review bids.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on the conditions (whether the topic is selected and not waitlisted, or selected and waitlisted), it generates a specific row HTML.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios for questionnaire_helper ==&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Develop code testing scenarios for review_bids_helper ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
The design document outlines a comprehensive plan for the upcoming project focused on enhancing the testing and code coverage for the `questionnaire_helper` and `review_bids_helper` files in Expertiza. With a clear problem statement and objectives defined, the project aims to address the current code coverage gaps by developing thorough test plans and scenarios for both helpers. The document provides an overview of the classes and methods involved. Key objectives include developing code testing scenarios, improving code coverage, and ensuring the reliability and effectiveness of the helpers. Moving forward, the project will involve the implementation of the outlined test plans, execution of test scenarios, and iterative refinement of the codebase to achieve the desired objectives.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=154995</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=154995"/>
		<updated>2024-04-08T17:04:27Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
Our project involves writing test cases for the `questionnaire_helper` and `review_bids_helper` files in Expertiza, an open-source assignment/project management portal built on the Ruby on Rails framework. This platform facilitates collaborative learning and feedback among both instructors and students. Instructors can create and customize assignments, define topics for student sign-ups, and manage teams for various projects. Students, on the other hand, can sign up for topics, form teams, and participate in peer reviews to enhance each other's learning experiences. Our goal is to develop comprehensive test plans and increase code coverage for these helper files to ensure their reliability and effectiveness in the Expertiza platform.&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods that assist with managing questionnaires in Expertiza: adjusting advice size, updating questionnaire questions, and creating questionnaire instances based on type. It also defines constants that support these tasks. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
   &lt;br /&gt;
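The `adjust_advice_size` steps above can be sketched in plain Ruby. This is a hedged illustration, not Expertiza's implementation: `Advice` here is a hypothetical Struct standing in for the ActiveRecord model, and the method works on an in-memory array rather than the database.&lt;br /&gt;

```ruby
# Hypothetical stand-in for the Expertiza Advice model.
Advice = Struct.new(:score, :comment)

# Keep exactly one advice record per score in min_score..max_score.
def adjust_advice_size(advices, min_score, max_score)
  # Delete advice outside the score range.
  in_range = advices.select { |a| (min_score..max_score).cover?(a.score) }
  # Delete duplicate advice records, keeping the first per score.
  deduped = in_range.uniq(&:score)
  # Ensure every score in the range has an associated (possibly blank) advice.
  missing = (min_score..max_score).to_a - deduped.map(&:score)
  deduped + missing.map { |s| Advice.new(s, "") }
end
```

For example, with advices for scores 1, 1, and 7 and a range of 1..3, the result holds one advice each for scores 1, 2, and 3.&lt;br /&gt;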
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
&lt;br /&gt;
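The "update only changed attributes, then save" pattern described above can be sketched as follows. This is a simplified illustration: a plain hash stands in for Rails `params[:question]` and a Struct for the Question model, both hypothetical.&lt;br /&gt;

```ruby
# Hypothetical stand-in for the Question model; counts saves for illustration.
Question = Struct.new(:txt, :weight) do
  attr_reader :save_count
  def save
    @save_count = (@save_count || 0) + 1
  end
end

def update_questionnaire_questions(questions, question_params)
  # Check for the presence of the form data.
  return if question_params.nil?
  question_params.each do |id, attrs|
    question = questions[id]
    attrs.each do |name, value|
      # Skip attributes whose value has not changed.
      next if question[name] == value
      question[name] = value
    end
    question.save
  end
end
```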
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
&lt;br /&gt;
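The factory pattern used by `questionnaire_factory` can be sketched like this. The class names and the `flash` hash below are illustrative stand-ins, not Expertiza's actual constants.&lt;br /&gt;

```ruby
# Illustrative questionnaire classes (stand-ins for the real models).
class ReviewQuestionnaire; end
class AuthorFeedbackQuestionnaire; end

# Map questionnaire types to their classes, as QUESTIONNAIRE_MAP does.
QUESTIONNAIRE_MAP = {
  "ReviewQuestionnaire" => ReviewQuestionnaire,
  "AuthorFeedbackQuestionnaire" => AuthorFeedbackQuestionnaire
}.freeze

def questionnaire_factory(type, flash = {})
  klass = QUESTIONNAIRE_MAP[type]
  if klass.nil?
    # Unknown type: set an error message instead of raising.
    flash[:error] = "Unknown questionnaire type: #{type}"
    return nil
  end
  klass.new
end
```

A test can then cover both branches: a known type returns a new instance, while an unknown type returns nil and sets the error message.&lt;br /&gt;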
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
This Ruby module, `ReviewBidsHelper`, serves as a view helper for reviewing bids in Expertiza. It provides methods for rendering topic rows and for determining the background color of a topic based on its bid status and the number of participants; these methods are used in the views associated with reviewing bids. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates the HTML markup for a row in the table that displays topics for review bids.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on whether the topic is selected and not waitlisted, or selected and waitlisted, it generates the appropriate HTML for the row.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
&lt;br /&gt;
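The branching in `get_intelligent_topic_row_review_bids` can be sketched as below. To keep the sketch self-contained it returns a row status symbol rather than the actual HTML markup, and `Bid` is a hypothetical stand-in for the real bid records.&lt;br /&gt;

```ruby
# Hypothetical stand-in for a review bid on a topic.
Bid = Struct.new(:topic_id, :waitlisted)

# Classify a topic's row by looking it up among the selected topics.
def row_status(topic_id, selected_topics)
  bid = selected_topics.find { |b| b.topic_id == topic_id }
  return :not_selected if bid.nil?
  bid.waitlisted ? :selected_waitlisted : :selected
end
```

Test scenarios then cover the three cases: selected and not waitlisted, selected and waitlisted, and not selected at all.&lt;br /&gt;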
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
&lt;br /&gt;
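The proportional coloring in `get_topic_bg_color_review_bids` can be sketched as follows: the ratio of bids to participants is mapped onto a red/green RGB string. This is an assumed color scheme for illustration; the exact colors Expertiza uses may differ.&lt;br /&gt;

```ruby
# Map the bid-to-participant ratio to an RGB string:
# few bids -> green, many bids -> red.
def topic_bg_color(num_bids, num_participants)
  ratio = num_participants.zero? ? 0.0 : num_bids.to_f / num_participants
  ratio = ratio.clamp(0.0, 1.0)
  red = (255 * ratio).round
  green = (255 * (1.0 - ratio)).round
  "rgb(#{red}, #{green}, 0)"
end

topic_bg_color(0, 10)   # => "rgb(0, 255, 0)"
topic_bg_color(10, 10)  # => "rgb(255, 0, 0)"
```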
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios for questionnaire_helper ==&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Develop code testing scenarios for review_bids_helper ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
The design document outlines a comprehensive plan for the upcoming project focused on enhancing the testing and code coverage for the `questionnaire_helper` and `review_bids_helper` files in Expertiza. With a clear problem statement and objectives defined, the project aims to address the current code coverage gaps by developing thorough test plans and scenarios for both helpers. The document provides an overview of the classes and methods involved. Key objectives include developing code testing scenarios, improving code coverage, and ensuring the reliability and effectiveness of the helpers. Moving forward, the project will involve the implementation of the outlined test plans, execution of test scenarios, and iterative refinement of the codebase to achieve the desired objectives.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [https://github.com/expertiza/expertiza/pull/2778 here]&lt;br /&gt;
&lt;br /&gt;
Link to GitHub project page: [https://github.com/users/Prachit99/projects/1 here]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [https://drive.google.com/file/d/1gZ5iDgqMW3COOT9Uw-_yhJlwJLZDxz-s/view?usp=sharing here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=154994</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=154994"/>
		<updated>2024-04-08T16:59:25Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
Our project involves writing test cases for the `questionnaire_helper` and `review_bids_helper` files in Expertiza, an open-source assignment/project management portal built on the Ruby on Rails framework. This platform facilitates collaborative learning and feedback among both instructors and students. Instructors can create and customize assignments, define topics for student sign-ups, and manage teams for various projects. Students, on the other hand, can sign up for topics, form teams, and participate in peer reviews to enhance each other's learning experiences. Our goal is to develop comprehensive test plans and increase code coverage for these helper files to ensure their reliability and effectiveness in the Expertiza platform.&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods that assist with managing questionnaires in Expertiza: adjusting advice size, updating questionnaire questions, and creating questionnaire instances based on type. It also defines constants that support these tasks. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
   &lt;br /&gt;
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
&lt;br /&gt;
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
&lt;br /&gt;
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
This Ruby module, `ReviewBidsHelper`, serves as a view helper for reviewing bids in Expertiza. It provides methods for rendering topic rows and for determining the background color of a topic based on its bid status and the number of participants; these methods are used in the views associated with reviewing bids. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates the HTML markup for a row in the table that displays topics for review bids.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on whether the topic is selected and not waitlisted, or selected and waitlisted, it generates the appropriate HTML for the row.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios for questionnaire_helper ==&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Develop code testing scenarios for review_bids_helper ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this project, we aimed to test the accuracy of the Hamer algorithm used for assessing the credibility of reviewers in a peer assessment system. We began by developing code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values. These scenarios covered various review scenarios, including cases where reviewers provided extreme scores.&lt;br /&gt;
&lt;br /&gt;
It was established that the original reputation web server was implemented incorrectly.&lt;br /&gt;
&lt;br /&gt;
As a result, we proceeded to reimplement the Hamer algorithm in Ruby, incorporating adjustments to handle nil values appropriately. Subsequently, we validated the accuracy of the newly implemented algorithm using the same testing scenarios. While the results initially showed a skew towards lower values due to our treatment of nil values, we acknowledge the need for further refinement to handle these cases more effectively.&lt;br /&gt;
&lt;br /&gt;
In conclusion, this project highlights the importance of rigorous testing and implementation adjustments in ensuring the reliability of algorithms used in peer assessment systems. Moving forward, we recommend further refinements and validations to enhance the accuracy and robustness of the Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [https://github.com/expertiza/expertiza/pull/2778 here]&lt;br /&gt;
&lt;br /&gt;
Link to GitHub project page: [https://github.com/users/Prachit99/projects/1 here]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [https://drive.google.com/file/d/1gZ5iDgqMW3COOT9Uw-_yhJlwJLZDxz-s/view?usp=sharing here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=154992</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=154992"/>
		<updated>2024-04-08T16:57:32Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Objective 3: Validate the accuracy of the newly implemented Hamer algorithm */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
Our project involves writing test cases for the `questionnaire_helper` and `review_bids_helper` files in Expertiza, an open-source assignment/project management portal built on the Ruby on Rails framework. This platform facilitates collaborative learning and feedback among both instructors and students. Instructors can create and customize assignments, define topics for student sign-ups, and manage teams for various projects. Students, on the other hand, can sign up for topics, form teams, and participate in peer reviews to enhance each other's learning experiences. Our goal is to develop comprehensive test plans and increase code coverage for these helper files to ensure their reliability and effectiveness in the Expertiza platform.&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods that assist with managing questionnaires in Expertiza: adjusting advice size, updating questionnaire questions, and creating questionnaire instances based on type. It also defines constants that support these tasks. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
   &lt;br /&gt;
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
&lt;br /&gt;
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
&lt;br /&gt;
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
This Ruby module, `ReviewBidsHelper`, serves as a view helper for reviewing bids in Expertiza. It provides methods for rendering topic rows and for determining the background color of a topic based on its bid status and the number of participants; these methods are used in the views associated with reviewing bids. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates the HTML markup for a row in the table that displays topics for review bids.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on whether the topic is selected and not waitlisted, or selected and waitlisted, it generates the appropriate HTML for the row.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios for questionnaire_helper ==&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Develop code testing scenarios for review_bids_helper ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this project, we aimed to test the accuracy of the Hamer algorithm used for assessing the credibility of reviewers in a peer assessment system. We began by developing code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values. These scenarios covered various review scenarios, including cases where reviewers provided extreme scores.&lt;br /&gt;
&lt;br /&gt;
It was established that the original reputation web server was implemented incorrectly.&lt;br /&gt;
&lt;br /&gt;
As a result, we proceeded to reimplement the Hamer algorithm in Ruby, incorporating adjustments to handle nil values appropriately. Subsequently, we validated the accuracy of the newly implemented algorithm using the same testing scenarios. While the results initially showed a skew towards lower values due to our treatment of nil values, we acknowledge the need for further refinement to handle these cases more effectively.&lt;br /&gt;
&lt;br /&gt;
In conclusion, this project highlights the importance of rigorous testing and implementation adjustments in ensuring the reliability of algorithms used in peer assessment systems. Moving forward, we recommend further refinements and validations to enhance the accuracy and robustness of the Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [https://github.com/expertiza/expertiza/pull/2778 here]&lt;br /&gt;
&lt;br /&gt;
Link to GitHub Project page: [https://github.com/users/Prachit99/projects/1 here]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [https://drive.google.com/file/d/1gZ5iDgqMW3COOT9Uw-_yhJlwJLZDxz-s/view?usp=sharing here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=154991</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=154991"/>
		<updated>2024-04-08T16:55:30Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Objective 2: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values. */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
Our project involves writing test cases for the `questionnaire_helper` and `review_bids_helper` files in Expertiza, an open-source assignment/project management portal built on the Ruby on Rails framework. This platform facilitates collaborative learning and feedback among both instructors and students. Instructors can create and customize assignments, define topics for student sign-ups, and manage teams for various projects. Students, on the other hand, can sign up for topics, form teams, and participate in peer reviews to enhance each other's learning experiences. Our goal is to develop comprehensive test plans and increase code coverage for these helper files to ensure their reliability and effectiveness in the Expertiza platform.&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods to assist with managing questionnaires in Expertiza. QuestionnaireHelper provides methods for adjusting advice size, updating questionnaire questions, and creating questionnaire instances based on their type. It also defines constants to support these functionalities. These methods are likely used within the application to handle questionnaire-related tasks efficiently. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
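The adjustment steps above can be illustrated with a small in-memory sketch. This is hypothetical: the real helper works on QuestionAdvice ActiveRecord objects and the questionnaire's min/max score range, whereas here each advice is a plain hash.&lt;br /&gt;

```ruby
# Hypothetical in-memory sketch of the advice-size adjustment described above.
# Each advice record is a plain hash; the real helper uses QuestionAdvice
# models and the questionnaire's score range.
def adjust_advice_size(advices, min_score, max_score)
  # 1. Drop advice whose score falls outside the score range.
  kept = advices.select { |a| a[:score].between?(min_score, max_score) }

  # 2. Drop duplicates, keeping one advice record per score.
  kept = kept.uniq { |a| a[:score] }

  # 3. Ensure every score in the range has an associated advice record.
  (min_score..max_score).each do |score|
    unless kept.any? { |a| a[:score] == score }
      kept.push(score: score, advice: "")
    end
  end

  kept.sort_by { |a| a[:score] }
end

advices = [
  { score: 0, advice: "needs work" },
  { score: 7, advice: "outside the range" },
  { score: 0, advice: "duplicate for score 0" }
]
p adjust_advice_size(advices, 0, 2)
```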
   &lt;br /&gt;
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
&lt;br /&gt;
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
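The factory dispatch described above can be sketched as follows. TYPE_MAP and the class names are illustrative stand-ins for the helper's QUESTIONNAIRE_MAP, and the real method reports an unknown type through a flash message rather than an exception.&lt;br /&gt;

```ruby
# Illustrative sketch of the factory dispatch described above. TYPE_MAP is a
# stand-in for the helper's QUESTIONNAIRE_MAP, and the class bodies are empty
# placeholders for the real questionnaire classes.
class ReviewQuestionnaire; end
class SurveyQuestionnaire; end

TYPE_MAP = {
  "ReviewQuestionnaire" => ReviewQuestionnaire,
  "SurveyQuestionnaire" => SurveyQuestionnaire
}.freeze

def questionnaire_factory(type)
  klass = TYPE_MAP[type]
  if klass.nil?
    # The real helper sets an error flash message here instead of raising.
    raise ArgumentError, "unknown questionnaire type: #{type}"
  end
  klass.new
end

puts questionnaire_factory("ReviewQuestionnaire").class
```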
&lt;br /&gt;
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
This Ruby module, `ReviewBidsHelper`, serves as a helper for views related to reviewing bids in Expertiza. `ReviewBidsHelper` provides helper methods for rendering topic rows and determining the background color for topics based on their bid status and the number of participants. These methods are likely used in the views associated with reviewing bids in the application. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates the HTML markup for a row in the table that displays topics for review bids.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on the conditions (whether the topic is selected and not waitlisted, or selected and waitlisted), it generates a specific row HTML.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios for questionnaire_helper ==&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Develop code testing scenarios for review_bids_helper ==&lt;br /&gt;
&lt;br /&gt;
== Objective 3: Validate the accuracy of the newly implemented Hamer algorithm ==&lt;br /&gt;
&lt;br /&gt;
We test the newly implemented Hamer algorithm with our scenarios and verify that its outputs match the expected values.&lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe ReputationWebServiceController do&lt;br /&gt;
    it &amp;quot;should calculate correct Hamer calculation&amp;quot; do&lt;br /&gt;
      weights = ReputationWebServiceController.new.calculate_reputation_score(reviews)&lt;br /&gt;
      keys = [&amp;quot;maxtoall&amp;quot;, &amp;quot;mintoall&amp;quot;, &amp;quot;mediantoall&amp;quot;, &amp;quot;incomplete_review&amp;quot;, &amp;quot;sametoall&amp;quot;, &amp;quot;passing1&amp;quot;, &amp;quot;passing2&amp;quot;, &amp;quot;passing3&amp;quot;]&lt;br /&gt;
      rounded_weights = weights.map { |w| w.round(1) }&lt;br /&gt;
      result_hash = keys.zip(rounded_weights).to_h&lt;br /&gt;
      expect(result_hash).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results ===&lt;br /&gt;
&lt;br /&gt;
[[File:Results hamer new.jpeg]]&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
The observed results indicate a tendency towards lower values, primarily due to our decision to include nil values and treat them as zeros in our analysis. This treatment has led to a skew in the scores, favoring lower values and potentially impacting the accuracy of our findings. To address this issue and improve the robustness of our analysis, it is advisable to explore alternative approaches such as using median or random values instead of treating nil values as zeros. However, we must also carefully consider how to handle incomplete reviews that contain nil values in our input dataset, as this can significantly influence the overall integrity and reliability of our results and conclusions.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this project, we aimed to test the accuracy of the Hamer algorithm used for assessing the credibility of reviewers in a peer assessment system. We began by developing code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values. These scenarios covered various review scenarios, including cases where reviewers provided extreme scores.&lt;br /&gt;
&lt;br /&gt;
It was established that the original reputation web server was implemented incorrectly.&lt;br /&gt;
&lt;br /&gt;
As a result, we proceeded to reimplement the Hamer algorithm in Ruby, incorporating adjustments to handle nil values appropriately. Subsequently, we validated the accuracy of the newly implemented algorithm using the same testing scenarios. While the results initially showed a skew towards lower values due to our treatment of nil values, we acknowledge the need for further refinement to handle these cases more effectively.&lt;br /&gt;
&lt;br /&gt;
In conclusion, this project highlights the importance of rigorous testing and implementation adjustments in ensuring the reliability of algorithms used in peer assessment systems. Moving forward, we recommend further refinements and validations to enhance the accuracy and robustness of the Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [https://github.com/expertiza/expertiza/pull/2778 here]&lt;br /&gt;
&lt;br /&gt;
Link to GitHub Project page: [https://github.com/users/Prachit99/projects/1 here]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [https://drive.google.com/file/d/1gZ5iDgqMW3COOT9Uw-_yhJlwJLZDxz-s/view?usp=sharing here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=154990</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=154990"/>
		<updated>2024-04-08T16:54:49Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Objective 1: Develop code testing scenarios */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
Our project involves writing test cases for the `questionnaire_helper` and `review_bids_helper` files in Expertiza, an open-source assignment/project management portal built on the Ruby on Rails framework. This platform facilitates collaborative learning and feedback among both instructors and students. Instructors can create and customize assignments, define topics for student sign-ups, and manage teams for various projects. Students, on the other hand, can sign up for topics, form teams, and participate in peer reviews to enhance each other's learning experiences. Our goal is to develop comprehensive test plans and increase code coverage for these helper files to ensure their reliability and effectiveness in the Expertiza platform.&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods to assist with managing questionnaires in Expertiza. QuestionnaireHelper provides methods for adjusting advice size, updating questionnaire questions, and creating questionnaire instances based on their type. It also defines constants to support these functionalities. These methods are likely used within the application to handle questionnaire-related tasks efficiently. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
   &lt;br /&gt;
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
&lt;br /&gt;
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
&lt;br /&gt;
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
This Ruby module, `ReviewBidsHelper`, serves as a helper for views related to reviewing bids in Expertiza. `ReviewBidsHelper` provides helper methods for rendering topic rows and determining the background color for topics based on their bid status and the number of participants. These methods are likely used in the views associated with reviewing bids in the application. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates the HTML markup for a row in the table that displays topics for review bids.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on the conditions (whether the topic is selected and not waitlisted, or selected and waitlisted), it generates a specific row HTML.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios for questionnaire_helper ==&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values. ==&lt;br /&gt;
&lt;br /&gt;
As established earlier, the values returned by the reputation server do not match the expected values. Hence, we concluded that the PeerLogic web service is implemented incorrectly. In this phase, we implemented the algorithm in Ruby as a function in the controller file /app/controllers/reputation_web_service_controller.rb.&lt;br /&gt;
&lt;br /&gt;
=== Changes made in implementation ===&lt;br /&gt;
* Coded the algorithm in Ruby as a controller method.&lt;br /&gt;
* Included handling for nil values in the review data.&lt;br /&gt;
&lt;br /&gt;
=== Code Snippet ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Method: calculate_reputation_score&lt;br /&gt;
# This method calculates the reputation scores for each reviewer based on the provided input data.&lt;br /&gt;
# It first parses the input JSON string to extract the submissions and their corresponding scores.&lt;br /&gt;
# Then, it calculates the average weighted grades per reviewer and the delta R values.&lt;br /&gt;
# Next, it calculates the weight prime values based on the delta R values.&lt;br /&gt;
# Finally, it calculates the reputation weights for each reviewer using the weight prime values.&lt;br /&gt;
#&lt;br /&gt;
# Params&lt;br /&gt;
#   - input_json: a JSON string representing the input data with submission scores&lt;br /&gt;
#&lt;br /&gt;
# Returns&lt;br /&gt;
#   An array of reputation scores, one score per reviewer, indicating their reputation in the system.&lt;br /&gt;
def calculate_reputation_score(reviews)&lt;br /&gt;
  # Parse the input JSON string&lt;br /&gt;
  # reviews = JSON.parse(input_json)&lt;br /&gt;
&lt;br /&gt;
  # Initialize arrays to store intermediate values&lt;br /&gt;
  grades = []&lt;br /&gt;
  delta_r = []&lt;br /&gt;
  weight_prime = []&lt;br /&gt;
  weight = []&lt;br /&gt;
&lt;br /&gt;
  # Calculate Average Weighted Grades per Reviewer&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    assignment_grade_average = reviewer_marks_without_nil.sum.to_f / reviewer_marks_without_nil.length&lt;br /&gt;
    grades &amp;lt;&amp;lt; assignment_grade_average&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate delta R&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    reviewer_delta_r = 0&lt;br /&gt;
    # Skip nil values when calculating the sum. Note: compact shifts positions,&lt;br /&gt;
    # so this assumes the remaining marks still line up with the averages in grades.&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    reviewer_marks_without_nil.each_with_index do |grade, student_index|&lt;br /&gt;
      reviewer_delta_r += (grade - grades[student_index]) ** 2&lt;br /&gt;
    end&lt;br /&gt;
    delta_r &amp;lt;&amp;lt; reviewer_delta_r / reviewer_marks_without_nil.length&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate weight prime&lt;br /&gt;
  average_delta_r = delta_r.sum / delta_r.length.to_f&lt;br /&gt;
&lt;br /&gt;
  delta_r.each do |reviewer_delta_r|&lt;br /&gt;
    weight_prime &amp;lt;&amp;lt; average_delta_r / reviewer_delta_r&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate reputation weight&lt;br /&gt;
  weight_prime.each do |reviewer_weight_prime|&lt;br /&gt;
    if reviewer_weight_prime &amp;lt;= 2&lt;br /&gt;
      weight &amp;lt;&amp;lt; reviewer_weight_prime.round(2)&lt;br /&gt;
    else&lt;br /&gt;
      weight &amp;lt;&amp;lt; (2 + Math.log(reviewer_weight_prime - 1)).round(2)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Return the reputation weights&lt;br /&gt;
  weight&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
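To make the steps above concrete, here is a small self-contained toy run of the same calculation (three reviewers scoring three submissions, with no nil values). It mirrors the averaging, delta R, weight prime, and capping steps of the snippet, but is not the controller code itself.&lt;br /&gt;

```ruby
# Self-contained toy run of the Hamer steps described above (not the
# controller method itself). Rows are reviewers, columns are submissions.
reviews = [
  [90, 80, 70],    # close to the eventual averages
  [85, 80, 75],    # also a reasonable reviewer
  [100, 100, 100]  # gives the same extreme score to everyone
]

# Step 1: average grade per submission (column averages).
num_submissions = reviews.first.length
grades = (0...num_submissions).map do |j|
  column = reviews.map { |row| row[j] }
  column.sum.to_f / column.length
end

# Step 2: delta R per reviewer, the mean squared deviation from the averages.
delta_r = reviews.map do |row|
  squared = row.each_with_index.map { |mark, j| (mark - grades[j]) ** 2 }
  squared.sum / squared.length
end

# Step 3: weight prime, the average delta R divided by each reviewer's delta R.
average_delta_r = delta_r.sum / delta_r.length
weight_prime = delta_r.map { |d| average_delta_r / d }

# Step 4: cap large weights logarithmically, as in the snippet above.
weights = weight_prime.map do |w|
  if w > 2
    (2 + Math.log(w - 1)).round(2)
  else
    w.round(2)
  end
end

p weights
```

The reviewer who gives the same extreme score to everyone ends up with the lowest weight, which is the behaviour the testing scenarios above are designed to check.&lt;br /&gt;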
&lt;br /&gt;
== Objective 3: Validate the accuracy of the newly implemented Hamer algorithm ==&lt;br /&gt;
&lt;br /&gt;
We test the newly implemented Hamer algorithm with our scenarios and verify that its outputs match the expected values.&lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe ReputationWebServiceController do&lt;br /&gt;
    it &amp;quot;should calculate correct Hamer calculation&amp;quot; do&lt;br /&gt;
      weights = ReputationWebServiceController.new.calculate_reputation_score(reviews)&lt;br /&gt;
      keys = [&amp;quot;maxtoall&amp;quot;, &amp;quot;mintoall&amp;quot;, &amp;quot;mediantoall&amp;quot;, &amp;quot;incomplete_review&amp;quot;, &amp;quot;sametoall&amp;quot;, &amp;quot;passing1&amp;quot;, &amp;quot;passing2&amp;quot;, &amp;quot;passing3&amp;quot;]&lt;br /&gt;
      rounded_weights = weights.map { |w| w.round(1) }&lt;br /&gt;
      result_hash = keys.zip(rounded_weights).to_h&lt;br /&gt;
      expect(result_hash).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results ===&lt;br /&gt;
&lt;br /&gt;
[[File:Results hamer new.jpeg]]&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
The observed results indicate a tendency towards lower values, primarily due to our decision to include nil values and treat them as zeros in our analysis. This treatment has led to a skew in the scores, favoring lower values and potentially impacting the accuracy of our findings. To address this issue and improve the robustness of our analysis, it is advisable to explore alternative approaches such as using median or random values instead of treating nil values as zeros. However, we must also carefully consider how to handle incomplete reviews that contain nil values in our input dataset, as this can significantly influence the overall integrity and reliability of our results and conclusions.&lt;br /&gt;
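The alternatives mentioned above can be compared on a toy incomplete review row (illustrative only; nil marks a review that was never submitted):&lt;br /&gt;

```ruby
# Toy comparison of three ways to handle a nil (missing) review score,
# as discussed above. Illustrative only.
row = [80, nil, 90, nil, 85]

# (a) Treat nil as zero: drags the average down.
as_zero = row.map { |s| s.nil? ? 0 : s }
avg_zero = as_zero.sum.to_f / as_zero.length

# (b) Skip nils entirely: average over submitted reviews only.
present = row.compact
avg_skip = present.sum.to_f / present.length

# (c) Impute the median of the submitted scores.
sorted = present.sort
median = sorted[sorted.length / 2]
imputed = row.map { |s| s.nil? ? median : s }
avg_median = imputed.sum.to_f / imputed.length

puts format("nil as zero: %.1f  skip nils: %.1f  median impute: %.1f",
            avg_zero, avg_skip, avg_median)
```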
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this project, we aimed to test the accuracy of the Hamer algorithm used for assessing the credibility of reviewers in a peer assessment system. We began by developing code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values. These scenarios covered various review scenarios, including cases where reviewers provided extreme scores.&lt;br /&gt;
&lt;br /&gt;
It was established that the original reputation web server was implemented incorrectly.&lt;br /&gt;
&lt;br /&gt;
As a result, we proceeded to reimplement the Hamer algorithm in Ruby, incorporating adjustments to handle nil values appropriately. Subsequently, we validated the accuracy of the newly implemented algorithm using the same testing scenarios. While the results initially showed a skew towards lower values due to our treatment of nil values, we acknowledge the need for further refinement to handle these cases more effectively.&lt;br /&gt;
&lt;br /&gt;
In conclusion, this project highlights the importance of rigorous testing and implementation adjustments in ensuring the reliability of algorithms used in peer assessment systems. Moving forward, we recommend further refinements and validations to enhance the accuracy and robustness of the Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [https://github.com/expertiza/expertiza/pull/2778 here]&lt;br /&gt;
&lt;br /&gt;
Link to GitHub Project page: [https://github.com/users/Prachit99/projects/1 here]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [https://drive.google.com/file/d/1gZ5iDgqMW3COOT9Uw-_yhJlwJLZDxz-s/view?usp=sharing here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=154989</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=154989"/>
		<updated>2024-04-08T16:48:34Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Problem Statement */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
Our project involves writing test cases for the `questionnaire_helper` and `review_bids_helper` files in Expertiza, an open-source assignment/project management portal built on the Ruby on Rails framework. This platform facilitates collaborative learning and feedback among both instructors and students. Instructors can create and customize assignments, define topics for student sign-ups, and manage teams for various projects. Students, on the other hand, can sign up for topics, form teams, and participate in peer reviews to enhance each other's learning experiences. Our goal is to develop comprehensive test plans and increase code coverage for these helper files to ensure their reliability and effectiveness in the Expertiza platform.&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods that assist with managing questionnaires in Expertiza: adjusting advice size, updating questionnaire questions, and creating questionnaire instances based on type. It also defines constants to support these tasks. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
   &lt;br /&gt;
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
&lt;br /&gt;
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
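The factory lookup described above can be sketched as follows. The class definition and map contents here are illustrative stand-ins, not the exact Expertiza definitions, and the real helper reports unknown types via a flash error message rather than a printed one.&lt;br /&gt;

```ruby
# Minimal sketch of the factory lookup, assuming a simplified
# ReviewQuestionnaire class and a one-entry QUESTIONNAIRE_MAP
# (the real map and class definitions live in the Expertiza codebase).
class ReviewQuestionnaire
  attr_reader :min_question_score, :max_question_score

  def initialize
    # Illustrative default score range for a new review questionnaire.
    @min_question_score = 0
    @max_question_score = 5
  end
end

QUESTIONNAIRE_MAP = {
  ReviewQuestionnaire: ReviewQuestionnaire
}.freeze

# Returns a new questionnaire of the requested type, or nil when the type
# is unknown (the real helper reports unknown types via a flash message).
def questionnaire_factory(type)
  klass = QUESTIONNAIRE_MAP[type.to_sym]
  if klass.nil?
    puts "No questionnaire exists with type: #{type}"
    return nil
  end
  klass.new
end

puts questionnaire_factory('ReviewQuestionnaire').class
```

Keeping the type-to-class mapping in a frozen constant means adding a new questionnaire type only requires one new map entry, which is also what makes the factory easy to unit-test.&lt;br /&gt;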
&lt;br /&gt;
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
The `ReviewBidsHelper` module provides helper methods for views related to review bidding in Expertiza: rendering topic rows and determining the background color of a topic based on its bid status and the number of participants. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates the HTML markup for a row in the table that displays topics for review bids.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on the conditions (whether the topic is selected and not waitlisted, or selected and waitlisted), it generates a specific row HTML.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
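The proportional shading described above can be sketched as below. The method signature and the red-to-green ramp are assumptions for illustration; the real helper derives the bid count from the topic's sign-up records.&lt;br /&gt;

```ruby
# Illustrative sketch of shading a topic by bid density: the fraction of
# participants who bid on the topic is mapped onto a red-to-green ramp.
# The signature and the ramp are assumptions, not Expertiza's exact logic.
def topic_bg_color(bid_count, num_participants)
  fraction = num_participants.zero? ? 0.0 : bid_count.to_f / num_participants
  fraction = fraction.clamp(0.0, 1.0)
  red = (255 * (1 - fraction)).round
  green = (255 * fraction).round
  "rgb(#{red}, #{green}, 0)"
end

puts topic_bg_color(1, 4)   # a lightly bid topic shades toward red
puts topic_bg_color(4, 4)   # a fully bid topic shades toward green
```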
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios ==&lt;br /&gt;
We assumed 9 reviewers, each reviewing 4 submissions, to cover the following test scenarios:&lt;br /&gt;
* 3 cases where reviewers give credible scores (passing1, passing2, passing3)&lt;br /&gt;
* A case where a reviewer gives the maximum score (10) to all submissions (should be flagged)&lt;br /&gt;
* A case where a reviewer gives the minimum score (1) to all submissions (should be flagged)&lt;br /&gt;
* A case where a reviewer gives the median score (5) to all submissions (should be flagged)&lt;br /&gt;
* A case where a reviewer gives the same score to all submissions (should be flagged)&lt;br /&gt;
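The intuition behind flagging flat reviewers can be checked with a toy computation of the mean squared deviation from the average scores, which is the quantity the Hamer algorithm builds on. The numbers here are illustrative and not taken from the input object below.&lt;br /&gt;

```ruby
# Toy computation behind the flagging intuition: a reviewer who tracks the
# average scores accumulates a small squared deviation, while one who gives
# the same score everywhere deviates wherever the averages differ.
# These numbers are illustrative, not taken from the INPUTS object.
averages = [8.0, 3.0, 5.0, 5.0]   # hypothetical average score per submission
credible = [9, 3, 5, 4]           # marks that track the averages
flat     = [5, 5, 5, 5]           # the same score given to every submission

deviation = lambda do |marks|
  marks.each_with_index.sum { |m, i| (m - averages[i])**2 } / marks.length.to_f
end

puts deviation.call(credible)
puts deviation.call(flat)
```

The flat reviewer's larger deviation is what drives their weight down in the later steps of the algorithm.&lt;br /&gt;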
&lt;br /&gt;
=== Object Creation ===&lt;br /&gt;
Below is the Input object for tests that cover all the above scenarios:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
INPUTS_new = {&lt;br /&gt;
    &amp;quot;submission1&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 9&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission2&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission3&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 7,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission4&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 6,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Expected Hamer Values ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
EXPECTED = {&lt;br /&gt;
    &amp;quot;Hamer&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 2.65,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 2.41,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 1.03,&lt;br /&gt;
        &amp;quot;incomplete_review&amp;quot;: 2.31,&lt;br /&gt;
        &amp;quot;max_incomplete&amp;quot;: 2.57,&lt;br /&gt;
        &amp;quot;min_incomplete&amp;quot;: 2.48,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;:1.58,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 2.17,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 1.73,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 1.23,&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values ==&lt;br /&gt;
&lt;br /&gt;
As established earlier, the values returned by the reputation server do not match the expected values, so we concluded that the PeerLogic web service is implemented incorrectly. In this phase, we reimplemented the algorithm in Ruby as a method in the controller file /app/controllers/reputation_web_service_controller.rb.&lt;br /&gt;
&lt;br /&gt;
=== Changes made in implementation ===&lt;br /&gt;
* Coded the algorithm in Ruby in a controller method.&lt;br /&gt;
* Added handling so the algorithm skips nil values (incomplete reviews).&lt;br /&gt;
&lt;br /&gt;
=== Code Snippet ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Method: calculate_reputation_score&lt;br /&gt;
# This method calculates a reputation score for each reviewer from the provided review data.&lt;br /&gt;
# It first calculates the average grades, then the delta R values (mean squared&lt;br /&gt;
# deviation from those averages), then the weight prime values, and finally the&lt;br /&gt;
# capped reputation weights for each reviewer.&lt;br /&gt;
#&lt;br /&gt;
# Params&lt;br /&gt;
#   - reviews: an array of arrays of scores, one array per reviewer; nil entries&lt;br /&gt;
#     mark incomplete reviews and are skipped&lt;br /&gt;
#&lt;br /&gt;
# Returns&lt;br /&gt;
#   An array of reputation scores, one score per reviewer, indicating their reputation in the system.&lt;br /&gt;
def calculate_reputation_score(reviews)&lt;br /&gt;
&lt;br /&gt;
  # Initialize arrays to store intermediate values&lt;br /&gt;
  grades = []&lt;br /&gt;
  delta_r = []&lt;br /&gt;
  weight_prime = []&lt;br /&gt;
  weight = []&lt;br /&gt;
&lt;br /&gt;
  # Calculate Average Weighted Grades per Reviewer&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    assignment_grade_average = reviewer_marks_without_nil.sum.to_f / reviewer_marks_without_nil.length&lt;br /&gt;
    grades &amp;lt;&amp;lt; assignment_grade_average&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate delta R&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    reviewer_delta_r = 0&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    reviewer_marks_without_nil.each_with_index do |grade, student_index|&lt;br /&gt;
      reviewer_delta_r += (grade - grades[student_index]) ** 2&lt;br /&gt;
    end&lt;br /&gt;
    delta_r &amp;lt;&amp;lt; reviewer_delta_r / reviewer_marks_without_nil.length&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate weight prime&lt;br /&gt;
  average_delta_r = delta_r.sum / delta_r.length.to_f&lt;br /&gt;
&lt;br /&gt;
  delta_r.each do |reviewer_delta_r|&lt;br /&gt;
    weight_prime &amp;lt;&amp;lt; average_delta_r / reviewer_delta_r&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate reputation weight&lt;br /&gt;
  weight_prime.each do |reviewer_weight_prime|&lt;br /&gt;
    if reviewer_weight_prime &amp;lt;= 2&lt;br /&gt;
      weight &amp;lt;&amp;lt; reviewer_weight_prime.round(2)&lt;br /&gt;
    else&lt;br /&gt;
      weight &amp;lt;&amp;lt; (2 + Math.log(reviewer_weight_prime - 1)).round(2)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Return the reputation weights&lt;br /&gt;
  weight&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Objective 3: Validate the accuracy of the newly implemented Hamer algorithm ==&lt;br /&gt;
&lt;br /&gt;
We test the newly implemented Hamer algorithm against our scenarios and verify that its output matches the expected values. &lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe ReputationWebServiceController do&lt;br /&gt;
    it &amp;quot;calculates the correct Hamer weights&amp;quot; do&lt;br /&gt;
      weights = ReputationWebServiceController.new.calculate_reputation_score(reviews)&lt;br /&gt;
      keys = [&amp;quot;maxtoall&amp;quot;, &amp;quot;mintoall&amp;quot;, &amp;quot;mediantoall&amp;quot;, &amp;quot;incomplete_review&amp;quot;, &amp;quot;max_incomplete&amp;quot;, &amp;quot;min_incomplete&amp;quot;, &amp;quot;sametoall&amp;quot;, &amp;quot;passing1&amp;quot;, &amp;quot;passing2&amp;quot;, &amp;quot;passing3&amp;quot;]&lt;br /&gt;
      rounded_weights = weights.map { |w| w.round(2) }&lt;br /&gt;
      result_hash = keys.zip(rounded_weights).to_h&lt;br /&gt;
      expect(result_hash).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results===&lt;br /&gt;
&lt;br /&gt;
[[File:Results hamer new.jpeg]]&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
The observed results indicate a tendency towards lower values, primarily due to our decision to include nil values and treat them as zeros in our analysis. This treatment has led to a skew in the scores, favoring lower values and potentially impacting the accuracy of our findings. To address this issue and improve the robustness of our analysis, it is advisable to explore alternative approaches such as using median or random values instead of treating nil values as zeros. However, we must also carefully consider how to handle incomplete reviews that contain nil values in our input dataset, as this can significantly influence the overall integrity and reliability of our results and conclusions.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this project, we aimed to test the accuracy of the Hamer algorithm used for assessing the credibility of reviewers in a peer assessment system. We began by developing code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values. These scenarios covered various review scenarios, including cases where reviewers provided extreme scores.&lt;br /&gt;
&lt;br /&gt;
It was established that the original reputation web server was implemented incorrectly.&lt;br /&gt;
&lt;br /&gt;
As a result, we proceeded to reimplement the Hamer algorithm in Ruby, incorporating adjustments to handle nil values appropriately. Subsequently, we validated the accuracy of the newly implemented algorithm using the same testing scenarios. While the results initially showed a skew towards lower values due to our treatment of nil values, we acknowledge the need for further refinement to handle these cases more effectively.&lt;br /&gt;
&lt;br /&gt;
In conclusion, this project highlights the importance of rigorous testing and implementation adjustments in ensuring the reliability of algorithms used in peer assessment systems. Moving forward, we recommend further refinements and validations to enhance the accuracy and robustness of the Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [https://github.com/expertiza/expertiza/pull/2778 here]&lt;br /&gt;
&lt;br /&gt;
Link to GitHub Project page: [https://github.com/users/Prachit99/projects/1 here]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [https://drive.google.com/file/d/1gZ5iDgqMW3COOT9Uw-_yhJlwJLZDxz-s/view?usp=sharing here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=154988</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=154988"/>
		<updated>2024-04-08T16:34:27Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Project Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
&lt;br /&gt;
=== Current Code Coverage ===&lt;br /&gt;
* Questionnaire_helper: 66%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Fquestionnaire_helper.rb &lt;br /&gt;
* Review_bids_helper: 17.65%: https://coveralls.io/builds/66490075/source?filename=app%2Fhelpers%2Freview_bids_helper.rb &lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop test plans/scenarios for questionnaire_helper.rb&lt;br /&gt;
* Develop test plans/scenarios for review_bids_helper.rb&lt;br /&gt;
* Improve code coverage for questionnaire_helper.rb&lt;br /&gt;
* Improve code coverage for review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods that assist with managing questionnaires in Expertiza: adjusting advice size, updating questionnaire questions, and creating questionnaire instances based on type. It also defines constants to support these tasks. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
   &lt;br /&gt;
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
&lt;br /&gt;
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
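The attribute-diff pattern that update_questionnaire_questions follows (assign and save only when a submitted value differs from the current one) can be sketched in a standalone way. The Question struct and the params hash here are simplified stand-ins for the ActiveRecord model and the Rails request parameters.&lt;br /&gt;

```ruby
# Standalone sketch of the attribute-diff update pattern: assign a submitted
# value only when it differs from the current one, then save. The Question
# struct and params hash are stand-ins for the model and request parameters.
Question = Struct.new(:id, :txt, :weight) do
  def save
    # The real helper persists to the database; this sketch is a no-op.
    true
  end
end

def update_questions(questions, params)
  params.each do |id, attributes|
    question = questions.find { |q| q.id == id }
    next if question.nil?
    attributes.each do |name, value|
      # Skip attributes whose submitted value matches the current value.
      question[name] = value unless question[name] == value
    end
    question.save
  end
end

questions = [Question.new(1, 'Clarity of writing', 1)]
# Build the params hash keyed by question id.
update_questions(questions, Hash[1, { txt: 'Clarity and organization', weight: 2 }])
puts questions.first.txt
```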
&lt;br /&gt;
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
The `ReviewBidsHelper` module provides helper methods for views related to review bidding in Expertiza: rendering topic rows and determining the background color of a topic based on its bid status and the number of participants. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates the HTML markup for a row in the table that displays topics for review bids.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on the conditions (whether the topic is selected and not waitlisted, or selected and waitlisted), it generates a specific row HTML.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
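The selected/waitlisted branching in get_intelligent_topic_row_review_bids can be sketched without the HTML details. The SelectedTopic struct and the returned style names are illustrative, not Expertiza's actual markup.&lt;br /&gt;

```ruby
# Simplified sketch of the branching in get_intelligent_topic_row_review_bids:
# the row treatment depends on whether the topic appears among the selected
# topics and, if so, whether that selection is waitlisted. The struct and
# the style names are illustrative stand-ins for the real markup.
SelectedTopic = Struct.new(:topic_id, :is_waitlisted)

def row_style(topic_id, selected_topics)
  match = selected_topics.find { |s| s.topic_id == topic_id }
  return 'available' if match.nil?
  match.is_waitlisted ? 'waitlisted' : 'selected'
end

selected = [SelectedTopic.new(1, false), SelectedTopic.new(2, true)]
puts row_style(1, selected)
puts row_style(2, selected)
puts row_style(3, selected)
```

Factoring the branch decision out of the HTML generation like this is also what makes the helper straightforward to cover with unit tests.&lt;br /&gt;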
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios ==&lt;br /&gt;
We assumed 9 reviewers, each reviewing 4 submissions, to cover the following test scenarios:&lt;br /&gt;
* 3 cases where reviewers give credible scores (passing1, passing2, passing3)&lt;br /&gt;
* A case where a reviewer gives the maximum score (10) to all submissions (should be flagged)&lt;br /&gt;
* A case where a reviewer gives the minimum score (1) to all submissions (should be flagged)&lt;br /&gt;
* A case where a reviewer gives the median score (5) to all submissions (should be flagged)&lt;br /&gt;
* A case where a reviewer gives the same score to all submissions (should be flagged)&lt;br /&gt;
&lt;br /&gt;
=== Object Creation ===&lt;br /&gt;
Below is the Input object for tests that cover all the above scenarios:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
INPUTS_new = {&lt;br /&gt;
    &amp;quot;submission1&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 9&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission2&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission3&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 7,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission4&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 6,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Expected Hamer Values ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
EXPECTED = {&lt;br /&gt;
    &amp;quot;Hamer&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 2.65,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 2.41,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 1.03,&lt;br /&gt;
        &amp;quot;incomplete_review&amp;quot;: 2.31,&lt;br /&gt;
        &amp;quot;max_incomplete&amp;quot;: 2.57,&lt;br /&gt;
        &amp;quot;min_incomplete&amp;quot;: 2.48,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;:1.58,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 2.17,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 1.73,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 1.23,&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values ==&lt;br /&gt;
&lt;br /&gt;
As established earlier, the values returned by the reputation server do not match the expected values, so we concluded that the PeerLogic web service is implemented incorrectly. In this phase, we reimplemented the algorithm in Ruby as a method in the controller file /app/controllers/reputation_web_service_controller.rb.&lt;br /&gt;
&lt;br /&gt;
=== Changes made in implementation ===&lt;br /&gt;
* Coded the algorithm in Ruby in a controller method.&lt;br /&gt;
* Added handling so the algorithm skips nil values (incomplete reviews).&lt;br /&gt;
&lt;br /&gt;
=== Code Snippet ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Method: calculate_reputation_score&lt;br /&gt;
# This method calculates a reputation score for each reviewer from the provided review data.&lt;br /&gt;
# It first calculates the average grades, then the delta R values (mean squared&lt;br /&gt;
# deviation from those averages), then the weight prime values, and finally the&lt;br /&gt;
# capped reputation weights for each reviewer.&lt;br /&gt;
#&lt;br /&gt;
# Params&lt;br /&gt;
#   - reviews: an array of arrays of scores, one array per reviewer; nil entries&lt;br /&gt;
#     mark incomplete reviews and are skipped&lt;br /&gt;
#&lt;br /&gt;
# Returns&lt;br /&gt;
#   An array of reputation scores, one score per reviewer, indicating their reputation in the system.&lt;br /&gt;
def calculate_reputation_score(reviews)&lt;br /&gt;
&lt;br /&gt;
  # Initialize arrays to store intermediate values&lt;br /&gt;
  grades = []&lt;br /&gt;
  delta_r = []&lt;br /&gt;
  weight_prime = []&lt;br /&gt;
  weight = []&lt;br /&gt;
&lt;br /&gt;
  # Calculate Average Weighted Grades per Reviewer&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    assignment_grade_average = reviewer_marks_without_nil.sum.to_f / reviewer_marks_without_nil.length&lt;br /&gt;
    grades &amp;lt;&amp;lt; assignment_grade_average&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate delta R&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    reviewer_delta_r = 0&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    reviewer_marks_without_nil.each_with_index do |grade, student_index|&lt;br /&gt;
      reviewer_delta_r += (grade - grades[student_index]) ** 2&lt;br /&gt;
    end&lt;br /&gt;
    delta_r &amp;lt;&amp;lt; reviewer_delta_r / reviewer_marks_without_nil.length&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate weight prime&lt;br /&gt;
  average_delta_r = delta_r.sum / delta_r.length.to_f&lt;br /&gt;
&lt;br /&gt;
  delta_r.each do |reviewer_delta_r|&lt;br /&gt;
    weight_prime &amp;lt;&amp;lt; average_delta_r / reviewer_delta_r&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate reputation weight&lt;br /&gt;
  weight_prime.each do |reviewer_weight_prime|&lt;br /&gt;
    if reviewer_weight_prime &amp;lt;= 2&lt;br /&gt;
      weight &amp;lt;&amp;lt; reviewer_weight_prime.round(2)&lt;br /&gt;
    else&lt;br /&gt;
      weight &amp;lt;&amp;lt; (2 + Math.log(reviewer_weight_prime - 1)).round(2)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Return the reputation weights&lt;br /&gt;
  weight&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
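As a quick sanity check, the method can be exercised outside Rails with a small hand-built input. The sketch below is a standalone copy of the controller method (assumed to behave identically); the reviewer arrays are illustrative, not the project's test data.&lt;br /&gt;

```ruby
# Standalone sketch of calculate_reputation_score for illustration,
# assumed equivalent to the controller method above.
def calculate_reputation_score(reviews)
  # Average grade given by each reviewer, ignoring nil (incomplete) marks
  grades = reviews.map do |reviewer_marks|
    marks = reviewer_marks.compact
    marks.sum.to_f / marks.length
  end

  # Delta R: mean squared deviation of each reviewer's marks from the averages
  delta_r = reviews.map do |reviewer_marks|
    marks = reviewer_marks.compact
    marks.each_with_index.sum { |g, i| (g - grades[i])**2 } / marks.length
  end

  # Weight prime: inversely proportional to each reviewer's delta R
  average_delta_r = delta_r.sum / delta_r.length.to_f
  weight_prime = delta_r.map { |d| average_delta_r / d }

  # Reputation weight: logarithmic dampening for values above 2
  weight_prime.map do |wp|
    wp <= 2 ? wp.round(2) : (2 + Math.log(wp - 1)).round(2)
  end
end

reviews = [
  [10, 10, 10],   # gives the maximum to every submission
  [1, 1, 1],      # gives the minimum to every submission
  [5, 6, 7, nil]  # credible reviewer with one incomplete mark
]
puts calculate_reputation_score(reviews).inspect  # => [0.87, 0.8, 1.66]
```

Note that the nil entry is simply dropped by `compact`, which is the behaviour described in the changes above.&lt;br /&gt;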
&lt;br /&gt;
== Objective 3: Validate the accuracy of the newly implemented Hamer algorithm ==&lt;br /&gt;
&lt;br /&gt;
We test the newly implemented Hamer algorithm with our scenarios and verify that its outputs match the expected values.&lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe ReputationWebServiceController do&lt;br /&gt;
  it &amp;quot;calculates the correct Hamer values&amp;quot; do&lt;br /&gt;
    weights = ReputationWebServiceController.new.calculate_reputation_score(reviews)&lt;br /&gt;
    keys = [&amp;quot;maxtoall&amp;quot;, &amp;quot;mintoall&amp;quot;, &amp;quot;mediantoall&amp;quot;, &amp;quot;incomplete_review&amp;quot;, &amp;quot;max_incomplete&amp;quot;, &amp;quot;min_incomplete&amp;quot;, &amp;quot;sametoall&amp;quot;, &amp;quot;passing1&amp;quot;, &amp;quot;passing2&amp;quot;, &amp;quot;passing3&amp;quot;]&lt;br /&gt;
    rounded_weights = weights.map { |w| w.round(2) }&lt;br /&gt;
    result_hash = keys.zip(rounded_weights).to_h&lt;br /&gt;
    expect(result_hash).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results===&lt;br /&gt;
&lt;br /&gt;
[[File:Results hamer new.jpeg]]&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
The observed results skew towards lower values, primarily because we treated nil values as zeros in our analysis. This skew potentially affects the accuracy of our findings. To improve the robustness of the analysis, it is advisable to explore alternatives such as substituting the median (or a random value) for nil entries instead of treating them as zeros. Incomplete reviews containing nil values must still be handled carefully, since they significantly influence the integrity and reliability of the results and conclusions.&lt;br /&gt;
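One alternative mentioned above, substituting the median of a reviewer's completed marks for nil entries, could look like the sketch below. The helper name fill_nils_with_median is ours, not part of Expertiza.&lt;br /&gt;

```ruby
# Hypothetical helper (not part of Expertiza): replace each nil mark
# with the median of the reviewer's completed marks, so an incomplete
# review keeps its length instead of being shortened or zero-filled.
def fill_nils_with_median(reviewer_marks)
  completed = reviewer_marks.compact.sort
  return reviewer_marks if completed.empty?
  mid = completed.length / 2
  median = if completed.length.odd?
             completed[mid]
           else
             (completed[mid - 1] + completed[mid]) / 2.0
           end
  reviewer_marks.map { |mark| mark.nil? ? median : mark }
end

puts fill_nils_with_median([10, nil, 6, 8]).inspect  # => [10, 8, 6, 8]
```

The filled arrays could then be fed to calculate_reputation_score unchanged, since no nil entries remain.&lt;br /&gt;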
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this project, we aimed to test the accuracy of the Hamer algorithm used for assessing the credibility of reviewers in a peer assessment system. We began by developing testing scenarios to validate the Hamer algorithm and the accuracy of its output values. These scenarios covered a range of reviewer behaviors, including cases where reviewers provided extreme scores.&lt;br /&gt;
&lt;br /&gt;
It was established that the original reputation web server was implemented incorrectly.&lt;br /&gt;
&lt;br /&gt;
As a result, we proceeded to reimplement the Hamer algorithm in Ruby, incorporating adjustments to handle nil values appropriately. Subsequently, we validated the accuracy of the newly implemented algorithm using the same testing scenarios. While the results initially showed a skew towards lower values due to our treatment of nil values, we acknowledge the need for further refinement to handle these cases more effectively.&lt;br /&gt;
&lt;br /&gt;
In conclusion, this project highlights the importance of rigorous testing and implementation adjustments in ensuring the reliability of algorithms used in peer assessment systems. Moving forward, we recommend further refinements and validations to enhance the accuracy and robustness of the Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [https://github.com/expertiza/expertiza/pull/2778 here]&lt;br /&gt;
&lt;br /&gt;
Link to GitHub Project page: [https://github.com/users/Prachit99/projects/1 here]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [https://drive.google.com/file/d/1gZ5iDgqMW3COOT9Uw-_yhJlwJLZDxz-s/view?usp=sharing here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=154987</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=154987"/>
		<updated>2024-04-08T16:30:25Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Hamer Algorithm */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* bbnn&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Class and Method Overview ==&lt;br /&gt;
&lt;br /&gt;
=== QuestionnaireHelper ===&lt;br /&gt;
&lt;br /&gt;
The QuestionnaireHelper module contains several methods to assist with managing questionnaires in Expertiza. It provides methods for adjusting advice size, updating questionnaire questions, and creating questionnaire instances based on their types, and it defines constants to support these functionalities. These methods handle questionnaire-related tasks throughout the application. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
CSV_QUESTION, CSV_TYPE, CSV_PARAM, CSV_WEIGHT&lt;br /&gt;
   - These constants define indices for specific columns in a CSV file.&lt;br /&gt;
&lt;br /&gt;
QUESTIONNAIRE_MAP&lt;br /&gt;
   - This constant is a hash that maps questionnaire types to their respective questionnaire classes.&lt;br /&gt;
   - It's used by the `questionnaire_factory` method to determine the appropriate class to instantiate.&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. adjust_advice_size(questionnaire, question)&lt;br /&gt;
   - This method adjusts the size of advice associated with a given question in a questionnaire.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `questionnaire`: The questionnaire object.&lt;br /&gt;
     - `question`: The question object whose advice size needs adjustment.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks if the question is a `ScoredQuestion`.&lt;br /&gt;
     - Deletes any existing advice for the question outside the score range.&lt;br /&gt;
     - Iterates over the score range and ensures each score has an associated advice.&lt;br /&gt;
     - Deletes any duplicate advice records.&lt;br /&gt;
   &lt;br /&gt;
2. update_questionnaire_questions&lt;br /&gt;
   - This method updates attributes of questionnaire questions based on form data, without modifying unchanged attributes.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Checks for presence of `params[:question]`.&lt;br /&gt;
     - Iterates through each question and its attributes in the parameters.&lt;br /&gt;
     - Compares each attribute's current value with the new value from the parameters and updates if changed.&lt;br /&gt;
     - Saves the question.&lt;br /&gt;
&lt;br /&gt;
3. questionnaire_factory(type)&lt;br /&gt;
   - This method acts as a factory to create an appropriate questionnaire object based on the type provided.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `type`: The type of questionnaire.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Retrieves the questionnaire class from `QUESTIONNAIRE_MAP` based on the provided type.&lt;br /&gt;
     - If the type is not found in the map, it sets an error flash message.&lt;br /&gt;
     - Otherwise, it initializes a new instance of the corresponding questionnaire class and returns it.&lt;br /&gt;
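The lookup-then-instantiate pattern described above can be shown in isolation. This is a minimal sketch under our own assumptions: the stand-in classes and map below only mirror the role of QUESTIONNAIRE_MAP, and the real helper sets a flash error message for unknown types rather than returning nil.&lt;br /&gt;

```ruby
# Stand-in questionnaire classes for illustration only; the real map in
# questionnaire_helper.rb covers Expertiza's full set of types.
class ReviewQuestionnaire; end
class SurveyQuestionnaire; end

# Map from type string to class, mirroring the role of QUESTIONNAIRE_MAP.
QUESTIONNAIRE_MAP = {
  'ReviewQuestionnaire' => ReviewQuestionnaire,
  'SurveyQuestionnaire' => SurveyQuestionnaire
}.freeze

# Factory: look the type up in the map and instantiate the matching class.
# Returns nil for unknown types (where the real helper would set a flash
# error message instead).
def questionnaire_factory(type)
  klass = QUESTIONNAIRE_MAP[type]
  klass&.new
end

puts questionnaire_factory('ReviewQuestionnaire').class  # => ReviewQuestionnaire
```

Because the mapping is data rather than a case statement, adding a new questionnaire type only requires a new map entry.&lt;br /&gt;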
&lt;br /&gt;
=== ReviewBidsHelper ===&lt;br /&gt;
&lt;br /&gt;
This Ruby module, `ReviewBidsHelper`, serves as a helper for views related to reviewing bids in Expertiza. It provides methods for rendering topic rows and for determining the background color of topics based on their bid status and the number of participants. These methods are used in the views associated with reviewing bids. Let's break down the module and its methods:&lt;br /&gt;
&lt;br /&gt;
==== Methods ====&lt;br /&gt;
&lt;br /&gt;
1. get_intelligent_topic_row_review_bids(topic, selected_topics, num_participants)&lt;br /&gt;
   - This method generates the HTML markup for a row in a table displaying topics for review bids.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents a specific topic being reviewed.&lt;br /&gt;
     - `selected_topics`: An array of topics that have been selected.&lt;br /&gt;
     - `num_participants`: The number of participants involved in the review process.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Iterates through the `selected_topics`.&lt;br /&gt;
     - Depending on the conditions (whether the topic is selected and not waitlisted, or selected and waitlisted), it generates a specific row HTML.&lt;br /&gt;
     - Returns the generated row HTML as safe HTML.&lt;br /&gt;
&lt;br /&gt;
2. get_topic_bg_color_review_bids(topic, num_participants)&lt;br /&gt;
   - This method calculates the background color for a topic based on the number of participants and the number of bids for that topic.&lt;br /&gt;
   - Parameters:&lt;br /&gt;
     - `topic`: Represents the topic for which the background color is being determined.&lt;br /&gt;
     - `num_participants`: The total number of participants.&lt;br /&gt;
   - Functionality:&lt;br /&gt;
     - Calculates the number of bids for the given `topic`.&lt;br /&gt;
     - Determines the proportion of bids compared to the total number of participants and adjusts the color accordingly.&lt;br /&gt;
     - Returns a string representing the RGB value of the background color.&lt;br /&gt;
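The proportional colour computation can be sketched as a plain function. The signature and the green-to-red interpolation below are our assumptions; the real helper reads the bid count from the topic object and may use a different colour scale.&lt;br /&gt;

```ruby
# Hypothetical sketch: shade a topic from green (few bids) to red (bid
# count approaching the number of participants), clamping the ratio at 1.
def topic_bg_color(bid_count, num_participants)
  ratio = num_participants.zero? ? 0.0 : [bid_count.to_f / num_participants, 1.0].min
  red   = (255 * ratio).round
  green = (255 * (1 - ratio)).round
  format('rgb(%d, %d, 0)', red, green)
end

puts topic_bg_color(2, 10)  # => rgb(51, 204, 0)
```

Students can then see at a glance which topics are already heavily bid on and which still need reviewers.&lt;br /&gt;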
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios ==&lt;br /&gt;
We assumed nine reviewers, each reviewing four submissions, to cover the following test scenarios:&lt;br /&gt;
* 3 cases where reviewers give credible scores (passing1, passing2, passing3)&lt;br /&gt;
* a case where a reviewer gives the maximum score (10) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the minimum score (1) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the median score (5) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the same score to all submissions (should be flagged)&lt;br /&gt;
&lt;br /&gt;
=== Object Creation ===&lt;br /&gt;
Below is the input object for the tests covering all the above scenarios:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
INPUTS_new = {&lt;br /&gt;
    &amp;quot;submission1&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 9&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission2&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission3&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 7,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission4&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 6,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Expected Hamer Values ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
EXPECTED = {&lt;br /&gt;
    &amp;quot;Hamer&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 2.65,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 2.41,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 1.03,&lt;br /&gt;
        &amp;quot;incomplete_review&amp;quot;: 2.31,&lt;br /&gt;
        &amp;quot;max_incomplete&amp;quot;: 2.57,&lt;br /&gt;
        &amp;quot;min_incomplete&amp;quot;: 2.48,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;:1.58,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 2.17,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 1.73,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 1.23,&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values. ==&lt;br /&gt;
&lt;br /&gt;
As established before, the values returned by the reputation server do not match the expected values. Hence, we concluded that the PeerLogic web service is implemented incorrectly. In this phase, we implemented the algorithm in Ruby as a method in the controller file /app/controllers/reputation_web_service_controller.rb.&lt;br /&gt;
&lt;br /&gt;
=== Changes made in implementation ===&lt;br /&gt;
* Coded the algorithm in Ruby in a controller method.&lt;br /&gt;
* Included a way for the algorithm to handle nil values.&lt;br /&gt;
&lt;br /&gt;
=== Code Snippet ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Method: calculate_reputation_score&lt;br /&gt;
# This method calculates a reputation score for each reviewer from the scores&lt;br /&gt;
# that reviewer assigned to the submissions.&lt;br /&gt;
# It first calculates the average grade given by each reviewer, ignoring nil (incomplete) entries.&lt;br /&gt;
# Then, it calculates the delta R values: each reviewer's mean squared deviation from those averages.&lt;br /&gt;
# Next, it calculates the weight prime values based on the delta R values.&lt;br /&gt;
# Finally, it dampens large weight prime values to produce the reputation weights.&lt;br /&gt;
#&lt;br /&gt;
# Params&lt;br /&gt;
#   - reviews: an array of score arrays, one per reviewer (nil entries mark incomplete reviews)&lt;br /&gt;
#&lt;br /&gt;
# Returns&lt;br /&gt;
#   An array of reputation scores, one score per reviewer, indicating their reputation in the system.&lt;br /&gt;
def calculate_reputation_score(reviews)&lt;br /&gt;
&lt;br /&gt;
  # Initialize arrays to store intermediate values&lt;br /&gt;
  grades = []&lt;br /&gt;
  delta_r = []&lt;br /&gt;
  weight_prime = []&lt;br /&gt;
  weight = []&lt;br /&gt;
&lt;br /&gt;
  # Calculate Average Weighted Grades per Reviewer&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    assignment_grade_average = reviewer_marks_without_nil.sum.to_f / reviewer_marks_without_nil.length&lt;br /&gt;
    grades &amp;lt;&amp;lt; assignment_grade_average&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate delta R&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    reviewer_delta_r = 0&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    reviewer_marks_without_nil.each_with_index do |grade, student_index|&lt;br /&gt;
      reviewer_delta_r += (grade - grades[student_index]) ** 2&lt;br /&gt;
    end&lt;br /&gt;
    delta_r &amp;lt;&amp;lt; reviewer_delta_r / reviewer_marks_without_nil.length&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate weight prime&lt;br /&gt;
  average_delta_r = delta_r.sum / delta_r.length.to_f&lt;br /&gt;
&lt;br /&gt;
  delta_r.each do |reviewer_delta_r|&lt;br /&gt;
    weight_prime &amp;lt;&amp;lt; average_delta_r / reviewer_delta_r&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate reputation weight&lt;br /&gt;
  weight_prime.each do |reviewer_weight_prime|&lt;br /&gt;
    if reviewer_weight_prime &amp;lt;= 2&lt;br /&gt;
      weight &amp;lt;&amp;lt; reviewer_weight_prime.round(2)&lt;br /&gt;
    else&lt;br /&gt;
      weight &amp;lt;&amp;lt; (2 + Math.log(reviewer_weight_prime - 1)).round(2)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Return the reputation weights&lt;br /&gt;
  weight&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Objective 3: Validate the accuracy of the newly implemented Hamer algorithm ==&lt;br /&gt;
&lt;br /&gt;
We test the newly implemented Hamer algorithm with our scenarios and verify that its outputs match the expected values.&lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe ReputationWebServiceController do&lt;br /&gt;
  it &amp;quot;calculates the correct Hamer values&amp;quot; do&lt;br /&gt;
    weights = ReputationWebServiceController.new.calculate_reputation_score(reviews)&lt;br /&gt;
    keys = [&amp;quot;maxtoall&amp;quot;, &amp;quot;mintoall&amp;quot;, &amp;quot;mediantoall&amp;quot;, &amp;quot;incomplete_review&amp;quot;, &amp;quot;max_incomplete&amp;quot;, &amp;quot;min_incomplete&amp;quot;, &amp;quot;sametoall&amp;quot;, &amp;quot;passing1&amp;quot;, &amp;quot;passing2&amp;quot;, &amp;quot;passing3&amp;quot;]&lt;br /&gt;
    rounded_weights = weights.map { |w| w.round(2) }&lt;br /&gt;
    result_hash = keys.zip(rounded_weights).to_h&lt;br /&gt;
    expect(result_hash).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results===&lt;br /&gt;
&lt;br /&gt;
[[File:Results hamer new.jpeg]]&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
The observed results skew towards lower values, primarily because we treated nil values as zeros in our analysis. This skew potentially affects the accuracy of our findings. To improve the robustness of the analysis, it is advisable to explore alternatives such as substituting the median (or a random value) for nil entries instead of treating them as zeros. Incomplete reviews containing nil values must still be handled carefully, since they significantly influence the integrity and reliability of the results and conclusions.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this project, we aimed to test the accuracy of the Hamer algorithm used for assessing the credibility of reviewers in a peer assessment system. We began by developing testing scenarios to validate the Hamer algorithm and the accuracy of its output values. These scenarios covered a range of reviewer behaviors, including cases where reviewers provided extreme scores.&lt;br /&gt;
&lt;br /&gt;
It was established that the original reputation web server was implemented incorrectly.&lt;br /&gt;
&lt;br /&gt;
As a result, we proceeded to reimplement the Hamer algorithm in Ruby, incorporating adjustments to handle nil values appropriately. Subsequently, we validated the accuracy of the newly implemented algorithm using the same testing scenarios. While the results initially showed a skew towards lower values due to our treatment of nil values, we acknowledge the need for further refinement to handle these cases more effectively.&lt;br /&gt;
&lt;br /&gt;
In conclusion, this project highlights the importance of rigorous testing and implementation adjustments in ensuring the reliability of algorithms used in peer assessment systems. Moving forward, we recommend further refinements and validations to enhance the accuracy and robustness of the Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [https://github.com/expertiza/expertiza/pull/2778 here]&lt;br /&gt;
&lt;br /&gt;
Link to GitHub Project page: [https://github.com/users/Prachit99/projects/1 here]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [https://drive.google.com/file/d/1gZ5iDgqMW3COOT9Uw-_yhJlwJLZDxz-s/view?usp=sharing here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=154986</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=154986"/>
		<updated>2024-04-08T16:15:38Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Files Involved */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* bbnn&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Review_bids_helper.rb: /app/helpers/review_bids_helper.rb&lt;br /&gt;
* Questionnaire_helper.rb: /app/helpers/questionnaire_helper.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Hamer Algorithm ==&lt;br /&gt;
&lt;br /&gt;
The grading algorithm described in the paper is designed to provide a reward to reviewers who participate effectively by allocating a portion of the assignment mark to the review, with the review mark reflecting the quality of the grading. Here's an explanation of the algorithm:&lt;br /&gt;
&lt;br /&gt;
1. Review Allocation: Each reviewer is assigned a number of essays to grade. The paper suggests assigning at least five essays, with ten being ideal. Assuming each review takes 20 minutes, ten reviews can be completed in about three and a half hours.&lt;br /&gt;
&lt;br /&gt;
2. Grading Process:&lt;br /&gt;
* Once the reviewing is complete, grades are generated for each essay and weights are assigned to each reviewer.&lt;br /&gt;
* The essay grades are computed by averaging the individual grades from all the reviewers assigned to that essay.&lt;br /&gt;
* Initially, all reviewers are given equal weight in the averaging process.&lt;br /&gt;
* The algorithm assumes that some reviewers will perform better than others. It measures this by comparing the grades assigned by each reviewer to the averaged grades; the larger the difference between them, the more out of step with the consensus view of the class the reviewer is considered to be.&lt;br /&gt;
* The algorithm adjusts the weighting of the reviewers based on this difference. Reviewers who are closer to the consensus view are given higher weights, while those who deviate significantly are given lower weights.&lt;br /&gt;
&lt;br /&gt;
3. Iterative Process:&lt;br /&gt;
* The calculation of grades and weights is an iterative process. Each time the grades are calculated, the weights need to be updated, and each change in the weights affects the grades.&lt;br /&gt;
* Convergence occurs quickly, typically requiring four to six iterations before a solution (a &amp;quot;fix-point&amp;quot;) is reached.&lt;br /&gt;
&lt;br /&gt;
4. Weight Adjustment:&lt;br /&gt;
* The weights assigned to reviewers are adjusted based on the difference between the assigned and averaged grades. Reviewers with larger discrepancies have their weights adjusted inversely proportional to this difference.&lt;br /&gt;
* To prevent excessively large weights, a logarithmic dampening function is applied, allowing weights to rise to twice the class average before further increases are awarded sparingly.&lt;br /&gt;
&lt;br /&gt;
5. Properties:&lt;br /&gt;
* The algorithm aims to identify and diminish the impact of &amp;quot;rogue&amp;quot; reviewers who may inject random or arbitrary grades into the peer assessment process.&lt;br /&gt;
* By adjusting reviewer weights based on their grading accuracy, the algorithm aims to improve the reliability of the grading process in the presence of such rogue reviewers.&lt;br /&gt;
&lt;br /&gt;
Overall, the algorithm seeks to balance the contributions of different reviewers based on the accuracy of their grading, ultimately aiming to produce reliable grades for each essay in a peer assessment scenario.&lt;br /&gt;
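The iterative grade/weight loop in steps 2&amp;ndash;4 can be sketched as a simplified fix-point iteration. This is our own illustration under stated assumptions (per-essay weighted averages, inverse-proportional weights, and the logarithmic dampening from step 4); the published Hamer algorithm may differ in details such as normalisation.&lt;br /&gt;

```ruby
# marks[i][j] = grade reviewer i gave essay j (nil for an unreviewed essay).
# Iterates grades and weights until they (approximately) reach a fix-point;
# the paper reports convergence in four to six iterations.
def hamer_fixpoint(marks, iterations: 6)
  n_essays = marks.first.length
  weights = Array.new(marks.length, 1.0)  # all reviewers start equal

  iterations.times do
    # Weighted average grade per essay, using the current reviewer weights
    essay_grades = (0...n_essays).map do |j|
      num = den = 0.0
      marks.each_with_index do |row, i|
        next if row[j].nil?
        num += weights[i] * row[j]
        den += weights[i]
      end
      num / den
    end

    # Mean squared deviation of each reviewer from the consensus grades
    delta = marks.map do |row|
      pairs = row.each_with_index.reject { |g, _| g.nil? }
      pairs.sum { |g, j| (g - essay_grades[j])**2 } / pairs.length
    end

    # New weights: inversely proportional to the deviation, with
    # logarithmic dampening for weights above twice the class average
    avg_delta = delta.sum / delta.length
    weights = delta.map do |d|
      w = avg_delta / d
      w <= 2 ? w : 2 + Math.log(w - 1)
    end
  end
  weights
end

marks = [
  [8, 6, 7],     # reviewer close to the consensus
  [8, 7, 7],     # reviewer close to the consensus
  [10, 10, 10]   # "rogue" reviewer giving the maximum to everyone
]
weights = hamer_fixpoint(marks)
puts weights.inspect  # the rogue reviewer ends up with the smallest weight
```

Each pass pulls the consensus grades toward the reviewers the previous pass trusted, which is why the weights and grades must be recomputed together rather than once.&lt;br /&gt;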
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios ==&lt;br /&gt;
We assumed nine reviewers, each reviewing four submissions, to cover the following test scenarios:&lt;br /&gt;
* 3 cases where reviewers give credible scores (passing1, passing2, passing3)&lt;br /&gt;
* a case where a reviewer gives the maximum score (10) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the minimum score (1) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the median score (5) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the same score to all submissions (should be flagged)&lt;br /&gt;
&lt;br /&gt;
=== Object Creation ===&lt;br /&gt;
Below is the input object for the tests covering all the above scenarios:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
INPUTS_new = {&lt;br /&gt;
    &amp;quot;submission1&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 9&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission2&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission3&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 7,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission4&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 6,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
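The reimplemented method described under Objective 2 consumes a reviewer-major 2-D array rather than this submission-keyed hash. As a minimal illustrative sketch — the `pivot_reviews` helper below is hypothetical, not part of the Expertiza codebase — the pivot can be done like this:

```ruby
require 'json'

# Hypothetical helper (not from the Expertiza codebase): pivot a
# submission-keyed score hash, shaped like INPUTS_new above, into one
# row of marks per reviewer, with nil where a review is missing.
def pivot_reviews(inputs)
  submissions = inputs.keys
  reviewers = inputs.values.flat_map { |scores| scores.keys }.uniq
  reviewers.map do |reviewer|
    submissions.map { |sub| inputs[sub][reviewer] }
  end
end

# Trimmed two-reviewer example; "passing1" skips submission2
sample = JSON.parse('{
  "submission1": { "maxtoall": 10, "passing1": 10 },
  "submission2": { "maxtoall": 10 },
  "submission3": { "maxtoall": 10, "passing1": 7 }
}')
rows = pivot_reviews(sample)
# rows is [[10, 10, 10], [10, nil, 7]]
```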
&lt;br /&gt;
=== Expected Hamer Values ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
EXPECTED = {&lt;br /&gt;
    &amp;quot;Hamer&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 2.65,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 2.41,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 1.03,&lt;br /&gt;
        &amp;quot;incomplete_review&amp;quot;: 2.31,&lt;br /&gt;
        &amp;quot;max_incomplete&amp;quot;: 2.57,&lt;br /&gt;
        &amp;quot;min_incomplete&amp;quot;: 2.48,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;:1.58,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 2.17,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 1.73,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 1.23,&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values. ==&lt;br /&gt;
&lt;br /&gt;
As established earlier, the values returned by the reputation server do not match the expected values, so we concluded that the PeerLogic web service is implemented incorrectly. In this phase, we reimplemented the algorithm in Ruby as a method in the controller file /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
&lt;br /&gt;
=== Changes made in implementation ===&lt;br /&gt;
* Coded the algorithm in Ruby in a controller.&lt;br /&gt;
* Added handling for nil values (incomplete reviews).&lt;br /&gt;
&lt;br /&gt;
=== Code Snippet ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Method: calculate_reputation_score&lt;br /&gt;
# Calculates a reputation score for each reviewer from the marks they gave.&lt;br /&gt;
# It first computes the average grade for each submission, then each reviewer's&lt;br /&gt;
# delta R value (the mean squared deviation of that reviewer's marks from the&lt;br /&gt;
# submission averages). The delta R values are inverted to give weight prime&lt;br /&gt;
# values, which are dampened logarithmically to produce the final weights.&lt;br /&gt;
#&lt;br /&gt;
# Params&lt;br /&gt;
#   - reviews: a 2-D array with one row per reviewer, holding that reviewer's&lt;br /&gt;
#     mark for each submission (nil where the review is incomplete)&lt;br /&gt;
#&lt;br /&gt;
# Returns&lt;br /&gt;
#   An array of reputation scores, one score per reviewer, indicating their reputation in the system.&lt;br /&gt;
def calculate_reputation_score(reviews)&lt;br /&gt;
  # Initialize arrays to store intermediate values&lt;br /&gt;
  delta_r = []&lt;br /&gt;
  weight_prime = []&lt;br /&gt;
  weight = []&lt;br /&gt;
&lt;br /&gt;
  # Calculate the average grade per submission (column-wise, skipping nils)&lt;br /&gt;
  num_submissions = reviews.map { |row| row.length }.max&lt;br /&gt;
  grades = (0...num_submissions).map do |submission_index|&lt;br /&gt;
    marks = reviews.map { |row| row[submission_index] }.compact&lt;br /&gt;
    marks.sum.to_f / marks.length&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate delta R per reviewer. Iterating over the original indices&lt;br /&gt;
  # (rather than compacting first) keeps every mark aligned with the&lt;br /&gt;
  # average grade of the same submission.&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    reviewer_delta_r = 0.0&lt;br /&gt;
    completed_reviews = 0&lt;br /&gt;
    reviewer_marks.each_with_index do |grade, submission_index|&lt;br /&gt;
      next if grade.nil?&lt;br /&gt;
      reviewer_delta_r += (grade - grades[submission_index]) ** 2&lt;br /&gt;
      completed_reviews += 1&lt;br /&gt;
    end&lt;br /&gt;
    delta_r &amp;lt;&amp;lt; reviewer_delta_r / completed_reviews&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate weight prime&lt;br /&gt;
  average_delta_r = delta_r.sum / delta_r.length.to_f&lt;br /&gt;
&lt;br /&gt;
  delta_r.each do |reviewer_delta_r|&lt;br /&gt;
    weight_prime &amp;lt;&amp;lt; average_delta_r / reviewer_delta_r&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate reputation weight, dampening values above 2 logarithmically&lt;br /&gt;
  weight_prime.each do |reviewer_weight_prime|&lt;br /&gt;
    if reviewer_weight_prime &amp;lt;= 2&lt;br /&gt;
      weight &amp;lt;&amp;lt; reviewer_weight_prime.round(2)&lt;br /&gt;
    else&lt;br /&gt;
      weight &amp;lt;&amp;lt; (2 + Math.log(reviewer_weight_prime - 1)).round(2)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Return the reputation weights&lt;br /&gt;
  weight&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
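To make the weighting steps concrete, here is a small hand-worked sketch with illustrative numbers (not the project's test data), assuming the reviewer-major layout used above:

```ruby
# Illustrative 3-reviewer, 2-submission walk-through of the weighting steps.
marks = [
  [10, 2],  # close to the consensus
  [9,  3],  # close to the consensus
  [1,  9]   # "rogue" reviewer, far from the consensus
]

# Average grade per submission (column-wise means)
num_subs = 2
grades = (0...num_subs).map do |s|
  column = marks.map { |row| row[s] }
  column.sum.to_f / column.length
end
# grades is roughly [6.67, 4.67]

# Delta R: mean squared deviation of each reviewer from the averages
delta_r = marks.map do |row|
  row.each_with_index.sum { |g, s| (g - grades[s]) ** 2 } / num_subs.to_f
end

# Raw weight: class-average delta R divided by the reviewer's own delta R,
# so accurate reviewers get the largest weights
avg_delta = delta_r.sum / delta_r.length
weights = delta_r.map { |d| avg_delta / d }
# The rogue reviewer receives the smallest weight
```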
&lt;br /&gt;
== Objective 3: Validate the accuracy of the newly implemented Hamer algorithm ==&lt;br /&gt;
&lt;br /&gt;
We test the newly implemented Hamer algorithm against our scenarios and verify that its output matches the expected values.&lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe ReputationWebServiceController do&lt;br /&gt;
  it &amp;quot;calculates the correct Hamer reputation weights&amp;quot; do&lt;br /&gt;
    # reviews is the reviewer-major 2-D marks array built from INPUTS_new&lt;br /&gt;
    weights = ReputationWebServiceController.new.calculate_reputation_score(reviews)&lt;br /&gt;
    keys = [&amp;quot;maxtoall&amp;quot;, &amp;quot;mintoall&amp;quot;, &amp;quot;mediantoall&amp;quot;, &amp;quot;incomplete_review&amp;quot;, &amp;quot;max_incomplete&amp;quot;, &amp;quot;min_incomplete&amp;quot;, &amp;quot;sametoall&amp;quot;, &amp;quot;passing1&amp;quot;, &amp;quot;passing2&amp;quot;, &amp;quot;passing3&amp;quot;]&lt;br /&gt;
    # Round to two decimals to match the precision of the EXPECTED values&lt;br /&gt;
    rounded_weights = weights.map { |w| w.round(2) }&lt;br /&gt;
    result_hash = keys.zip(rounded_weights).to_h&lt;br /&gt;
    expect(result_hash).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results===&lt;br /&gt;
&lt;br /&gt;
[[File:Results hamer new.jpeg]]&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
The observed results skew toward lower values, primarily because nil values were included and treated as zeros in our analysis. This skews the scores downward and potentially affects the accuracy of our findings. To improve robustness, alternative approaches such as substituting the median or a random value for a nil score are worth exploring. However, how incomplete reviews containing nil values are handled in the input dataset must be considered carefully, since it significantly influences the integrity and reliability of the results.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this project, we aimed to test the accuracy of the Hamer algorithm used for assessing the credibility of reviewers in a peer assessment system. We began by developing test scenarios to validate the algorithm and ensure the accuracy of its output values, covering a range of reviewer behaviors, including reviewers who gave uniformly extreme scores.&lt;br /&gt;
&lt;br /&gt;
It was established that the original reputation web server was implemented incorrectly.&lt;br /&gt;
&lt;br /&gt;
As a result, we proceeded to reimplement the Hamer algorithm in Ruby, incorporating adjustments to handle nil values appropriately. Subsequently, we validated the accuracy of the newly implemented algorithm using the same testing scenarios. While the results initially showed a skew towards lower values due to our treatment of nil values, we acknowledge the need for further refinement to handle these cases more effectively.&lt;br /&gt;
&lt;br /&gt;
In conclusion, this project highlights the importance of rigorous testing and implementation adjustments in ensuring the reliability of algorithms used in peer assessment systems. Moving forward, we recommend further refinements and validations to enhance the accuracy and robustness of the Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [https://github.com/expertiza/expertiza/pull/2778 here]&lt;br /&gt;
&lt;br /&gt;
Link to GitHub Project page: [https://github.com/users/Prachit99/projects/1 here]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [https://drive.google.com/file/d/1gZ5iDgqMW3COOT9Uw-_yhJlwJLZDxz-s/view?usp=sharing here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=154984</id>
		<title>CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire helper, review bids helper</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2440_Testing_for_questionnaire_helper,_review_bids_helper&amp;diff=154984"/>
		<updated>2024-04-08T16:05:25Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: Created page with &amp;quot;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper  == Project Overview ==  === Problem Statement ===    === Objectives ===  * bbnn  === Files Involved ===  * reimplemented algorithm: /app/controllers/reputation_web_service_controller.rb * test file: /spec/controllers/reputation_mock_web_server_hamer.rb  === Mentor ===  * Muhammet Mustafa Olmez (molmez@ncsu.edu)  === Team Members ===  * Neha Vijay Patil (n...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 E2440. Testing for questionnaire_helper, review_bids_helper&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* bbnn&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* reimplemented algorithm: /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
* test file: /spec/controllers/reputation_mock_web_server_hamer.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Hamer Algorithm ==&lt;br /&gt;
&lt;br /&gt;
The grading algorithm described in the paper is designed to provide a reward to reviewers who participate effectively by allocating a portion of the assignment mark to the review, with the review mark reflecting the quality of the grading. Here's an explanation of the algorithm:&lt;br /&gt;
&lt;br /&gt;
1. Review Allocation: Each reviewer is assigned a number of essays to grade. The paper suggests assigning at least five essays, with ten being ideal. Assuming each review takes 20 minutes, ten reviews can be completed in about three and a half hours.&lt;br /&gt;
&lt;br /&gt;
2. Grading Process:&lt;br /&gt;
* Once the reviewing is complete, grades are generated for each essay and weights are assigned to each reviewer.&lt;br /&gt;
* The essay grades are computed by averaging the individual grades from all the reviewers assigned to that essay.&lt;br /&gt;
* Initially, all reviewers are given equal weight in the averaging process.&lt;br /&gt;
* The algorithm assumes that some reviewers will perform better than others. It measures this by comparing the grades assigned by each reviewer to the averaged grades: the larger the difference between the assigned and averaged grades, the more out of step with the consensus view of the class the reviewer is considered to be.&lt;br /&gt;
* The algorithm adjusts the weighting of the reviewers based on this difference. Reviewers who are closer to the consensus view are given higher weights, while those who deviate significantly are given lower weights.&lt;br /&gt;
&lt;br /&gt;
3. Iterative Process:&lt;br /&gt;
* The calculation of grades and weights is an iterative process. Each time the grades are calculated, the weights need to be updated, and each change in the weights affects the grades.&lt;br /&gt;
* Convergence occurs quickly, typically requiring four to six iterations before a solution (a &amp;quot;fix-point&amp;quot;) is reached.&lt;br /&gt;
&lt;br /&gt;
4. Weight Adjustment:&lt;br /&gt;
* The weights assigned to reviewers are adjusted based on the difference between the assigned and averaged grades. Reviewers with larger discrepancies have their weights adjusted inversely proportional to this difference.&lt;br /&gt;
* To prevent excessively large weights, a logarithmic dampening function is applied, allowing weights to rise to twice the class average before further increases are awarded sparingly.&lt;br /&gt;
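The dampening rule described above can be sketched as a small function (our reading of the description, not the project's exact code):

```ruby
# Sketch of the logarithmic dampening described above: raw weights up to
# twice the class average (normalized here to 2) pass through unchanged;
# larger weights are compressed so no single reviewer dominates.
def dampen(weight_prime)
  weight_prime > 2 ? 2 + Math.log(weight_prime - 1) : weight_prime
end

dampen(1.5)  # unchanged: 1.5
dampen(5.0)  # compressed to 2 + ln(4), about 3.39
```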
&lt;br /&gt;
5. Properties:&lt;br /&gt;
* The algorithm aims to identify and diminish the impact of &amp;quot;rogue&amp;quot; reviewers who may inject random or arbitrary grades into the peer assessment process.&lt;br /&gt;
* By adjusting reviewer weights based on their grading accuracy, the algorithm aims to improve the reliability of the grading process in the presence of such rogue reviewers.&lt;br /&gt;
&lt;br /&gt;
Overall, the algorithm seeks to balance the contributions of different reviewers based on the accuracy of their grading, ultimately aiming to produce reliable grades for each essay in a peer assessment scenario.&lt;br /&gt;
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios ==&lt;br /&gt;
We assumed 10 reviewers reviewing up to 4 submissions each to cover the following test scenarios:&lt;br /&gt;
* 3 cases where reviewers give credible scores (passing1, passing2, passing3)&lt;br /&gt;
* a case where a reviewer gives the maximum score (10) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the minimum score (1) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the median score (5) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the same score (3) to all submissions (should be flagged)&lt;br /&gt;
* cases where a reviewer leaves some reviews incomplete (incomplete_review, max_incomplete, min_incomplete)&lt;br /&gt;
&lt;br /&gt;
=== Object Creation ===&lt;br /&gt;
Below is the Input object for tests that cover all the above scenarios:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
INPUTS_new = {&lt;br /&gt;
    &amp;quot;submission1&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 9&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission2&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission3&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 7,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission4&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 6,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
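The reimplemented method described under Objective 2 consumes a reviewer-major 2-D array rather than this submission-keyed hash. As a minimal illustrative sketch — the `pivot_reviews` helper below is hypothetical, not part of the Expertiza codebase — the pivot can be done like this:

```ruby
require 'json'

# Hypothetical helper (not from the Expertiza codebase): pivot a
# submission-keyed score hash, shaped like INPUTS_new above, into one
# row of marks per reviewer, with nil where a review is missing.
def pivot_reviews(inputs)
  submissions = inputs.keys
  reviewers = inputs.values.flat_map { |scores| scores.keys }.uniq
  reviewers.map do |reviewer|
    submissions.map { |sub| inputs[sub][reviewer] }
  end
end

# Trimmed two-reviewer example; "passing1" skips submission2
sample = JSON.parse('{
  "submission1": { "maxtoall": 10, "passing1": 10 },
  "submission2": { "maxtoall": 10 },
  "submission3": { "maxtoall": 10, "passing1": 7 }
}')
rows = pivot_reviews(sample)
# rows is [[10, 10, 10], [10, nil, 7]]
```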
&lt;br /&gt;
=== Expected Hamer Values ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
EXPECTED = {&lt;br /&gt;
    &amp;quot;Hamer&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 2.65,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 2.41,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 1.03,&lt;br /&gt;
        &amp;quot;incomplete_review&amp;quot;: 2.31,&lt;br /&gt;
        &amp;quot;max_incomplete&amp;quot;: 2.57,&lt;br /&gt;
        &amp;quot;min_incomplete&amp;quot;: 2.48,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;:1.58,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 2.17,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 1.73,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 1.23,&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values. ==&lt;br /&gt;
&lt;br /&gt;
As established earlier, the values returned by the reputation server do not match the expected values, so we concluded that the PeerLogic web service is implemented incorrectly. In this phase, we reimplemented the algorithm in Ruby as a method in the controller file /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
&lt;br /&gt;
=== Changes made in implementation ===&lt;br /&gt;
* Coded the algorithm in Ruby in a controller.&lt;br /&gt;
* Added handling for nil values (incomplete reviews).&lt;br /&gt;
&lt;br /&gt;
=== Code Snippet ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Method: calculate_reputation_score&lt;br /&gt;
# Calculates a reputation score for each reviewer from the marks they gave.&lt;br /&gt;
# It first computes the average grade for each submission, then each reviewer's&lt;br /&gt;
# delta R value (the mean squared deviation of that reviewer's marks from the&lt;br /&gt;
# submission averages). The delta R values are inverted to give weight prime&lt;br /&gt;
# values, which are dampened logarithmically to produce the final weights.&lt;br /&gt;
#&lt;br /&gt;
# Params&lt;br /&gt;
#   - reviews: a 2-D array with one row per reviewer, holding that reviewer's&lt;br /&gt;
#     mark for each submission (nil where the review is incomplete)&lt;br /&gt;
#&lt;br /&gt;
# Returns&lt;br /&gt;
#   An array of reputation scores, one score per reviewer, indicating their reputation in the system.&lt;br /&gt;
def calculate_reputation_score(reviews)&lt;br /&gt;
  # Initialize arrays to store intermediate values&lt;br /&gt;
  delta_r = []&lt;br /&gt;
  weight_prime = []&lt;br /&gt;
  weight = []&lt;br /&gt;
&lt;br /&gt;
  # Calculate the average grade per submission (column-wise, skipping nils)&lt;br /&gt;
  num_submissions = reviews.map { |row| row.length }.max&lt;br /&gt;
  grades = (0...num_submissions).map do |submission_index|&lt;br /&gt;
    marks = reviews.map { |row| row[submission_index] }.compact&lt;br /&gt;
    marks.sum.to_f / marks.length&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate delta R per reviewer. Iterating over the original indices&lt;br /&gt;
  # (rather than compacting first) keeps every mark aligned with the&lt;br /&gt;
  # average grade of the same submission.&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    reviewer_delta_r = 0.0&lt;br /&gt;
    completed_reviews = 0&lt;br /&gt;
    reviewer_marks.each_with_index do |grade, submission_index|&lt;br /&gt;
      next if grade.nil?&lt;br /&gt;
      reviewer_delta_r += (grade - grades[submission_index]) ** 2&lt;br /&gt;
      completed_reviews += 1&lt;br /&gt;
    end&lt;br /&gt;
    delta_r &amp;lt;&amp;lt; reviewer_delta_r / completed_reviews&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate weight prime&lt;br /&gt;
  average_delta_r = delta_r.sum / delta_r.length.to_f&lt;br /&gt;
&lt;br /&gt;
  delta_r.each do |reviewer_delta_r|&lt;br /&gt;
    weight_prime &amp;lt;&amp;lt; average_delta_r / reviewer_delta_r&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate reputation weight, dampening values above 2 logarithmically&lt;br /&gt;
  weight_prime.each do |reviewer_weight_prime|&lt;br /&gt;
    if reviewer_weight_prime &amp;lt;= 2&lt;br /&gt;
      weight &amp;lt;&amp;lt; reviewer_weight_prime.round(2)&lt;br /&gt;
    else&lt;br /&gt;
      weight &amp;lt;&amp;lt; (2 + Math.log(reviewer_weight_prime - 1)).round(2)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Return the reputation weights&lt;br /&gt;
  weight&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
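To make the weighting steps concrete, here is a small hand-worked sketch with illustrative numbers (not the project's test data), assuming the reviewer-major layout used above:

```ruby
# Illustrative 3-reviewer, 2-submission walk-through of the weighting steps.
marks = [
  [10, 2],  # close to the consensus
  [9,  3],  # close to the consensus
  [1,  9]   # "rogue" reviewer, far from the consensus
]

# Average grade per submission (column-wise means)
num_subs = 2
grades = (0...num_subs).map do |s|
  column = marks.map { |row| row[s] }
  column.sum.to_f / column.length
end
# grades is roughly [6.67, 4.67]

# Delta R: mean squared deviation of each reviewer from the averages
delta_r = marks.map do |row|
  row.each_with_index.sum { |g, s| (g - grades[s]) ** 2 } / num_subs.to_f
end

# Raw weight: class-average delta R divided by the reviewer's own delta R,
# so accurate reviewers get the largest weights
avg_delta = delta_r.sum / delta_r.length
weights = delta_r.map { |d| avg_delta / d }
# The rogue reviewer receives the smallest weight
```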
&lt;br /&gt;
== Objective 3: Validate the accuracy of the newly implemented Hamer algorithm ==&lt;br /&gt;
&lt;br /&gt;
We test the newly implemented Hamer algorithm against our scenarios and verify that its output matches the expected values.&lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe ReputationWebServiceController do&lt;br /&gt;
  it &amp;quot;calculates the correct Hamer reputation weights&amp;quot; do&lt;br /&gt;
    # reviews is the reviewer-major 2-D marks array built from INPUTS_new&lt;br /&gt;
    weights = ReputationWebServiceController.new.calculate_reputation_score(reviews)&lt;br /&gt;
    keys = [&amp;quot;maxtoall&amp;quot;, &amp;quot;mintoall&amp;quot;, &amp;quot;mediantoall&amp;quot;, &amp;quot;incomplete_review&amp;quot;, &amp;quot;max_incomplete&amp;quot;, &amp;quot;min_incomplete&amp;quot;, &amp;quot;sametoall&amp;quot;, &amp;quot;passing1&amp;quot;, &amp;quot;passing2&amp;quot;, &amp;quot;passing3&amp;quot;]&lt;br /&gt;
    # Round to two decimals to match the precision of the EXPECTED values&lt;br /&gt;
    rounded_weights = weights.map { |w| w.round(2) }&lt;br /&gt;
    result_hash = keys.zip(rounded_weights).to_h&lt;br /&gt;
    expect(result_hash).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results===&lt;br /&gt;
&lt;br /&gt;
[[File:Results hamer new.jpeg]]&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
The observed results skew toward lower values, primarily because nil values were included and treated as zeros in our analysis. This skews the scores downward and potentially affects the accuracy of our findings. To improve robustness, alternative approaches such as substituting the median or a random value for a nil score are worth exploring. However, how incomplete reviews containing nil values are handled in the input dataset must be considered carefully, since it significantly influences the integrity and reliability of the results.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this project, we aimed to test the accuracy of the Hamer algorithm used for assessing the credibility of reviewers in a peer assessment system. We began by developing test scenarios to validate the algorithm and ensure the accuracy of its output values, covering a range of reviewer behaviors, including reviewers who gave uniformly extreme scores.&lt;br /&gt;
&lt;br /&gt;
It was established that the original reputation web server was implemented incorrectly.&lt;br /&gt;
&lt;br /&gt;
As a result, we proceeded to reimplement the Hamer algorithm in Ruby, incorporating adjustments to handle nil values appropriately. Subsequently, we validated the accuracy of the newly implemented algorithm using the same testing scenarios. While the results initially showed a skew towards lower values due to our treatment of nil values, we acknowledge the need for further refinement to handle these cases more effectively.&lt;br /&gt;
&lt;br /&gt;
In conclusion, this project highlights the importance of rigorous testing and implementation adjustments in ensuring the reliability of algorithms used in peer assessment systems. Moving forward, we recommend further refinements and validations to enhance the accuracy and robustness of the Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [https://github.com/expertiza/expertiza/pull/2778 here]&lt;br /&gt;
&lt;br /&gt;
Link to GitHub Project page: [https://github.com/users/Prachit99/projects/1 here]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [https://drive.google.com/file/d/1gZ5iDgqMW3COOT9Uw-_yhJlwJLZDxz-s/view?usp=sharing here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024&amp;diff=154982</id>
		<title>CSC/ECE 517 Spring 2024</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024&amp;diff=154982"/>
		<updated>2024-04-08T16:03:51Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* [[CSC/ECE 517 Spring 2024 - E2401 Implementing and testing import &amp;amp; export controllers]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2405 Refactor review_mapping_helper.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2407 Refactor review_mapping_controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2428 Replicate Roles and Institution UIs ReactJS]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2429 Reimplement student_task list]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2430 Reimplement student_task view]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2410. View for Results of Bidding ]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2414 Grading Audit Trail]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - NTNX-1 : Extend NDB Operator to Support Postgres HA]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - ‬NTNX-2‬‭ : Snapshot Functionality for provisioned databases]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2411 : Fix &amp;quot;Back&amp;quot; link on “New Late Policy” page]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2416.  Reimplement the Question hierarchy]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2420. Reimplement student_quizzes_controller]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2424. Reimplement the Bookmarks Controller]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2426. Create a UI for Assignment Edit page &amp;quot;Etc&amp;quot; tab in ReactJS]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2417. Reimplement submitted content controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2425. Create a Courses user interface in ReactJS]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2431. Reimplement  grades/view_team]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2404 Refactor student teams functionality]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2406 Refactor review_mapping_helper.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2403 Mentor-Meeting Management]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2421. Reimplement impersonating users (within impersonate controller.rb)]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2413. Testing - Answer Tagging]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2412. Testing for hamer.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2427. UI for questionnaire.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2419. Reimplement duties controller.rb and badges controller.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2432. UI for Participants.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - G2402 Implement REST client, REST API, and Graphql API endpoint for repositories]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - G2400 DevOp for GitHub Miner app]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2439 Testing for view_translation_substitutor.rb]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2443 Reimplement grades_controller]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2434 Reimplement Frontend for the Grades view]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2435 Implement Frontend for the My Profile]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2446 Implement Front End for Student Task List]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2444 Implement Frontend for the Review]]&lt;br /&gt;
* [[CSC/ECE 517 Spring 2024 - E2440 Testing for questionnaire_helper, review_bids_helper]]&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154666</id>
		<title>CSC/ECE 517 Spring 2024 - E2412. Testing for hamer.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154666"/>
		<updated>2024-03-25T04:34:04Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* GitHub Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 Program 3: First OSS project E2412. Testing for hamer.rb&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
&lt;br /&gt;
The practice of using student feedback on assignments as a grading tool is gaining traction among university professors and courses. This approach not only saves instructors and teaching assistants considerable time but also fosters a deeper understanding of assignments among students as they evaluate their peers' work. However, there is a concern that some students may not take their reviewing responsibilities seriously, potentially skewing the grading process by assigning extreme scores such as 100 or 0 arbitrarily. To address this issue, the Hamer algorithm was developed to assess the credibility and accuracy of reviewers. It generates reputation weights for each reviewer, which instructors can use to gauge their reliability or incorporate into grading calculations. Our goal here is to test this Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values.&lt;br /&gt;
* Verify the correctness of the reputation web server's Hamer values by accessing the URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.&lt;br /&gt;
* Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values.&lt;br /&gt;
* Validate the accuracy of the newly implemented Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* reimplemented algorithm: /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
* test file: /spec/controllers/reputation_mock_web_server_hamer.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Hamer Algorithm ==&lt;br /&gt;
&lt;br /&gt;
The grading algorithm described in the paper is designed to provide a reward to reviewers who participate effectively by allocating a portion of the assignment mark to the review, with the review mark reflecting the quality of the grading. Here's an explanation of the algorithm:&lt;br /&gt;
&lt;br /&gt;
1. Review Allocation: Each reviewer is assigned a number of essays to grade. The paper suggests assigning at least five essays, with ten being ideal. Assuming each review takes 20 minutes, ten reviews can be completed in about three and a half hours.&lt;br /&gt;
&lt;br /&gt;
2. Grading Process:&lt;br /&gt;
* Once the reviewing is complete, grades are generated for each essay and weights are assigned to each reviewer.&lt;br /&gt;
* The essay grades are computed by averaging the individual grades from all the reviewers assigned to that essay.&lt;br /&gt;
* Initially, all reviewers are given equal weight in the averaging process.&lt;br /&gt;
* The algorithm assumes that some reviewers will perform better than others. It measures this by comparing the grades assigned by each reviewer to the averaged grades. The larger the difference between the assigned and averaged grades, the more out of step the reviewer is considered with the consensus view of the class.&lt;br /&gt;
* The algorithm adjusts the weighting of the reviewers based on this difference. Reviewers who are closer to the consensus view are given higher weights, while those who deviate significantly are given lower weights.&lt;br /&gt;
&lt;br /&gt;
3. Iterative Process:&lt;br /&gt;
* The calculation of grades and weights is an iterative process. Each time the grades are calculated, the weights need to be updated, and each change in the weights affects the grades.&lt;br /&gt;
* Convergence occurs quickly, typically requiring four to six iterations before a solution (a &amp;quot;fix-point&amp;quot;) is reached.&lt;br /&gt;
&lt;br /&gt;
4. Weight Adjustment:&lt;br /&gt;
* The weights assigned to reviewers are adjusted based on the difference between the assigned and averaged grades. Reviewers with larger discrepancies have their weights adjusted inversely proportional to this difference.&lt;br /&gt;
* To prevent excessively large weights, a logarithmic dampening function is applied, allowing weights to rise to twice the class average before further increases are awarded sparingly.&lt;br /&gt;
&lt;br /&gt;
5. Properties:&lt;br /&gt;
* The algorithm aims to identify and diminish the impact of &amp;quot;rogue&amp;quot; reviewers who may inject random or arbitrary grades into the peer assessment process.&lt;br /&gt;
* By adjusting reviewer weights based on their grading accuracy, the algorithm aims to improve the reliability of the grading process in the presence of such rogue reviewers.&lt;br /&gt;
&lt;br /&gt;
Overall, the algorithm seeks to balance the contributions of different reviewers based on the accuracy of their grading, ultimately aiming to produce reliable grades for each essay in a peer assessment scenario.&lt;br /&gt;
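The iterative fix-point in steps 2&#8211;4 can be sketched in Ruby. This is a minimal illustration based only on the description above, not the project's code; the matrix layout (&amp;quot;marks[reviewer][essay]&amp;quot;), the fixed iteration count, the zero-deviation guard, and the damping threshold of 2 are our assumptions.&lt;br /&gt;

```ruby
# Minimal sketch of the iterative grade/weight fix-point described above.
# marks[reviewer][essay] layout, iteration count, and the damping threshold
# of 2 are assumptions; all names here are illustrative.
def hamer_fixpoint(marks, iterations: 6)
  n_reviewers = marks.length
  n_essays    = marks.first.length
  weights     = Array.new(n_reviewers, 1.0) # step 2: start with equal weights
  grades      = []

  iterations.times do
    # Essay grade: weighted average of all reviewers' marks for that essay.
    grades = (0...n_essays).map do |e|
      (0...n_reviewers).sum { |r| weights[r] * marks[r][e] } / weights.sum
    end

    # Each reviewer's mean squared distance from the consensus grades.
    deviations = marks.map do |row|
      row.each_with_index.sum { |g, e| (g - grades[e])**2 } / n_essays.to_f
    end

    # Step 4: weight inversely proportional to deviation, log-damped above 2.
    avg_dev = deviations.sum / n_reviewers
    weights = deviations.map do |d|
      w = d.zero? ? 2.0 : avg_dev / d # guard: a perfect match would divide by zero
      w <= 2 ? w : 2 + Math.log(w - 1)
    end
  end

  [grades, weights]
end
```

With four reviewers, two of whom give uniform extreme scores, the extreme reviewers end up with lower weights than the two who track the consensus, which is the behavior the paper describes.&lt;br /&gt;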
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios ==&lt;br /&gt;
We assumed 9 reviewers, each reviewing 4 submissions, to cover the following test scenarios:&lt;br /&gt;
* 3 cases where reviewers give credible scores (passing1, passing2, passing3)&lt;br /&gt;
* a case where a reviewer gives the maximum score (10) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the minimum score (1) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the median score (5) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the same score to all submissions (should be flagged)&lt;br /&gt;
&lt;br /&gt;
=== Object Creation ===&lt;br /&gt;
Below is the Input object for tests that cover all the above scenarios:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
INPUTS_new = {&lt;br /&gt;
    &amp;quot;submission1&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 9&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission2&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission3&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 7,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission4&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 6,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Expected Hamer Values ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
EXPECTED = {&lt;br /&gt;
    &amp;quot;Hamer&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 2.65,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 2.41,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 1.03,&lt;br /&gt;
        &amp;quot;incomplete_review&amp;quot;: 2.31,&lt;br /&gt;
        &amp;quot;max_incomplete&amp;quot;: 2.57,&lt;br /&gt;
        &amp;quot;min_incomplete&amp;quot;: 2.48,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;:1.58,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 2.17,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 1.73,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 1.23,&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values ==&lt;br /&gt;
&lt;br /&gt;
As established before, the values returned by the reputation server did not match the expected values, so we concluded that the PeerLogic web service is implemented incorrectly. In this phase, we implemented the algorithm in Ruby as a method in the controller file /app/controllers/reputation_web_service_controller.rb.&lt;br /&gt;
&lt;br /&gt;
=== Changes made in implementation ===&lt;br /&gt;
* Coded the algorithm in Ruby in a controller.&lt;br /&gt;
* Added handling for nil values (incomplete reviews).&lt;br /&gt;
&lt;br /&gt;
=== Code Snippet ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Method: calculate_reputation_score&lt;br /&gt;
# This method calculates the reputation scores for each reviewer based on the provided input data.&lt;br /&gt;
# It first parses the input JSON string to extract the submissions and their corresponding scores.&lt;br /&gt;
# Then, it calculates the average weighted grades per reviewer and the delta R values.&lt;br /&gt;
# Next, it calculates the weight prime values based on the delta R values.&lt;br /&gt;
# Finally, it calculates the reputation weights for each reviewer using the weight prime values.&lt;br /&gt;
#&lt;br /&gt;
# Params&lt;br /&gt;
#   - input_json: a JSON string representing the input data with submission scores&lt;br /&gt;
#&lt;br /&gt;
# Returns&lt;br /&gt;
#   An array of reputation scores, one score per reviewer, indicating their reputation in the system.&lt;br /&gt;
def calculate_reputation_score(reviews)&lt;br /&gt;
  # Parse the input JSON string&lt;br /&gt;
  # reviews = JSON.parse(input_json)&lt;br /&gt;
&lt;br /&gt;
  # Initialize arrays to store intermediate values&lt;br /&gt;
  grades = []&lt;br /&gt;
  delta_r = []&lt;br /&gt;
  weight_prime = []&lt;br /&gt;
  weight = []&lt;br /&gt;
&lt;br /&gt;
  # Calculate Average Weighted Grades per Reviewer&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    assignment_grade_average = reviewer_marks_without_nil.sum.to_f / reviewer_marks_without_nil.length&lt;br /&gt;
    grades &amp;lt;&amp;lt; assignment_grade_average&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate delta R&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    reviewer_delta_r = 0&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    reviewer_marks_without_nil.each_with_index do |grade, student_index|&lt;br /&gt;
      reviewer_delta_r += (grade - grades[student_index]) ** 2&lt;br /&gt;
    end&lt;br /&gt;
    delta_r &amp;lt;&amp;lt; reviewer_delta_r / reviewer_marks_without_nil.length&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate weight prime&lt;br /&gt;
  average_delta_r = delta_r.sum / delta_r.length.to_f&lt;br /&gt;
&lt;br /&gt;
  delta_r.each do |reviewer_delta_r|&lt;br /&gt;
    weight_prime &amp;lt;&amp;lt; average_delta_r / reviewer_delta_r&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate reputation weight&lt;br /&gt;
  weight_prime.each do |reviewer_weight_prime|&lt;br /&gt;
    if reviewer_weight_prime &amp;lt;= 2&lt;br /&gt;
      weight &amp;lt;&amp;lt; reviewer_weight_prime.round(2)&lt;br /&gt;
    else&lt;br /&gt;
      weight &amp;lt;&amp;lt; (2 + Math.log(reviewer_weight_prime - 1)).round(2)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Return the reputation weights&lt;br /&gt;
  weight&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
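The test input above is keyed by submission, while calculate_reputation_score takes one array of marks per reviewer. The glue between the two is not shown on this page; a sketch of what it might look like (the helper name reviews_from_inputs and the nil-for-missing convention are our assumptions) is:&lt;br /&gt;

```ruby
require 'json'

# Sketch of the assumed glue code: convert the per-submission INPUTS JSON
# into one array of marks per reviewer, with nil where a reviewer skipped
# a submission. Not the project's code.
def reviews_from_inputs(input_json)
  submissions = JSON.parse(input_json)
  reviewer_names = submissions.values.flat_map(&:keys).uniq
  matrix = reviewer_names.map do |reviewer|
    submissions.keys.sort.map { |s| submissions[s][reviewer] } # nil if absent
  end
  [reviewer_names, matrix]
end

inputs = {
  "submission1" => { "maxtoall" => 10, "passing1" => 10 },
  "submission2" => { "maxtoall" => 10, "passing1" => 3 }
}.to_json

names, reviews = reviews_from_inputs(inputs)
# names   => ["maxtoall", "passing1"]
# reviews => [[10, 10], [10, 3]]
```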
&lt;br /&gt;
== Objective 3: Validate the accuracy of the newly implemented Hamer algorithm ==&lt;br /&gt;
&lt;br /&gt;
We test the newly implemented Hamer algorithm with our scenarios and verify that its outputs match the expected values.&lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe ReputationWebServiceController do&lt;br /&gt;
    it &amp;quot;calculates the correct Hamer values&amp;quot; do&lt;br /&gt;
      weights = ReputationWebServiceController.new.calculate_reputation_score(reviews)&lt;br /&gt;
      keys = [&amp;quot;maxtoall&amp;quot;, &amp;quot;mintoall&amp;quot;, &amp;quot;mediantoall&amp;quot;, &amp;quot;incomplete_review&amp;quot;, &amp;quot;max_incomplete&amp;quot;, &amp;quot;min_incomplete&amp;quot;, &amp;quot;sametoall&amp;quot;, &amp;quot;passing1&amp;quot;, &amp;quot;passing2&amp;quot;, &amp;quot;passing3&amp;quot;]&lt;br /&gt;
      rounded_weights = weights.map { |w| w.round(2) }&lt;br /&gt;
      result_hash = keys.zip(rounded_weights).to_h&lt;br /&gt;
      expect(result_hash).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results===&lt;br /&gt;
&lt;br /&gt;
[[File:Results hamer new.jpeg]]&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
The observed results skew toward lower values, primarily because we treated nil values as zeros. This choice biases the scores downward and may affect the accuracy of our findings. To make the analysis more robust, alternative approaches are worth exploring, such as substituting the median score or excluding incomplete reviews entirely. Whatever approach is chosen, the handling of incomplete reviews containing nil values directly affects the integrity and reliability of the resulting reputation weights.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this project, we aimed to test the accuracy of the Hamer algorithm used for assessing the credibility of reviewers in a peer-assessment system. We began by developing test scenarios to validate the algorithm and the accuracy of its output values. These covered a range of reviewer behaviors, including reviewers who gave extreme scores.&lt;br /&gt;
&lt;br /&gt;
It was established that the original reputation web server was implemented incorrectly.&lt;br /&gt;
&lt;br /&gt;
As a result, we proceeded to reimplement the Hamer algorithm in Ruby, incorporating adjustments to handle nil values appropriately. Subsequently, we validated the accuracy of the newly implemented algorithm using the same testing scenarios. While the results initially showed a skew towards lower values due to our treatment of nil values, we acknowledge the need for further refinement to handle these cases more effectively.&lt;br /&gt;
&lt;br /&gt;
In conclusion, this project highlights the importance of rigorous testing and implementation adjustments in ensuring the reliability of algorithms used in peer assessment systems. Moving forward, we recommend further refinements and validations to enhance the accuracy and robustness of the Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [https://github.com/expertiza/expertiza/pull/2778 here]&lt;br /&gt;
&lt;br /&gt;
Link to Github Project page: [https://github.com/users/Prachit99/projects/1 here]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [https://drive.google.com/file/d/1gZ5iDgqMW3COOT9Uw-_yhJlwJLZDxz-s/view?usp=sharing here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154665</id>
		<title>CSC/ECE 517 Spring 2024 - E2412. Testing for hamer.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154665"/>
		<updated>2024-03-25T04:32:46Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 Program 3: First OSS project E2412. Testing for hamer.rb&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
&lt;br /&gt;
The practice of using student feedback on assignments as a grading tool is gaining traction among university professors and courses. This approach not only saves instructors and teaching assistants considerable time but also fosters a deeper understanding of assignments among students as they evaluate their peers' work. However, there is a concern that some students may not take their reviewing responsibilities seriously, potentially skewing the grading process by assigning extreme scores such as 100 or 0 arbitrarily. To address this issue, the Hamer algorithm was developed to assess the credibility and accuracy of reviewers. It generates reputation weights for each reviewer, which instructors can use to gauge their reliability or incorporate into grading calculations. Our goal here is to test this Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values.&lt;br /&gt;
* Verify the correctness of the reputation web server's Hamer values by accessing the URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.&lt;br /&gt;
* Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values.&lt;br /&gt;
* Validate the accuracy of the newly implemented Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* reimplemented algorithm: /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
* test file: /spec/controllers/reputation_mock_web_server_hamer.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Hamer Algorithm ==&lt;br /&gt;
&lt;br /&gt;
The grading algorithm described in the paper is designed to provide a reward to reviewers who participate effectively by allocating a portion of the assignment mark to the review, with the review mark reflecting the quality of the grading. Here's an explanation of the algorithm:&lt;br /&gt;
&lt;br /&gt;
1. Review Allocation: Each reviewer is assigned a number of essays to grade. The paper suggests assigning at least five essays, with ten being ideal. Assuming each review takes 20 minutes, ten reviews can be completed in about three and a half hours.&lt;br /&gt;
&lt;br /&gt;
2. Grading Process:&lt;br /&gt;
* Once the reviewing is complete, grades are generated for each essay and weights are assigned to each reviewer.&lt;br /&gt;
* The essay grades are computed by averaging the individual grades from all the reviewers assigned to that essay.&lt;br /&gt;
* Initially, all reviewers are given equal weight in the averaging process.&lt;br /&gt;
* The algorithm assumes that some reviewers will perform better than others. It measures this by comparing the grades assigned by each reviewer to the averaged grades. The larger the difference between the assigned and averaged grades, the more out of step the reviewer is considered with the consensus view of the class.&lt;br /&gt;
* The algorithm adjusts the weighting of the reviewers based on this difference. Reviewers who are closer to the consensus view are given higher weights, while those who deviate significantly are given lower weights.&lt;br /&gt;
&lt;br /&gt;
3. Iterative Process:&lt;br /&gt;
* The calculation of grades and weights is an iterative process. Each time the grades are calculated, the weights need to be updated, and each change in the weights affects the grades.&lt;br /&gt;
* Convergence occurs quickly, typically requiring four to six iterations before a solution (a &amp;quot;fix-point&amp;quot;) is reached.&lt;br /&gt;
&lt;br /&gt;
4. Weight Adjustment:&lt;br /&gt;
* The weights assigned to reviewers are adjusted based on the difference between the assigned and averaged grades. Reviewers with larger discrepancies have their weights adjusted inversely proportional to this difference.&lt;br /&gt;
* To prevent excessively large weights, a logarithmic dampening function is applied, allowing weights to rise to twice the class average before further increases are awarded sparingly.&lt;br /&gt;
&lt;br /&gt;
5. Properties:&lt;br /&gt;
* The algorithm aims to identify and diminish the impact of &amp;quot;rogue&amp;quot; reviewers who may inject random or arbitrary grades into the peer assessment process.&lt;br /&gt;
* By adjusting reviewer weights based on their grading accuracy, the algorithm aims to improve the reliability of the grading process in the presence of such rogue reviewers.&lt;br /&gt;
&lt;br /&gt;
Overall, the algorithm seeks to balance the contributions of different reviewers based on the accuracy of their grading, ultimately aiming to produce reliable grades for each essay in a peer assessment scenario.&lt;br /&gt;
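The iterative fix-point in steps 2&#8211;4 can be sketched in Ruby. This is a minimal illustration based only on the description above, not the project's code; the matrix layout (&amp;quot;marks[reviewer][essay]&amp;quot;), the fixed iteration count, the zero-deviation guard, and the damping threshold of 2 are our assumptions.&lt;br /&gt;

```ruby
# Minimal sketch of the iterative grade/weight fix-point described above.
# marks[reviewer][essay] layout, iteration count, and the damping threshold
# of 2 are assumptions; all names here are illustrative.
def hamer_fixpoint(marks, iterations: 6)
  n_reviewers = marks.length
  n_essays    = marks.first.length
  weights     = Array.new(n_reviewers, 1.0) # step 2: start with equal weights
  grades      = []

  iterations.times do
    # Essay grade: weighted average of all reviewers' marks for that essay.
    grades = (0...n_essays).map do |e|
      (0...n_reviewers).sum { |r| weights[r] * marks[r][e] } / weights.sum
    end

    # Each reviewer's mean squared distance from the consensus grades.
    deviations = marks.map do |row|
      row.each_with_index.sum { |g, e| (g - grades[e])**2 } / n_essays.to_f
    end

    # Step 4: weight inversely proportional to deviation, log-damped above 2.
    avg_dev = deviations.sum / n_reviewers
    weights = deviations.map do |d|
      w = d.zero? ? 2.0 : avg_dev / d # guard: a perfect match would divide by zero
      w <= 2 ? w : 2 + Math.log(w - 1)
    end
  end

  [grades, weights]
end
```

With four reviewers, two of whom give uniform extreme scores, the extreme reviewers end up with lower weights than the two who track the consensus, which is the behavior the paper describes.&lt;br /&gt;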
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios ==&lt;br /&gt;
We assumed 9 reviewers, each reviewing 4 submissions, to cover the following test scenarios:&lt;br /&gt;
* 3 cases where reviewers give credible scores (passing1, passing2, passing3)&lt;br /&gt;
* a case where a reviewer gives the maximum score (10) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the minimum score (1) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the median score (5) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the same score to all submissions (should be flagged)&lt;br /&gt;
&lt;br /&gt;
=== Object Creation ===&lt;br /&gt;
Below is the Input object for tests that cover all the above scenarios:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
INPUTS_new = {&lt;br /&gt;
    &amp;quot;submission1&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 9&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission2&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission3&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 7,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission4&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 6,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Expected Hamer Values ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
EXPECTED = {&lt;br /&gt;
    &amp;quot;Hamer&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 2.65,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 2.41,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 1.03,&lt;br /&gt;
        &amp;quot;incomplete_review&amp;quot;: 2.31,&lt;br /&gt;
        &amp;quot;max_incomplete&amp;quot;: 2.57,&lt;br /&gt;
        &amp;quot;min_incomplete&amp;quot;: 2.48,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;:1.58,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 2.17,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 1.73,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 1.23,&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values ==&lt;br /&gt;
&lt;br /&gt;
As established before, the values returned by the reputation server did not match the expected values, so we concluded that the PeerLogic web service is implemented incorrectly. In this phase, we implemented the algorithm in Ruby as a method in the controller file /app/controllers/reputation_web_service_controller.rb.&lt;br /&gt;
&lt;br /&gt;
=== Changes made in implementation ===&lt;br /&gt;
* Coded the algorithm in Ruby in a controller.&lt;br /&gt;
* Added handling for nil values (incomplete reviews).&lt;br /&gt;
&lt;br /&gt;
=== Code Snippet ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Method: calculate_reputation_score&lt;br /&gt;
# This method calculates the reputation scores for each reviewer based on the provided input data.&lt;br /&gt;
# It first parses the input JSON string to extract the submissions and their corresponding scores.&lt;br /&gt;
# Then, it calculates the average weighted grades per reviewer and the delta R values.&lt;br /&gt;
# Next, it calculates the weight prime values based on the delta R values.&lt;br /&gt;
# Finally, it calculates the reputation weights for each reviewer using the weight prime values.&lt;br /&gt;
#&lt;br /&gt;
# Params&lt;br /&gt;
#   - input_json: a JSON string representing the input data with submission scores&lt;br /&gt;
#&lt;br /&gt;
# Returns&lt;br /&gt;
#   An array of reputation scores, one score per reviewer, indicating their reputation in the system.&lt;br /&gt;
def calculate_reputation_score(reviews)&lt;br /&gt;
  # Parse the input JSON string&lt;br /&gt;
  # reviews = JSON.parse(input_json)&lt;br /&gt;
&lt;br /&gt;
  # Initialize arrays to store intermediate values&lt;br /&gt;
  grades = []&lt;br /&gt;
  delta_r = []&lt;br /&gt;
  weight_prime = []&lt;br /&gt;
  weight = []&lt;br /&gt;
&lt;br /&gt;
  # Calculate Average Weighted Grades per Reviewer&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    assignment_grade_average = reviewer_marks_without_nil.sum.to_f / reviewer_marks_without_nil.length&lt;br /&gt;
    grades &amp;lt;&amp;lt; assignment_grade_average&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate delta R&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    reviewer_delta_r = 0&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    reviewer_marks_without_nil.each_with_index do |grade, student_index|&lt;br /&gt;
      reviewer_delta_r += (grade - grades[student_index]) ** 2&lt;br /&gt;
    end&lt;br /&gt;
    delta_r &amp;lt;&amp;lt; reviewer_delta_r / reviewer_marks_without_nil.length&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate weight prime&lt;br /&gt;
  average_delta_r = delta_r.sum / delta_r.length.to_f&lt;br /&gt;
&lt;br /&gt;
  delta_r.each do |reviewer_delta_r|&lt;br /&gt;
    weight_prime &amp;lt;&amp;lt; average_delta_r / reviewer_delta_r&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate reputation weight&lt;br /&gt;
  weight_prime.each do |reviewer_weight_prime|&lt;br /&gt;
    if reviewer_weight_prime &amp;lt;= 2&lt;br /&gt;
      weight &amp;lt;&amp;lt; reviewer_weight_prime.round(2)&lt;br /&gt;
    else&lt;br /&gt;
      weight &amp;lt;&amp;lt; (2 + Math.log(reviewer_weight_prime - 1)).round(2)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Return the reputation weights&lt;br /&gt;
  weight&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
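The test input above is keyed by submission, while calculate_reputation_score takes one array of marks per reviewer. The glue between the two is not shown on this page; a sketch of what it might look like (the helper name reviews_from_inputs and the nil-for-missing convention are our assumptions) is:&lt;br /&gt;

```ruby
require 'json'

# Sketch of the assumed glue code: convert the per-submission INPUTS JSON
# into one array of marks per reviewer, with nil where a reviewer skipped
# a submission. Not the project's code.
def reviews_from_inputs(input_json)
  submissions = JSON.parse(input_json)
  reviewer_names = submissions.values.flat_map(&:keys).uniq
  matrix = reviewer_names.map do |reviewer|
    submissions.keys.sort.map { |s| submissions[s][reviewer] } # nil if absent
  end
  [reviewer_names, matrix]
end

inputs = {
  "submission1" => { "maxtoall" => 10, "passing1" => 10 },
  "submission2" => { "maxtoall" => 10, "passing1" => 3 }
}.to_json

names, reviews = reviews_from_inputs(inputs)
# names   => ["maxtoall", "passing1"]
# reviews => [[10, 10], [10, 3]]
```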
&lt;br /&gt;
== Objective 3: Validate the accuracy of the newly implemented Hamer algorithm ==&lt;br /&gt;
&lt;br /&gt;
We test the newly implemented Hamer algorithm with our scenarios and verify that its outputs match the expected values.&lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe ReputationWebServiceController do&lt;br /&gt;
    it &amp;quot;calculates the correct Hamer values&amp;quot; do&lt;br /&gt;
      weights = ReputationWebServiceController.new.calculate_reputation_score(reviews)&lt;br /&gt;
      keys = [&amp;quot;maxtoall&amp;quot;, &amp;quot;mintoall&amp;quot;, &amp;quot;mediantoall&amp;quot;, &amp;quot;incomplete_review&amp;quot;, &amp;quot;max_incomplete&amp;quot;, &amp;quot;min_incomplete&amp;quot;, &amp;quot;sametoall&amp;quot;, &amp;quot;passing1&amp;quot;, &amp;quot;passing2&amp;quot;, &amp;quot;passing3&amp;quot;]&lt;br /&gt;
      rounded_weights = weights.map { |w| w.round(2) }&lt;br /&gt;
      result_hash = keys.zip(rounded_weights).to_h&lt;br /&gt;
      expect(result_hash).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
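The spec above relies on a reviews object defined elsewhere in the spec file. As a sketch of how the per-reviewer mark arrays can be derived from the per-submission input hash used earlier on this page (the two-submission sample hash and all variable names here are ours, for illustration only):&lt;br /&gt;

```ruby
require 'json'

# Hypothetical two-submission excerpt in the same shape as the page's input
# object (contents are illustrative, not the real test data).
inputs_json = {
  "submission1" => { "maxtoall" => 10, "passing1" => 10 },
  "submission2" => { "maxtoall" => 10, "passing1" => 3 }
}.to_json

by_submission = JSON.parse(inputs_json)
reviewer_names = by_submission.values.flat_map(&:keys).uniq

# One array of marks per reviewer; a reviewer with no score for a submission
# gets nil, which matches the nil handling in the reimplementation.
reviews = reviewer_names.map do |name|
  by_submission.values.map { |scores| scores[name] }
end
```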
&lt;br /&gt;
=== Results ===&lt;br /&gt;
&lt;br /&gt;
[[File:Results hamer new.jpeg]]&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
The observed results indicate a tendency towards lower values, primarily due to our decision to include nil values and treat them as zeros in our analysis. This treatment has led to a skew in the scores, favoring lower values and potentially impacting the accuracy of our findings. To address this issue and improve the robustness of our analysis, it is advisable to explore alternative approaches such as using median or random values instead of treating nil values as zeros. However, we must also carefully consider how to handle incomplete reviews that contain nil values in our input dataset, as this can significantly influence the overall integrity and reliability of our results and conclusions.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this project, we aimed to test the accuracy of the Hamer algorithm used for assessing the credibility of reviewers in a peer assessment system. We began by developing code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values. These scenarios covered various review scenarios, including cases where reviewers provided extreme scores.&lt;br /&gt;
&lt;br /&gt;
It was established that the original reputation web server was implemented incorrectly.&lt;br /&gt;
&lt;br /&gt;
As a result, we proceeded to reimplement the Hamer algorithm in Ruby, incorporating adjustments to handle nil values appropriately. Subsequently, we validated the accuracy of the newly implemented algorithm using the same testing scenarios. While the results initially showed a skew towards lower values due to our treatment of nil values, we acknowledge the need for further refinement to handle these cases more effectively.&lt;br /&gt;
&lt;br /&gt;
In conclusion, this project highlights the importance of rigorous testing and implementation adjustments in ensuring the reliability of algorithms used in peer assessment systems. Moving forward, we recommend further refinements and validations to enhance the accuracy and robustness of the Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
==GitHub Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [https://github.com/expertiza/expertiza/pull/2778 here]&lt;br /&gt;
&lt;br /&gt;
Link to GitHub project page: [https://github.com/users/Prachit99/projects/1 here]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [https://drive.google.com/file/d/1gZ5iDgqMW3COOT9Uw-_yhJlwJLZDxz-s/view?usp=sharing here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154664</id>
		<title>CSC/ECE 517 Spring 2024 - E2412. Testing for hamer.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154664"/>
		<updated>2024-03-25T04:29:51Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 Program 3: First OSS project E2412. Testing for hamer.rb&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
&lt;br /&gt;
The practice of using student feedback on assignments as a grading tool is gaining traction among university professors and courses. This approach not only saves instructors and teaching assistants considerable time but also fosters a deeper understanding of assignments among students as they evaluate their peers' work. However, there is a concern that some students may not take their reviewing responsibilities seriously, potentially skewing the grading process by assigning extreme scores such as 100 or 0 arbitrarily. To address this issue, the Hamer algorithm was developed to assess the credibility and accuracy of reviewers. It generates reputation weights for each reviewer, which instructors can use to gauge their reliability or incorporate into grading calculations. Our goal here is to test this Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values.&lt;br /&gt;
* Verify the correctness of the reputation web server's Hamer values by accessing the URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.&lt;br /&gt;
* Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values.&lt;br /&gt;
* Validate the accuracy of the newly implemented Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Reimplemented algorithm: /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
* Test file: /spec/controllers/reputation_mock_web_server_hamer.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Hamer Algorithm ==&lt;br /&gt;
&lt;br /&gt;
The grading algorithm described in the paper is designed to provide a reward to reviewers who participate effectively by allocating a portion of the assignment mark to the review, with the review mark reflecting the quality of the grading. Here's an explanation of the algorithm:&lt;br /&gt;
&lt;br /&gt;
1. Review Allocation: Each reviewer is assigned a number of essays to grade. The paper suggests assigning at least five essays, with ten being ideal. Assuming each review takes 20 minutes, ten reviews can be completed in about three and a half hours.&lt;br /&gt;
&lt;br /&gt;
2. Grading Process:&lt;br /&gt;
* Once the reviewing is complete, grades are generated for each essay and weights are assigned to each reviewer.&lt;br /&gt;
* The essay grades are computed by averaging the individual grades from all the reviewers assigned to that essay.&lt;br /&gt;
* Initially, all reviewers are given equal weight in the averaging process.&lt;br /&gt;
* The algorithm assumes that some reviewers will perform better than others. It measures this by comparing the grades assigned by each reviewer to the averaged grades. The larger the difference between the assigned and averaged grades, the more out of step the reviewer is considered with the consensus view of the class.&lt;br /&gt;
* The algorithm adjusts the weighting of the reviewers based on this difference. Reviewers who are closer to the consensus view are given higher weights, while those who deviate significantly are given lower weights.&lt;br /&gt;
&lt;br /&gt;
3. Iterative Process:&lt;br /&gt;
* The calculation of grades and weights is an iterative process. Each time the grades are calculated, the weights need to be updated, and each change in the weights affects the grades.&lt;br /&gt;
* Convergence occurs quickly, typically requiring four to six iterations before a solution (a &amp;quot;fix-point&amp;quot;) is reached.&lt;br /&gt;
&lt;br /&gt;
4. Weight Adjustment:&lt;br /&gt;
* The weights assigned to reviewers are adjusted based on the difference between the assigned and averaged grades. Reviewers with larger discrepancies have their weights reduced in inverse proportion to this difference.&lt;br /&gt;
* To prevent excessively large weights, a logarithmic dampening function is applied, allowing weights to rise to twice the class average before further increases are awarded sparingly.&lt;br /&gt;
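The dampening rule can be expressed as a small standalone function. This is our own sketch of the rule stated above (the helper name dampen is ours, not part of Expertiza), with the class average normalised to 1 so the cap sits at 2:&lt;br /&gt;

```ruby
# Sketch of the logarithmic dampening described above: raw weights up to
# twice the (normalised) class average pass through unchanged, larger ones
# grow only logarithmically.
def dampen(weight_prime)
  if weight_prime <= 2
    weight_prime.round(2)
  else
    (2 + Math.log(weight_prime - 1)).round(2)
  end
end

dampen(1.5)   # below the cap, returned unchanged
dampen(3.0)   # dampened to 2 + ln(2)
dampen(10.0)  # dampened to 2 + ln(9)
```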
&lt;br /&gt;
5. Properties:&lt;br /&gt;
* The algorithm aims to identify and diminish the impact of &amp;quot;rogue&amp;quot; reviewers who may inject random or arbitrary grades into the peer assessment process.&lt;br /&gt;
* By adjusting reviewer weights based on their grading accuracy, the algorithm aims to improve the reliability of the grading process in the presence of such rogue reviewers.&lt;br /&gt;
&lt;br /&gt;
Overall, the algorithm seeks to balance the contributions of different reviewers based on the accuracy of their grading, ultimately aiming to produce reliable grades for each essay in a peer assessment scenario.&lt;br /&gt;
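The iterative process above can be sketched in Ruby as follows. This is a simplified illustration of the paper's description, not the Expertiza implementation: the method name fix_point and the fixed iteration count are our assumptions, and the dampening step is omitted for brevity.&lt;br /&gt;

```ruby
# Iterative fix-point sketch: essay grades are weighted averages of reviewer
# marks, and reviewer weights move inversely with each reviewer's squared
# deviation from those consensus grades. marks[r][e] is reviewer r's mark
# for essay e.
def fix_point(marks, iterations: 6)
  num_reviewers = marks.length
  num_essays = marks.first.length
  weights = Array.new(num_reviewers, 1.0)

  iterations.times do
    # Essay grades: weighted average of the marks under the current weights.
    grades = (0...num_essays).map do |e|
      (0...num_reviewers).sum { |r| weights[r] * marks[r][e] } / weights.sum
    end
    # Each reviewer's mean squared deviation from the consensus grades.
    deviations = marks.map do |row|
      row.each_with_index.sum { |m, e| (m - grades[e])**2 } / num_essays
    end
    # New weights are inversely proportional to deviation.
    avg_dev = deviations.sum / num_reviewers
    weights = deviations.map { |d| d.zero? ? 2.0 : avg_dev / d }
  end
  weights
end
```

With the dampening step from the weight-adjustment discussion applied on top, runaway weights for near-consensus reviewers would be capped logarithmically.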
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios ==&lt;br /&gt;
We assumed 10 reviewers, each reviewing up to 4 submissions, to cover the following test scenarios:&lt;br /&gt;
* 3 cases where reviewers give credible scores (passing1, passing2, passing3)&lt;br /&gt;
* a case where a reviewer gives the max score (10) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the min score (1) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the median score (5) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the same score (3) to all submissions (should be flagged)&lt;br /&gt;
* cases where reviewers leave some reviews incomplete (incomplete_review, max_incomplete, min_incomplete)&lt;br /&gt;
&lt;br /&gt;
=== Object Creation ===&lt;br /&gt;
Below is the input object for tests covering all of the above scenarios:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
INPUTS_new = {&lt;br /&gt;
    &amp;quot;submission1&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 9&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission2&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission3&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 7,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission4&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 6,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
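The reimplemented method consumes one array of marks per reviewer, while INPUTS_new above is keyed by submission. A sketch of the transposition, using a hypothetical two-submission excerpt (the hash contents and variable names are ours, for illustration only):&lt;br /&gt;

```ruby
require 'json'

# Hypothetical two-submission excerpt in the same shape as INPUTS_new
# (contents are illustrative, not the real test data).
inputs_json = {
  "submission1" => { "mintoall" => 1, "passing2" => 10 },
  "submission2" => { "mintoall" => 1, "passing2" => 2 }
}.to_json

by_submission = JSON.parse(inputs_json)
reviewer_names = by_submission.values.flat_map(&:keys).uniq

# One array per reviewer; a reviewer with no score for a submission gets nil,
# which matches the nil handling in the reimplementation.
reviews = reviewer_names.map do |name|
  by_submission.values.map { |scores| scores[name] }
end
```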
&lt;br /&gt;
=== Expected Hamer Values ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
EXPECTED = {&lt;br /&gt;
    &amp;quot;Hamer&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 2.65,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 2.41,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 1.03,&lt;br /&gt;
        &amp;quot;incomplete_review&amp;quot;: 2.31,&lt;br /&gt;
        &amp;quot;max_incomplete&amp;quot;: 2.57,&lt;br /&gt;
        &amp;quot;min_incomplete&amp;quot;: 2.48,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;:1.58,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 2.17,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 1.73,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 1.23,&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values. ==&lt;br /&gt;
&lt;br /&gt;
As established before, the values returned by the reputation server do not match the expected values. Hence, we concluded that the PeerLogic web service is implemented incorrectly. In this phase, we implemented the algorithm in Ruby as a method in the controller file /app/controllers/reputation_web_service_controller.rb.&lt;br /&gt;
&lt;br /&gt;
=== Changes made in implementation ===&lt;br /&gt;
* Coded the algorithm in Ruby in a controller.&lt;br /&gt;
* Added handling for nil values (missing reviews) in the input.&lt;br /&gt;
&lt;br /&gt;
=== Code Snippet ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Method: calculate_reputation_score&lt;br /&gt;
# This method calculates a reputation score for each reviewer based on the provided review data.&lt;br /&gt;
# It first computes each reviewer's average grade, skipping nil (missing) marks.&lt;br /&gt;
# Then, it calculates the delta R values (the mean squared deviation from those averages).&lt;br /&gt;
# Next, it calculates the weight prime values based on the delta R values.&lt;br /&gt;
# Finally, it applies logarithmic dampening to produce the reputation weight for each reviewer.&lt;br /&gt;
#&lt;br /&gt;
# Params&lt;br /&gt;
#   - reviews: an array of arrays, one array of marks per reviewer (nil marks a missing review)&lt;br /&gt;
#&lt;br /&gt;
# Returns&lt;br /&gt;
#   An array of reputation scores, one score per reviewer, indicating their reputation in the system.&lt;br /&gt;
def calculate_reputation_score(reviews)&lt;br /&gt;
&lt;br /&gt;
  # Initialize arrays to store intermediate values&lt;br /&gt;
  grades = []&lt;br /&gt;
  delta_r = []&lt;br /&gt;
  weight_prime = []&lt;br /&gt;
  weight = []&lt;br /&gt;
&lt;br /&gt;
  # Calculate Average Weighted Grades per Reviewer&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    assignment_grade_average = reviewer_marks_without_nil.sum.to_f / reviewer_marks_without_nil.length&lt;br /&gt;
    grades &amp;lt;&amp;lt; assignment_grade_average&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate delta R&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    reviewer_delta_r = 0&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    reviewer_marks_without_nil.each_with_index do |grade, student_index|&lt;br /&gt;
      reviewer_delta_r += (grade - grades[student_index]) ** 2&lt;br /&gt;
    end&lt;br /&gt;
    delta_r &amp;lt;&amp;lt; reviewer_delta_r / reviewer_marks_without_nil.length&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate weight prime&lt;br /&gt;
  average_delta_r = delta_r.sum / delta_r.length.to_f&lt;br /&gt;
&lt;br /&gt;
  delta_r.each do |reviewer_delta_r|&lt;br /&gt;
    weight_prime &amp;lt;&amp;lt; average_delta_r / reviewer_delta_r&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate reputation weight&lt;br /&gt;
  weight_prime.each do |reviewer_weight_prime|&lt;br /&gt;
    if reviewer_weight_prime &amp;lt;= 2&lt;br /&gt;
      weight &amp;lt;&amp;lt; reviewer_weight_prime.round(2)&lt;br /&gt;
    else&lt;br /&gt;
      weight &amp;lt;&amp;lt; (2 + Math.log(reviewer_weight_prime - 1)).round(2)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Return the reputation weights&lt;br /&gt;
  weight&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
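To experiment with the method outside Rails, it can be exercised as a plain function. The condensed version below restates the same logic for illustration, driven by a small hypothetical input of three reviewers and two submissions (not the page's test data):&lt;br /&gt;

```ruby
# Condensed restatement of calculate_reputation_score for quick
# experimentation outside the controller (illustrative only).
def calculate_reputation_score(reviews)
  # Per-reviewer average grade, skipping nil (missing) marks.
  grades = reviews.map { |marks| m = marks.compact; m.sum.to_f / m.length }
  # Mean squared deviation of each reviewer's marks from the averages.
  delta_r = reviews.map do |marks|
    m = marks.compact
    m.each_with_index.sum { |grade, i| (grade - grades[i])**2 } / m.length
  end
  average_delta_r = delta_r.sum / delta_r.length.to_f
  # Inverse-proportional raw weight, then logarithmic dampening.
  delta_r.map do |d|
    weight_prime = average_delta_r / d
    weight_prime <= 2 ? weight_prime.round(2) : (2 + Math.log(weight_prime - 1)).round(2)
  end
end

# Two agreeing extreme reviewers and one closer to the middle: the reviewer
# with the smallest deviation ends up with the largest weight.
calculate_reputation_score([[10, 10], [2, 2], [6, 6]])  # => [0.83, 0.83, 1.67]
```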
&lt;br /&gt;
== Objective 3: Validate the accuracy of the newly implemented Hamer algorithm ==&lt;br /&gt;
&lt;br /&gt;
We test the newly implemented Hamer algorithm with our scenarios and verify that its outputs match the expected values.&lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe ReputationWebServiceController do&lt;br /&gt;
    it &amp;quot;calculates the correct Hamer reputation weights&amp;quot; do&lt;br /&gt;
      weights = ReputationWebServiceController.new.calculate_reputation_score(reviews)&lt;br /&gt;
      keys = [&amp;quot;maxtoall&amp;quot;, &amp;quot;mintoall&amp;quot;, &amp;quot;mediantoall&amp;quot;, &amp;quot;incomplete_review&amp;quot;, &amp;quot;max_incomplete&amp;quot;, &amp;quot;min_incomplete&amp;quot;, &amp;quot;sametoall&amp;quot;, &amp;quot;passing1&amp;quot;, &amp;quot;passing2&amp;quot;, &amp;quot;passing3&amp;quot;]&lt;br /&gt;
      rounded_weights = weights.map { |w| w.round(2) }&lt;br /&gt;
      result_hash = keys.zip(rounded_weights).to_h&lt;br /&gt;
      expect(result_hash).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results ===&lt;br /&gt;
&lt;br /&gt;
[[File:Results hamer new.jpeg]]&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
The results are skewed towards lower values because we included nil values and treated them as zeros. Substituting the median or a random value for missing scores would reduce this skew, though how incomplete reviews with nil values in the input dataset are handled still needs careful consideration.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this project, we aimed to test the accuracy of the Hamer algorithm used for assessing the credibility of reviewers in a peer assessment system. We began by developing code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values. These scenarios covered various review scenarios, including cases where reviewers provided extreme scores.&lt;br /&gt;
&lt;br /&gt;
It was established that the original reputation web server was implemented incorrectly.&lt;br /&gt;
&lt;br /&gt;
As a result, we proceeded to reimplement the Hamer algorithm in Ruby, incorporating adjustments to handle nil values appropriately. Subsequently, we validated the accuracy of the newly implemented algorithm using the same testing scenarios. While the results initially showed a skew towards lower values due to our treatment of nil values, we acknowledge the need for further refinement to handle these cases more effectively.&lt;br /&gt;
&lt;br /&gt;
In conclusion, this project highlights the importance of rigorous testing and implementation adjustments in ensuring the reliability of algorithms used in peer assessment systems. Moving forward, we recommend further refinements and validations to enhance the accuracy and robustness of the Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
==GitHub Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [https://github.com/expertiza/expertiza/pull/2778 here]&lt;br /&gt;
&lt;br /&gt;
Link to GitHub project page: [https://github.com/users/Prachit99/projects/1 here]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [https://drive.google.com/file/d/1gZ5iDgqMW3COOT9Uw-_yhJlwJLZDxz-s/view?usp=sharing here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154663</id>
		<title>CSC/ECE 517 Spring 2024 - E2412. Testing for hamer.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154663"/>
		<updated>2024-03-25T04:28:05Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Objective 2: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values. */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 Program 3: First OSS project E2412. Testing for hamer.rb&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
&lt;br /&gt;
The practice of using student feedback on assignments as a grading tool is gaining traction among university professors and courses. This approach not only saves instructors and teaching assistants considerable time but also fosters a deeper understanding of assignments among students as they evaluate their peers' work. However, there is a concern that some students may not take their reviewing responsibilities seriously, potentially skewing the grading process by assigning extreme scores such as 100 or 0 arbitrarily. To address this issue, the Hamer algorithm was developed to assess the credibility and accuracy of reviewers. It generates reputation weights for each reviewer, which instructors can use to gauge their reliability or incorporate into grading calculations. Our goal here is to test this Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values.&lt;br /&gt;
* Verify the correctness of the reputation web server's Hamer values by accessing the URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.&lt;br /&gt;
* Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values.&lt;br /&gt;
* Validate the accuracy of the newly implemented Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Reimplemented algorithm: /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
* Test file: /spec/controllers/reputation_mock_web_server_hamer.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Hamer Algorithm ==&lt;br /&gt;
&lt;br /&gt;
The grading algorithm described in the paper is designed to provide a reward to reviewers who participate effectively by allocating a portion of the assignment mark to the review, with the review mark reflecting the quality of the grading. Here's an explanation of the algorithm:&lt;br /&gt;
&lt;br /&gt;
1. Review Allocation: Each reviewer is assigned a number of essays to grade. The paper suggests assigning at least five essays, with ten being ideal. Assuming each review takes 20 minutes, ten reviews can be completed in about three and a half hours.&lt;br /&gt;
&lt;br /&gt;
2. Grading Process:&lt;br /&gt;
* Once the reviewing is complete, grades are generated for each essay and weights are assigned to each reviewer.&lt;br /&gt;
* The essay grades are computed by averaging the individual grades from all the reviewers assigned to that essay.&lt;br /&gt;
* Initially, all reviewers are given equal weight in the averaging process.&lt;br /&gt;
* The algorithm assumes that some reviewers will perform better than others. It measures this by comparing the grades assigned by each reviewer to the averaged grades. The larger the difference between the assigned and averaged grades, the more out of step the reviewer is considered with the consensus view of the class.&lt;br /&gt;
* The algorithm adjusts the weighting of the reviewers based on this difference. Reviewers who are closer to the consensus view are given higher weights, while those who deviate significantly are given lower weights.&lt;br /&gt;
&lt;br /&gt;
3. Iterative Process:&lt;br /&gt;
* The calculation of grades and weights is an iterative process. Each time the grades are calculated, the weights need to be updated, and each change in the weights affects the grades.&lt;br /&gt;
* Convergence occurs quickly, typically requiring four to six iterations before a solution (a &amp;quot;fix-point&amp;quot;) is reached.&lt;br /&gt;
&lt;br /&gt;
4. Weight Adjustment:&lt;br /&gt;
* The weights assigned to reviewers are adjusted based on the difference between the assigned and averaged grades. Reviewers with larger discrepancies have their weights reduced in inverse proportion to this difference.&lt;br /&gt;
* To prevent excessively large weights, a logarithmic dampening function is applied, allowing weights to rise to twice the class average before further increases are awarded sparingly.&lt;br /&gt;
&lt;br /&gt;
5. Properties:&lt;br /&gt;
* The algorithm aims to identify and diminish the impact of &amp;quot;rogue&amp;quot; reviewers who may inject random or arbitrary grades into the peer assessment process.&lt;br /&gt;
* By adjusting reviewer weights based on their grading accuracy, the algorithm aims to improve the reliability of the grading process in the presence of such rogue reviewers.&lt;br /&gt;
&lt;br /&gt;
Overall, the algorithm seeks to balance the contributions of different reviewers based on the accuracy of their grading, ultimately aiming to produce reliable grades for each essay in a peer assessment scenario.&lt;br /&gt;
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios ==&lt;br /&gt;
We assumed 10 reviewers, each reviewing up to 4 submissions, to cover the following test scenarios:&lt;br /&gt;
* 3 cases where reviewers give credible scores (passing1, passing2, passing3)&lt;br /&gt;
* a case where a reviewer gives the max score (10) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the min score (1) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the median score (5) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the same score (3) to all submissions (should be flagged)&lt;br /&gt;
* cases where reviewers leave some reviews incomplete (incomplete_review, max_incomplete, min_incomplete)&lt;br /&gt;
&lt;br /&gt;
=== Object Creation ===&lt;br /&gt;
Below is the input object for tests covering all of the above scenarios:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
INPUTS_new = {&lt;br /&gt;
    &amp;quot;submission1&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 9&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission2&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission3&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 7,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission4&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 6,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
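The reimplemented method consumes one array of marks per reviewer, while INPUTS_new above is keyed by submission. A sketch of the transposition, using a hypothetical two-submission excerpt (the hash contents and variable names are ours, for illustration only):&lt;br /&gt;

```ruby
require 'json'

# Hypothetical two-submission excerpt in the same shape as INPUTS_new
# (contents are illustrative, not the real test data).
inputs_json = {
  "submission1" => { "mediantoall" => 5, "passing3" => 9 },
  "submission2" => { "mediantoall" => 5, "passing3" => 4 }
}.to_json

by_submission = JSON.parse(inputs_json)
reviewer_names = by_submission.values.flat_map(&:keys).uniq

# One array per reviewer; a reviewer with no score for a submission gets nil,
# which matches the nil handling in the reimplementation.
reviews = reviewer_names.map do |name|
  by_submission.values.map { |scores| scores[name] }
end
```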
&lt;br /&gt;
=== Expected Hamer Values ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
EXPECTED = {&lt;br /&gt;
    &amp;quot;Hamer&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 2.65,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 2.41,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 1.03,&lt;br /&gt;
        &amp;quot;incomplete_review&amp;quot;: 2.31,&lt;br /&gt;
        &amp;quot;max_incomplete&amp;quot;: 2.57,&lt;br /&gt;
        &amp;quot;min_incomplete&amp;quot;: 2.48,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;:1.58,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 2.17,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 1.73,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 1.23,&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values. ==&lt;br /&gt;
&lt;br /&gt;
As established before, the values returned by the reputation server do not match the expected values. Hence, we concluded that the PeerLogic web service is implemented incorrectly. In this phase, we implemented the algorithm in Ruby as a method in the controller file /app/controllers/reputation_web_service_controller.rb.&lt;br /&gt;
&lt;br /&gt;
=== Changes made in implementation ===&lt;br /&gt;
* Coded the algorithm in Ruby in a controller.&lt;br /&gt;
* Added handling for nil values (missing reviews) in the input.&lt;br /&gt;
&lt;br /&gt;
=== Code Snippet ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Method: calculate_reputation_score&lt;br /&gt;
# This method calculates a reputation score for each reviewer based on the provided review data.&lt;br /&gt;
# It first computes each reviewer's average grade, skipping nil (missing) marks.&lt;br /&gt;
# Then, it calculates the delta R values (the mean squared deviation from those averages).&lt;br /&gt;
# Next, it calculates the weight prime values based on the delta R values.&lt;br /&gt;
# Finally, it applies logarithmic dampening to produce the reputation weight for each reviewer.&lt;br /&gt;
#&lt;br /&gt;
# Params&lt;br /&gt;
#   - reviews: an array of arrays, one array of marks per reviewer (nil marks a missing review)&lt;br /&gt;
#&lt;br /&gt;
# Returns&lt;br /&gt;
#   An array of reputation scores, one score per reviewer, indicating their reputation in the system.&lt;br /&gt;
def calculate_reputation_score(reviews)&lt;br /&gt;
  # Parse the input JSON string&lt;br /&gt;
  # reviews = JSON.parse(input_json)&lt;br /&gt;
&lt;br /&gt;
  # Initialize arrays to store intermediate values&lt;br /&gt;
  grades = []&lt;br /&gt;
  delta_r = []&lt;br /&gt;
  weight_prime = []&lt;br /&gt;
  weight = []&lt;br /&gt;
&lt;br /&gt;
  # Calculate Average Weighted Grades per Reviewer&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    assignment_grade_average = reviewer_marks_without_nil.sum.to_f / reviewer_marks_without_nil.length&lt;br /&gt;
    grades &amp;lt;&amp;lt; assignment_grade_average&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate delta R&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    reviewer_delta_r = 0.0&lt;br /&gt;
    completed = 0&lt;br /&gt;
    reviewer_marks.each_with_index do |grade, student_index|&lt;br /&gt;
      # Skip nil (missing) scores without shifting the index alignment&lt;br /&gt;
      next if grade.nil?&lt;br /&gt;
      reviewer_delta_r += (grade - grades[student_index]) ** 2&lt;br /&gt;
      completed += 1&lt;br /&gt;
    end&lt;br /&gt;
    delta_r &amp;lt;&amp;lt; reviewer_delta_r / completed&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate weight prime&lt;br /&gt;
  average_delta_r = delta_r.sum / delta_r.length.to_f&lt;br /&gt;
&lt;br /&gt;
  delta_r.each do |reviewer_delta_r|&lt;br /&gt;
    weight_prime &amp;lt;&amp;lt; average_delta_r / reviewer_delta_r&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate reputation weight&lt;br /&gt;
  weight_prime.each do |reviewer_weight_prime|&lt;br /&gt;
    if reviewer_weight_prime &amp;lt;= 2&lt;br /&gt;
      weight &amp;lt;&amp;lt; reviewer_weight_prime.round(2)&lt;br /&gt;
    else&lt;br /&gt;
      weight &amp;lt;&amp;lt; (2 + Math.log(reviewer_weight_prime - 1)).round(2)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Return the reputation weights&lt;br /&gt;
  weight&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Objective 3: Validate the accuracy of the newly implemented Hamer algorithm ==&lt;br /&gt;
&lt;br /&gt;
We test the newly implemented Hamer algorithm with our scenarios and verify that its outputs match the expected values.&lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe ReputationWebServiceController do&lt;br /&gt;
    it &amp;quot;calculates the correct Hamer reputation weights&amp;quot; do&lt;br /&gt;
      weights = ReputationWebServiceController.new.calculate_reputation_score(reviews)&lt;br /&gt;
      keys = [&amp;quot;maxtoall&amp;quot;, &amp;quot;mintoall&amp;quot;, &amp;quot;mediantoall&amp;quot;, &amp;quot;incomplete_review&amp;quot;, &amp;quot;max_incomplete&amp;quot;, &amp;quot;min_incomplete&amp;quot;, &amp;quot;sametoall&amp;quot;, &amp;quot;passing1&amp;quot;, &amp;quot;passing2&amp;quot;, &amp;quot;passing3&amp;quot;]&lt;br /&gt;
      # Round to two decimals so the comparison matches the precision of EXPECTED&lt;br /&gt;
      rounded_weights = weights.map { |w| w.round(2) }&lt;br /&gt;
      result_hash = keys.zip(rounded_weights).to_h&lt;br /&gt;
      expect(result_hash).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results===&lt;br /&gt;
&lt;br /&gt;
[[File:Results hamer new.jpeg]]&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
As we can see, the results are skewed towards lower values. This is because we included nil (missing) scores and assumed them to be zero. Substituting the median of the observed scores, or a random plausible score, for missing values would reduce this skew.&lt;br /&gt;
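One possible refinement is to impute each missing (nil) score with the median of the reviewer's non-nil scores before averaging, rather than dropping the nils or treating them as zero. A minimal sketch of this idea (the helper name is our own and not part of Expertiza):&lt;br /&gt;

```ruby
# Hypothetical helper (not part of Expertiza): replace each nil score
# with the median of the reviewer's non-nil scores before averaging,
# instead of dropping the nils or treating them as zero.
def impute_with_median(reviewer_marks)
  present = reviewer_marks.compact.sort
  return reviewer_marks if present.empty?
  mid = present.length / 2
  # Median of the observed scores (average of the two middle values
  # when the count is even)
  median = present.length.odd? ? present[mid] : (present[mid - 1] + present[mid]) / 2.0
  reviewer_marks.map { |m| m.nil? ? median : m }
end

impute_with_median([10, nil, 8, 6])  # => [10, 8, 8, 6]
```
&lt;br /&gt;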
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this project, we aimed to test the accuracy of the Hamer algorithm used for assessing the credibility of reviewers in a peer assessment system. We began by developing code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values. These scenarios covered various review scenarios, including cases where reviewers provided extreme scores.&lt;br /&gt;
&lt;br /&gt;
Upon testing the original reputation web server's Hamer values, we found discrepancies between the expected values and the values returned by the web service. This led us to conclude that the web service implementation was incorrect.&lt;br /&gt;
&lt;br /&gt;
As a result, we proceeded to reimplement the Hamer algorithm in Ruby, incorporating adjustments to handle nil values appropriately. Subsequently, we validated the accuracy of the newly implemented algorithm using the same testing scenarios. While the results initially showed a skew towards lower values due to our treatment of nil values, we acknowledge the need for further refinement to handle these cases more effectively.&lt;br /&gt;
&lt;br /&gt;
In conclusion, this project highlights the importance of rigorous testing and implementation adjustments in ensuring the reliability of algorithms used in peer assessment systems. Moving forward, we recommend further refinements and validations to enhance the accuracy and robustness of the Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
==GitHub Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [https://github.com/expertiza/expertiza/pull/2778 here]&lt;br /&gt;
&lt;br /&gt;
Link to Github Project page: [https://github.com/users/Prachit99/projects/1 here]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [https://drive.google.com/file/d/1gZ5iDgqMW3COOT9Uw-_yhJlwJLZDxz-s/view?usp=sharing here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154662</id>
		<title>CSC/ECE 517 Spring 2024 - E2412. Testing for hamer.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154662"/>
		<updated>2024-03-25T04:27:39Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Code Snippet */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 Program 3: First OSS project E2412. Testing for hamer.rb&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
&lt;br /&gt;
The practice of using student feedback on assignments as a grading tool is gaining traction among university professors and courses. This approach not only saves instructors and teaching assistants considerable time but also fosters a deeper understanding of assignments among students as they evaluate their peers' work. However, there is a concern that some students may not take their reviewing responsibilities seriously, potentially skewing the grading process by assigning extreme scores such as 100 or 0 arbitrarily. To address this issue, the Hamer algorithm was developed to assess the credibility and accuracy of reviewers. It generates reputation weights for each reviewer, which instructors can use to gauge their reliability or incorporate into grading calculations. Our goal here is to test this Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values.&lt;br /&gt;
* Verify the correctness of the reputation web server's Hamer values by accessing the URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.&lt;br /&gt;
* Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values.&lt;br /&gt;
* Validate the accuracy of the newly implemented Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* reimplemented algorithm: /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
* test file: /spec/controllers/reputation_mock_web_server_hamer.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Hamer Algorithm ==&lt;br /&gt;
&lt;br /&gt;
The grading algorithm described in the paper is designed to provide a reward to reviewers who participate effectively by allocating a portion of the assignment mark to the review, with the review mark reflecting the quality of the grading. Here's an explanation of the algorithm:&lt;br /&gt;
&lt;br /&gt;
1. Review Allocation: Each reviewer is assigned a number of essays to grade. The paper suggests assigning at least five essays, with ten being ideal. Assuming each review takes 20 minutes, ten reviews can be completed in about three and a half hours.&lt;br /&gt;
&lt;br /&gt;
2. Grading Process:&lt;br /&gt;
* Once the reviewing is complete, grades are generated for each essay and weights are assigned to each reviewer.&lt;br /&gt;
* The essay grades are computed by averaging the individual grades from all the reviewers assigned to that essay.&lt;br /&gt;
* Initially, all reviewers are given equal weight in the averaging process.&lt;br /&gt;
* The algorithm assumes that some reviewers will perform better than others. It measures this by comparing the grades assigned by each reviewer to the averaged grades. The larger the difference between the assigned and averaged grades, the more out of step the reviewer is considered with the consensus view of the class.&lt;br /&gt;
* The algorithm adjusts the weighting of the reviewers based on this difference. Reviewers who are closer to the consensus view are given higher weights, while those who deviate significantly are given lower weights.&lt;br /&gt;
&lt;br /&gt;
3. Iterative Process:&lt;br /&gt;
* The calculation of grades and weights is an iterative process. Each time the grades are calculated, the weights need to be updated, and each change in the weights affects the grades.&lt;br /&gt;
* Convergence occurs quickly, typically requiring four to six iterations before a solution (a &amp;quot;fix-point&amp;quot;) is reached.&lt;br /&gt;
&lt;br /&gt;
4. Weight Adjustment:&lt;br /&gt;
* The weights assigned to reviewers are adjusted based on the difference between the assigned and averaged grades. Reviewers with larger discrepancies have their weights adjusted inversely proportional to this difference.&lt;br /&gt;
* To prevent excessively large weights, a logarithmic dampening function is applied, allowing weights to rise to twice the class average before further increases are awarded sparingly.&lt;br /&gt;
&lt;br /&gt;
5. Properties:&lt;br /&gt;
* The algorithm aims to identify and diminish the impact of &amp;quot;rogue&amp;quot; reviewers who may inject random or arbitrary grades into the peer assessment process.&lt;br /&gt;
* By adjusting reviewer weights based on their grading accuracy, the algorithm aims to improve the reliability of the grading process in the presence of such rogue reviewers.&lt;br /&gt;
&lt;br /&gt;
Overall, the algorithm seeks to balance the contributions of different reviewers based on the accuracy of their grading, ultimately aiming to produce reliable grades for each essay in a peer assessment scenario.&lt;br /&gt;
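The iterative grade/weight fixed point described above can be sketched as follows. This is a simplified illustration under our own assumptions (every reviewer grades every submission, five iterations, and the dampening threshold of 2 from step 4), not the paper's exact formulation:&lt;br /&gt;

```ruby
# Simplified sketch of the iterative grade/weight fixed point.
# marks[r][s] is reviewer r's grade for submission s.
def iterate_grades_and_weights(marks, rounds = 5)
  reviewers = marks.length
  submissions = marks.first.length
  weights = Array.new(reviewers, 1.0)
  grades = []

  rounds.times do
    # Weighted average grade per submission, using current reviewer weights
    grades = (0...submissions).map do |s|
      (0...reviewers).sum { |r| weights[r] * marks[r][s] } / weights.sum
    end
    # Mean squared deviation of each reviewer from the consensus grades
    deviations = (0...reviewers).map do |r|
      (0...submissions).sum { |s| (marks[r][s] - grades[s]) ** 2 } / submissions.to_f
    end
    average_deviation = deviations.sum / reviewers
    # Weights inversely proportional to deviation, log-dampened above 2
    weights = deviations.map do |d|
      raw = d.zero? ? 2.0 : average_deviation / d
      raw > 2 ? 2 + Math.log(raw - 1) : raw
    end
  end
  [grades, weights]
end
```

Per the paper, convergence typically takes four to six iterations, so five rounds is used here; a rogue reviewer who grades far from the consensus ends up with a weight well below 1.&lt;br /&gt;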
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios ==&lt;br /&gt;
We assumed 9 reviewers, each reviewing up to 4 submissions, to cover the following test scenarios:&lt;br /&gt;
* 3 cases where reviewers give credible scores (passing1, passing2, passing3)&lt;br /&gt;
* a case where a reviewer gives the maximum score (10) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the minimum score (1) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the median score (5) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the same score to all submissions (should be flagged)&lt;br /&gt;
&lt;br /&gt;
=== Object Creation ===&lt;br /&gt;
Below is the Input object for tests that cover all the above scenarios:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
INPUTS_new = {&lt;br /&gt;
    &amp;quot;submission1&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 9&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission2&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission3&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 7,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission4&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 6,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
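Since the input above is keyed by submission while calculate_reputation_score expects one array of marks per reviewer, the hash has to be reshaped. A minimal sketch of that conversion (the helper name is our own, not part of Expertiza):&lt;br /&gt;

```ruby
require 'json'

# Hypothetical helper (our own, not part of Expertiza): convert the
# submission-keyed INPUTS hash into one array of marks per reviewer,
# with nil where a reviewer did not review a submission.
def reviews_from_inputs(inputs_json)
  submissions = JSON.parse(inputs_json).values
  reviewer_names = submissions.flat_map { |scores| scores.keys }.uniq
  reviewer_names.map { |name| submissions.map { |scores| scores[name] } }
end

sample = '{"submission1": {"passing1": 10, "mintoall": 1}, "submission2": {"passing1": 3}}'
reviews_from_inputs(sample)  # => [[10, 3], [1, nil]]
```
&lt;br /&gt;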
&lt;br /&gt;
=== Expected Hamer Values ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
EXPECTED = {&lt;br /&gt;
    &amp;quot;Hamer&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 2.65,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 2.41,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 1.03,&lt;br /&gt;
        &amp;quot;incomplete_review&amp;quot;: 2.31,&lt;br /&gt;
        &amp;quot;max_incomplete&amp;quot;: 2.57,&lt;br /&gt;
        &amp;quot;min_incomplete&amp;quot;: 2.48,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;:1.58,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 2.17,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 1.73,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 1.23,&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values. ==&lt;br /&gt;
&lt;br /&gt;
As seen above, the values returned by the reputation web service do not match the expected values, so we concluded that the PeerLogic web service is implemented incorrectly. In this phase, we reimplemented the algorithm in Ruby as a method in the controller file /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
&lt;br /&gt;
=== Changes made in implementation ===&lt;br /&gt;
* Coded the algorithm in Ruby as a controller method.&lt;br /&gt;
* Added handling for nil (missing) scores.&lt;br /&gt;
&lt;br /&gt;
=== Code Snippet ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Method: calculate_reputation_score&lt;br /&gt;
# This method calculates the reputation scores for each reviewer based on the provided review data.&lt;br /&gt;
# It first calculates the average grade per reviewer, then the delta R values.&lt;br /&gt;
# Next, it calculates the weight prime values based on the delta R values.&lt;br /&gt;
# Finally, it calculates the dampened reputation weights for each reviewer.&lt;br /&gt;
#&lt;br /&gt;
# Params&lt;br /&gt;
#   - reviews: an array with one entry per reviewer, each entry an array of scores (nil where a review is missing)&lt;br /&gt;
#&lt;br /&gt;
# Returns&lt;br /&gt;
#   An array of reputation scores, one score per reviewer, indicating their reputation in the system.&lt;br /&gt;
def calculate_reputation_score(reviews)&lt;br /&gt;
&lt;br /&gt;
  # Initialize arrays to store intermediate values&lt;br /&gt;
  grades = []&lt;br /&gt;
  delta_r = []&lt;br /&gt;
  weight_prime = []&lt;br /&gt;
  weight = []&lt;br /&gt;
&lt;br /&gt;
  # Calculate Average Weighted Grades per Reviewer&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    assignment_grade_average = reviewer_marks_without_nil.sum.to_f / reviewer_marks_without_nil.length&lt;br /&gt;
    grades &amp;lt;&amp;lt; assignment_grade_average&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate delta R&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    reviewer_delta_r = 0.0&lt;br /&gt;
    completed = 0&lt;br /&gt;
    reviewer_marks.each_with_index do |grade, student_index|&lt;br /&gt;
      # Skip nil (missing) scores without shifting the index alignment&lt;br /&gt;
      next if grade.nil?&lt;br /&gt;
      reviewer_delta_r += (grade - grades[student_index]) ** 2&lt;br /&gt;
      completed += 1&lt;br /&gt;
    end&lt;br /&gt;
    delta_r &amp;lt;&amp;lt; reviewer_delta_r / completed&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate weight prime&lt;br /&gt;
  average_delta_r = delta_r.sum / delta_r.length.to_f&lt;br /&gt;
&lt;br /&gt;
  delta_r.each do |reviewer_delta_r|&lt;br /&gt;
    weight_prime &amp;lt;&amp;lt; average_delta_r / reviewer_delta_r&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate reputation weight&lt;br /&gt;
  weight_prime.each do |reviewer_weight_prime|&lt;br /&gt;
    if reviewer_weight_prime &amp;lt;= 2&lt;br /&gt;
      weight &amp;lt;&amp;lt; reviewer_weight_prime.round(2)&lt;br /&gt;
    else&lt;br /&gt;
      weight &amp;lt;&amp;lt; (2 + Math.log(reviewer_weight_prime - 1)).round(2)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Return the reputation weights&lt;br /&gt;
  weight&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Objective 3: Validate the accuracy of the newly implemented Hamer algorithm ==&lt;br /&gt;
&lt;br /&gt;
We test the newly implemented Hamer algorithm with our scenarios and verify that its outputs match the expected values.&lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe ReputationWebServiceController do&lt;br /&gt;
    it &amp;quot;calculates the correct Hamer reputation weights&amp;quot; do&lt;br /&gt;
      weights = ReputationWebServiceController.new.calculate_reputation_score(reviews)&lt;br /&gt;
      keys = [&amp;quot;maxtoall&amp;quot;, &amp;quot;mintoall&amp;quot;, &amp;quot;mediantoall&amp;quot;, &amp;quot;incomplete_review&amp;quot;, &amp;quot;max_incomplete&amp;quot;, &amp;quot;min_incomplete&amp;quot;, &amp;quot;sametoall&amp;quot;, &amp;quot;passing1&amp;quot;, &amp;quot;passing2&amp;quot;, &amp;quot;passing3&amp;quot;]&lt;br /&gt;
      # Round to two decimals so the comparison matches the precision of EXPECTED&lt;br /&gt;
      rounded_weights = weights.map { |w| w.round(2) }&lt;br /&gt;
      result_hash = keys.zip(rounded_weights).to_h&lt;br /&gt;
      expect(result_hash).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results===&lt;br /&gt;
&lt;br /&gt;
[[File:Results hamer new.jpeg]]&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
As we can see, the results are skewed towards lower values. This is because we included nil (missing) scores and assumed them to be zero. Substituting the median of the observed scores, or a random plausible score, for missing values would reduce this skew.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this project, we aimed to test the accuracy of the Hamer algorithm used for assessing the credibility of reviewers in a peer assessment system. We began by developing code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values. These scenarios covered various review scenarios, including cases where reviewers provided extreme scores.&lt;br /&gt;
&lt;br /&gt;
Upon testing the original reputation web server's Hamer values, we found discrepancies between the expected values and the values returned by the web service. This led us to conclude that the web service implementation was incorrect.&lt;br /&gt;
&lt;br /&gt;
As a result, we proceeded to reimplement the Hamer algorithm in Ruby, incorporating adjustments to handle nil values appropriately. Subsequently, we validated the accuracy of the newly implemented algorithm using the same testing scenarios. While the results initially showed a skew towards lower values due to our treatment of nil values, we acknowledge the need for further refinement to handle these cases more effectively.&lt;br /&gt;
&lt;br /&gt;
In conclusion, this project highlights the importance of rigorous testing and implementation adjustments in ensuring the reliability of algorithms used in peer assessment systems. Moving forward, we recommend further refinements and validations to enhance the accuracy and robustness of the Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
==GitHub Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [https://github.com/expertiza/expertiza/pull/2778 here]&lt;br /&gt;
&lt;br /&gt;
Link to Github Project page: [https://github.com/users/Prachit99/projects/1 here]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [https://drive.google.com/file/d/1gZ5iDgqMW3COOT9Uw-_yhJlwJLZDxz-s/view?usp=sharing here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154661</id>
		<title>CSC/ECE 517 Spring 2024 - E2412. Testing for hamer.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154661"/>
		<updated>2024-03-25T04:26:12Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Objective 4: Validate the accuracy of the newly implemented Hamer algorithm */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 Program 3: First OSS project E2412. Testing for hamer.rb&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
&lt;br /&gt;
The practice of using student feedback on assignments as a grading tool is gaining traction among university professors and courses. This approach not only saves instructors and teaching assistants considerable time but also fosters a deeper understanding of assignments among students as they evaluate their peers' work. However, there is a concern that some students may not take their reviewing responsibilities seriously, potentially skewing the grading process by assigning extreme scores such as 100 or 0 arbitrarily. To address this issue, the Hamer algorithm was developed to assess the credibility and accuracy of reviewers. It generates reputation weights for each reviewer, which instructors can use to gauge their reliability or incorporate into grading calculations. Our goal here is to test this Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values.&lt;br /&gt;
* Verify the correctness of the reputation web server's Hamer values by accessing the URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.&lt;br /&gt;
* Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values.&lt;br /&gt;
* Validate the accuracy of the newly implemented Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* reimplemented algorithm: /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
* test file: /spec/controllers/reputation_mock_web_server_hamer.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Hamer Algorithm ==&lt;br /&gt;
&lt;br /&gt;
The grading algorithm described in the paper is designed to provide a reward to reviewers who participate effectively by allocating a portion of the assignment mark to the review, with the review mark reflecting the quality of the grading. Here's an explanation of the algorithm:&lt;br /&gt;
&lt;br /&gt;
1. Review Allocation: Each reviewer is assigned a number of essays to grade. The paper suggests assigning at least five essays, with ten being ideal. Assuming each review takes 20 minutes, ten reviews can be completed in about three and a half hours.&lt;br /&gt;
&lt;br /&gt;
2. Grading Process:&lt;br /&gt;
* Once the reviewing is complete, grades are generated for each essay and weights are assigned to each reviewer.&lt;br /&gt;
* The essay grades are computed by averaging the individual grades from all the reviewers assigned to that essay.&lt;br /&gt;
* Initially, all reviewers are given equal weight in the averaging process.&lt;br /&gt;
* The algorithm assumes that some reviewers will perform better than others. It measures this by comparing the grades assigned by each reviewer to the averaged grades. The larger the difference between the assigned and averaged grades, the more out of step the reviewer is considered with the consensus view of the class.&lt;br /&gt;
* The algorithm adjusts the weighting of the reviewers based on this difference. Reviewers who are closer to the consensus view are given higher weights, while those who deviate significantly are given lower weights.&lt;br /&gt;
&lt;br /&gt;
3. Iterative Process:&lt;br /&gt;
* The calculation of grades and weights is an iterative process. Each time the grades are calculated, the weights need to be updated, and each change in the weights affects the grades.&lt;br /&gt;
* Convergence occurs quickly, typically requiring four to six iterations before a solution (a &amp;quot;fix-point&amp;quot;) is reached.&lt;br /&gt;
&lt;br /&gt;
4. Weight Adjustment:&lt;br /&gt;
* The weights assigned to reviewers are adjusted based on the difference between the assigned and averaged grades. Reviewers with larger discrepancies have their weights adjusted inversely proportional to this difference.&lt;br /&gt;
* To prevent excessively large weights, a logarithmic dampening function is applied, allowing weights to rise to twice the class average before further increases are awarded sparingly.&lt;br /&gt;
&lt;br /&gt;
5. Properties:&lt;br /&gt;
* The algorithm aims to identify and diminish the impact of &amp;quot;rogue&amp;quot; reviewers who may inject random or arbitrary grades into the peer assessment process.&lt;br /&gt;
* By adjusting reviewer weights based on their grading accuracy, the algorithm aims to improve the reliability of the grading process in the presence of such rogue reviewers.&lt;br /&gt;
&lt;br /&gt;
Overall, the algorithm seeks to balance the contributions of different reviewers based on the accuracy of their grading, ultimately aiming to produce reliable grades for each essay in a peer assessment scenario.&lt;br /&gt;
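The logarithmic dampening of step 4 can be written compactly; a minimal sketch mirroring the threshold of 2 described above (the function name is our own):&lt;br /&gt;

```ruby
# Log-dampen a raw reputation weight: values up to 2 pass through,
# larger values grow only logarithmically beyond 2.
def dampen(weight_prime)
  weight_prime > 2 ? 2 + Math.log(weight_prime - 1) : weight_prime
end

dampen(1.5)  # => 1.5
```

A raw weight of 3 is dampened to 2 plus the natural log of 2, so weights can rise to twice the class average before further increases are awarded sparingly.&lt;br /&gt;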
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios ==&lt;br /&gt;
We assumed 9 reviewers, each reviewing up to 4 submissions, to cover the following test scenarios:&lt;br /&gt;
* 3 cases where reviewers give credible scores (passing1, passing2, passing3)&lt;br /&gt;
* a case where a reviewer gives the maximum score (10) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the minimum score (1) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the median score (5) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the same score to all submissions (should be flagged)&lt;br /&gt;
&lt;br /&gt;
=== Object Creation ===&lt;br /&gt;
Below is the Input object for tests that cover all the above scenarios:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
INPUTS_new = {&lt;br /&gt;
    &amp;quot;submission1&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 9&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission2&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission3&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 7,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission4&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 6,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
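Note that INPUTS_new is keyed by submission and then by reviewer, while the reimplemented calculate_reputation_score (Objective 2) consumes one score array per reviewer. A hedged helper for that pivot (the method name is our own, not Expertiza code; nil marks a submission the reviewer did not score) might look like:&lt;br /&gt;

```ruby
require 'json'

# Hedged helper (not part of the controller): pivots submission-keyed
# input JSON into one array of scores per reviewer, inserting nil where
# a reviewer did not score a submission (the incomplete_review cases).
def reviews_by_reviewer(input_json)
  data = JSON.parse(input_json)
  submissions = data.keys.sort
  reviewers = data.values.flat_map { |scores| scores.keys }.uniq
  reviewers.map do |reviewer|
    submissions.map { |submission| data[submission][reviewer] }
  end
end
```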
&lt;br /&gt;
=== Expected Hamer Values ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
EXPECTED = {&lt;br /&gt;
    &amp;quot;Hamer&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 2.65,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 2.41,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 1.03,&lt;br /&gt;
        &amp;quot;incomplete_review&amp;quot;: 2.31,&lt;br /&gt;
        &amp;quot;max_incomplete&amp;quot;: 2.57,&lt;br /&gt;
        &amp;quot;min_incomplete&amp;quot;: 2.48,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;:1.58,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 2.17,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 1.73,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 1.23,&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
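To quantify the mismatch described under Objective 2, one can diff the web-service response against these expected values. A hedged sketch (the actual hash in the test is a hypothetical stand-in for a PeerLogic response; the tolerance is our own choice):&lt;br /&gt;

```ruby
require 'json'

# Hedged sketch: report reviewers whose returned Hamer value differs
# from the expected value by more than a tolerance.
def hamer_discrepancies(expected_json, actual, tolerance = 0.05)
  expected = JSON.parse(expected_json).fetch('Hamer')
  expected.each_with_object({}) do |(reviewer, value), diffs|
    next unless actual.key?(reviewer)
    delta = (actual[reviewer] - value).abs
    diffs[reviewer] = delta.round(2) if (delta - tolerance).positive?
  end
end
```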
&lt;br /&gt;
== Objective 2: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values ==&lt;br /&gt;
&lt;br /&gt;
As seen above, the values returned by the reputation server do not match the expected values, so we concluded that the PeerLogic web service is implemented incorrectly. In this phase, we implemented the algorithm in Ruby as a function in a controller file: /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
&lt;br /&gt;
=== Changes made in implementation ===&lt;br /&gt;
* Coded the algorithm in Ruby in a controller.&lt;br /&gt;
* Added handling for nil values to the algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Code Snippet ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'json'&lt;br /&gt;
require 'uri'&lt;br /&gt;
require 'net/http'&lt;br /&gt;
require 'openssl'&lt;br /&gt;
require 'base64'&lt;br /&gt;
&lt;br /&gt;
# Expertiza allows student work to be peer-reviewed, since peers can provide&lt;br /&gt;
# more feedback than the instructor can.&lt;br /&gt;
# However, if we want to assure that all students receive competent feedback,&lt;br /&gt;
# or even use peer-assigned grades,&lt;br /&gt;
# we need a way to judge which peer reviewers are most credible. The solution&lt;br /&gt;
# is the reputation system.&lt;br /&gt;
# Reputation systems have been deployed as web services, peer-review&lt;br /&gt;
# researchers will be able to use them to calculate scores on assignments,&lt;br /&gt;
# both past and present (past data can be used to tune the algorithms).&lt;br /&gt;
#&lt;br /&gt;
# This file is the controller to calculate the reputation scores.&lt;br /&gt;
# A 'reputation' measures how close a reviewer's scores are to other reviewers'&lt;br /&gt;
# scores.&lt;br /&gt;
# This controller implements the calculation of reputation scores.&lt;br /&gt;
class ReputationWebServiceController &amp;lt; ApplicationController&lt;br /&gt;
  include AuthorizationHelper&lt;br /&gt;
&lt;br /&gt;
  # Method: action_allowed&lt;br /&gt;
  # This method checks if the currently authenticated user has the authorization&lt;br /&gt;
  # to perform certain actions&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   true if the user has privileges to perform the action else returns false&lt;br /&gt;
  def action_allowed?&lt;br /&gt;
    current_user_has_ta_privileges?&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_max_question_score&lt;br /&gt;
  # This method receives a set of answers and gets the maximum question score&lt;br /&gt;
  # Params&lt;br /&gt;
  #   answers: set of answers&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   if no error returns max_question_score of first question else 1&lt;br /&gt;
  def get_max_question_score(answers)&lt;br /&gt;
    begin&lt;br /&gt;
      answers.first.question.questionnaire.max_question_score&lt;br /&gt;
    rescue StandardError&lt;br /&gt;
      1&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_valid_answers_for_response&lt;br /&gt;
  # This method receives response and filters the valid answers list of the&lt;br /&gt;
  # response ID&lt;br /&gt;
  # Params&lt;br /&gt;
  #   response&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   set of valid answers (returns nil if empty)&lt;br /&gt;
  def get_valid_answers_for_response(response)&lt;br /&gt;
    answers = Answer.where(response_id: response.id)&lt;br /&gt;
    valid_answer = answers.select { |answer| (answer.question.type == 'Criterion') &amp;amp;&amp;amp; !answer.answer.nil? }&lt;br /&gt;
    valid_answer.empty? ? nil : valid_answer&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: calculate_peer_review_grade&lt;br /&gt;
  # This method calculates a cumulative review grade with respect to the set of valid answers&lt;br /&gt;
  # Params&lt;br /&gt;
  #   valid_answer: valid answer to get weight of the answer's question&lt;br /&gt;
  #   max_question_score: used to calculate maximum score for peer review grade&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   peer_review_grade&lt;br /&gt;
  def calculate_peer_review_grade(valid_answer, max_question_score)&lt;br /&gt;
    weighted_score_sum = valid_answer.map { |answer| answer.answer * answer.question.weight }.inject(:+)&lt;br /&gt;
    question_weight_sum = valid_answer.sum { |answer| answer.question.weight }&lt;br /&gt;
    peer_review_grade = 100.0 * weighted_score_sum / (question_weight_sum * max_question_score)&lt;br /&gt;
    peer_review_grade.round(4)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_peer_reviews_for_responses&lt;br /&gt;
  # This method calculates the peer review grade for each valid response&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reviewer_id: used to create respective element in the peer_review_grades_list&lt;br /&gt;
  #   team_id: used to create respective element in the peer_review_grades_list&lt;br /&gt;
  #   valid_response: to get the valid answer for each valid response&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   peer_review_grades_list&lt;br /&gt;
  def get_peer_reviews_for_responses(reviewer_id, team_id, valid_response)&lt;br /&gt;
    peer_review_grades_list = []&lt;br /&gt;
    valid_response.each do |response|&lt;br /&gt;
      valid_answer = get_valid_answers_for_response(response)&lt;br /&gt;
      next if valid_answer.nil?&lt;br /&gt;
&lt;br /&gt;
      review_grade = calculate_peer_review_grade(valid_answer, get_max_question_score(valid_answer))&lt;br /&gt;
      peer_review_grades_list &amp;lt;&amp;lt; [reviewer_id, team_id, review_grade]&lt;br /&gt;
    end&lt;br /&gt;
    peer_review_grades_list&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_peer_reviews&lt;br /&gt;
  # This method retrieves all the reviews for the submissions&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: used to retrieve response map&lt;br /&gt;
  #   round_num: used to retrieve round_num for the valid response&lt;br /&gt;
  #   has_topic: to get the topic condition&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: which corresponds to the return of&lt;br /&gt;
  #     get_peer_reviews_for_responses method and appended to the raw_data_array&lt;br /&gt;
  def get_peer_reviews(assignment_id_list, round_num, has_topic)&lt;br /&gt;
    raw_data_array = []&lt;br /&gt;
    ReviewResponseMap.where('reviewed_object_id in (?) and calibrate_to = ?', assignment_id_list, false).each do |response_map|&lt;br /&gt;
      reviewer = response_map.reviewer.user&lt;br /&gt;
      team = AssignmentTeam.find(response_map.reviewee_id)&lt;br /&gt;
      topic_condition = ((has_topic &amp;amp;&amp;amp; (SignedUpTeam.where(team_id: team.id).first.is_waitlisted == false)) || !has_topic)&lt;br /&gt;
      last_valid_response = response_map.response.select { |r| r.round == round_num }.max&lt;br /&gt;
      valid_response = [last_valid_response] unless last_valid_response.nil?&lt;br /&gt;
      if (topic_condition == true) &amp;amp;&amp;amp; !valid_response.nil? &amp;amp;&amp;amp; !valid_response.empty?&lt;br /&gt;
        raw_data_array += get_peer_reviews_for_responses(reviewer.id, team.id, valid_response)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
    raw_data_array&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_ids_list&lt;br /&gt;
  # This method maps each object to the corresponding object's ID&lt;br /&gt;
  # Params&lt;br /&gt;
  #   tables: any table&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   id in the tables&lt;br /&gt;
  def get_ids_list(tables)&lt;br /&gt;
    tables.map(&amp;amp;:id)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_scores&lt;br /&gt;
  # This method gets the quiz score of each participant for respective reviewee&lt;br /&gt;
  # Params&lt;br /&gt;
  #   team_ids: list of team IDs&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: which is a list of participant, reviewee and the participant's quiz score&lt;br /&gt;
  def get_scores(team_ids)&lt;br /&gt;
    quiz_questionnaires = QuizQuestionnaire.where('instructor_id in (?)', team_ids)&lt;br /&gt;
    quiz_questionnaire_ids = get_ids_list(quiz_questionnaires)&lt;br /&gt;
    raw_data_array = []&lt;br /&gt;
    QuizResponseMap.where('reviewed_object_id in (?)', quiz_questionnaire_ids).each do |response_map|&lt;br /&gt;
      quiz_score = response_map.quiz_score&lt;br /&gt;
      participant = Participant.find(response_map.reviewer_id)&lt;br /&gt;
      raw_data_array &amp;lt;&amp;lt; [participant.user_id, response_map.reviewee_id, quiz_score]&lt;br /&gt;
    end&lt;br /&gt;
    raw_data_array&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_quiz_score&lt;br /&gt;
  # This method gets the quiz score of assignments&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment IDs&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: returned by get_scores method, which is a list of participant,&lt;br /&gt;
  #     reviewee and the participant's quiz score&lt;br /&gt;
  def get_quiz_score(assignment_id_list)&lt;br /&gt;
    teams = AssignmentTeam.where('parent_id in (?)', assignment_id_list)&lt;br /&gt;
    team_ids = get_ids_list(teams)&lt;br /&gt;
    get_scores(team_ids)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_body&lt;br /&gt;
  # This method generates json body for the peer reviews and quiz scores&lt;br /&gt;
  # Params&lt;br /&gt;
  #   results: list of grades with corresponding team/participant ID,&lt;br /&gt;
  #     reviewee ID and their score&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: returns the formatted body after sorting the hash&lt;br /&gt;
  def generate_json_body(results)&lt;br /&gt;
    request_body = {}&lt;br /&gt;
    results.each_with_index do |record, _index|&lt;br /&gt;
      request_body['submission' + record[1].to_s] = {} unless request_body.key?('submission' + record[1].to_s)&lt;br /&gt;
      request_body['submission' + record[1].to_s]['stu' + record[0].to_s] = record[2]&lt;br /&gt;
    end&lt;br /&gt;
    # sort the 2-dimension hash&lt;br /&gt;
    request_body.each { |k, v| request_body[k] = v.sort.to_h }&lt;br /&gt;
    request_body.sort.to_h&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_for_peer_reviews&lt;br /&gt;
  # This method retrieves all the peer reviews associated with&lt;br /&gt;
  # the assignment id list by calling the get_peer_reviews method.&lt;br /&gt;
  # It then formats the peer-review list in JSON.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment ids to get quiz scores for&lt;br /&gt;
  #   round_num: round number of the review&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: request body populated with the formatted peer review data.&lt;br /&gt;
  def generate_json_for_peer_reviews(assignment_id_list, round_num = 2)&lt;br /&gt;
    has_topic = !SignUpTopic.where(assignment_id: assignment_id_list[0]).empty?&lt;br /&gt;
&lt;br /&gt;
    peer_reviews_list = get_peer_reviews(assignment_id_list, round_num, has_topic)&lt;br /&gt;
    request_body = generate_json_body(peer_reviews_list)&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_for_quiz_scores&lt;br /&gt;
  # This method accepts a list of assignment ids as an argument.&lt;br /&gt;
  # It then calls the get_quiz_score method on the list to get&lt;br /&gt;
  # maps of teams and scores for the given assignments.&lt;br /&gt;
  # The map is then formatted into JSON.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment ids to get quiz scores for&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: request body populated with quiz scores&lt;br /&gt;
  def generate_json_for_quiz_scores(assignment_id_list)&lt;br /&gt;
    participant_reviewee_map = get_quiz_score(assignment_id_list)&lt;br /&gt;
    request_body = generate_json_body(participant_reviewee_map)&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: client&lt;br /&gt;
  # This method is called when the url reputation_web_service/client&lt;br /&gt;
  # is hit using GET method.&lt;br /&gt;
  # This renders the client.html.erb&lt;br /&gt;
  # It also populates the instance variables to be used in the views&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def client&lt;br /&gt;
    @max_assignment_id = Assignment.last.id&lt;br /&gt;
    @assignment = Assignment.find(flash[:assignment_id]) rescue nil&lt;br /&gt;
    @another_assignment = Assignment.find(flash[:another_assignment_id]) rescue nil&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: update_participants_reputation&lt;br /&gt;
  # This method accepts the response body in the JSON format.&lt;br /&gt;
  # It then parses the JSON and updates the reputation scores of the&lt;br /&gt;
  # participants in the list.&lt;br /&gt;
  # If the alg variable is not Hamer/Lauw, the update step is skipped.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reputation_response: The response from the reputation web service&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def update_participants_reputation(reputation_response)&lt;br /&gt;
    JSON.parse(reputation_response.body.to_s).each do |reputation_algorithm, user_reputation_list|&lt;br /&gt;
      next unless %w[Hamer Lauw].include?(reputation_algorithm)&lt;br /&gt;
&lt;br /&gt;
      user_reputation_list.each do |user_id, reputation|&lt;br /&gt;
        Participant.find_by(user_id: user_id).update(reputation_algorithm.to_sym =&amp;gt; reputation) unless /leniency/ =~ user_id.to_s&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: process_response_body&lt;br /&gt;
  # This method gets the control after receiving a response from the server.&lt;br /&gt;
  # It receives the response body as an argument&lt;br /&gt;
  # It updates the instance variables related to the response.&lt;br /&gt;
  # It then calls the update_participants_reputation to update the reputation&lt;br /&gt;
  # scores received in the response body.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reputation_response: The response from the reputation web service&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def process_response_body(reputation_response)&lt;br /&gt;
    flash[:response] = reputation_response&lt;br /&gt;
    flash[:response_body] = reputation_response.body&lt;br /&gt;
    update_participants_reputation(reputation_response)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_expert_grades&lt;br /&gt;
  # It prepends the request body with the expert grades pertaining&lt;br /&gt;
  # to the default wiki contribution case of 754.&lt;br /&gt;
  # It receives the request body as an argument and prepends it&lt;br /&gt;
  # Params&lt;br /&gt;
  #   body: The request body to add the expert grades to&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   body prepended with the expert grades&lt;br /&gt;
  def add_expert_grades(body)&lt;br /&gt;
    flash[:additional_info] = 'add expert grades'&lt;br /&gt;
    case params[:assignment_id]&lt;br /&gt;
    when '754' # expert grades of Wiki contribution (754)&lt;br /&gt;
      body.prepend('&amp;quot;expert_grades&amp;quot;: {&amp;quot;submission25030&amp;quot;:95,&amp;quot;submission25031&amp;quot;:92,&amp;quot;submission25033&amp;quot;:88,&amp;quot;submission25034&amp;quot;:98,&amp;quot;submission25035&amp;quot;:100,&amp;quot;submission25037&amp;quot;:95,&amp;quot;submission25038&amp;quot;:95,&amp;quot;submission25039&amp;quot;:93,&amp;quot;submission25040&amp;quot;:96,&amp;quot;submission25041&amp;quot;:90,&amp;quot;submission25042&amp;quot;:100,&amp;quot;submission25046&amp;quot;:95,&amp;quot;submission25049&amp;quot;:90,&amp;quot;submission25050&amp;quot;:88,&amp;quot;submission25053&amp;quot;:91,&amp;quot;submission25054&amp;quot;:96,&amp;quot;submission25055&amp;quot;:94,&amp;quot;submission25059&amp;quot;:96,&amp;quot;submission25071&amp;quot;:85,&amp;quot;submission25082&amp;quot;:100,&amp;quot;submission25086&amp;quot;:95,&amp;quot;submission25097&amp;quot;:90,&amp;quot;submission25098&amp;quot;:85,&amp;quot;submission25102&amp;quot;:97,&amp;quot;submission25103&amp;quot;:94,&amp;quot;submission25105&amp;quot;:98,&amp;quot;submission25114&amp;quot;:95,&amp;quot;submission25115&amp;quot;:94},')&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_quiz_scores&lt;br /&gt;
  # It gets the assignment id list and generates the json on quiz scores of&lt;br /&gt;
  # those assignments.&lt;br /&gt;
  # Finally, the processed quiz string is prepended to the request body,&lt;br /&gt;
  # received as an argument, and the body is returned to prepare_request_body.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   body: The request body to add the quiz scores to&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   body prepended with the quiz scores&lt;br /&gt;
  def add_quiz_scores(body)&lt;br /&gt;
    flash[:additional_info] = 'add quiz scores'&lt;br /&gt;
    assignment_id_list_quiz = get_assignment_id_list(params[:assignment_id].to_i, params[:another_assignment_id].to_i)&lt;br /&gt;
    quiz_str =  generate_json_for_quiz_scores(assignment_id_list_quiz).to_json&lt;br /&gt;
    quiz_str[0] = '' # remove first {&lt;br /&gt;
    quiz_str.prepend('&amp;quot;quiz_scores&amp;quot;:{') # add quiz_scores tag&lt;br /&gt;
    quiz_str += ','&lt;br /&gt;
    quiz_str = quiz_str.gsub('&amp;quot;N/A&amp;quot;', '20.0') # replace N/A values with 20&lt;br /&gt;
    body.prepend(quiz_str)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_hamer_reputation_values&lt;br /&gt;
  # This method sets the flash[:additional_info] message.&lt;br /&gt;
  # This method is called by the prepare_request_body method&lt;br /&gt;
  # when params receive instruction through the corresponding view's checkbox.&lt;br /&gt;
  # THIS METHOD IS NOT YET IMPLEMENTED&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_hamer_reputation_values&lt;br /&gt;
    flash[:additional_info] = 'add initial hamer reputation values'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_lauw_reputation_values&lt;br /&gt;
  # This method sets the flash[:additional_info] message.&lt;br /&gt;
  # This method is called by the prepare_request_body method&lt;br /&gt;
  # when params receive instruction through the corresponding view's checkbox.&lt;br /&gt;
  # THIS METHOD IS NOT YET IMPLEMENTED&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_lauw_reputation_values&lt;br /&gt;
    flash[:additional_info] = 'add initial lauw reputation values'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_assignment_id_list&lt;br /&gt;
  # This method on receipt of individual assignment IDs returns a list with all&lt;br /&gt;
  # the assignment IDs appended into a data structure&lt;br /&gt;
  # This function accepts 2 arguments, with the second argument being optional,&lt;br /&gt;
  # and returns the list assignment_id_list&lt;br /&gt;
  # If the second argument is 0, it is not appended to the list.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_one: first assignment id (required)&lt;br /&gt;
  #   assignment_id_two: second assignment id (optional)&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   assignment_id_list: list containing one or two assignment ids&lt;br /&gt;
  def get_assignment_id_list(assignment_id_one, assignment_id_two = 0)&lt;br /&gt;
    assignment_id_list = []&lt;br /&gt;
    assignment_id_list &amp;lt;&amp;lt; assignment_id_one&lt;br /&gt;
    assignment_id_list &amp;lt;&amp;lt; assignment_id_two unless assignment_id_two.zero?&lt;br /&gt;
    assignment_id_list&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_flash_messages&lt;br /&gt;
  # This method sets the flash messages to pass on to the next request i.e&lt;br /&gt;
  # the request redirected to the client&lt;br /&gt;
  # Params&lt;br /&gt;
  #   post_req: This contains the entire post_req that needs to be sent to the reputation&lt;br /&gt;
  #     webservice&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_flash_messages(post_req)&lt;br /&gt;
    flash[:assignment_id] = params[:assignment_id]&lt;br /&gt;
    flash[:round_num] = params[:round_num]&lt;br /&gt;
    flash[:algorithm] = params[:algorithm]&lt;br /&gt;
    flash[:another_assignment_id] = params[:another_assignment_id]&lt;br /&gt;
    flash[:request_body] = post_req.body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_additional_info_details&lt;br /&gt;
  # This method sets the additional info details based on the options&lt;br /&gt;
  # selected in the additional information section. We populate the request&lt;br /&gt;
  # based on the selections&lt;br /&gt;
  # Params&lt;br /&gt;
  #   post_req: This contains the entire post_req that needs to be sent to the reputation&lt;br /&gt;
  #     webservice&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_additional_info_details(post_req)&lt;br /&gt;
    if params[:checkbox][:expert_grade] == 'Add expert grades'&lt;br /&gt;
      add_expert_grades(post_req.body)&lt;br /&gt;
    elsif params[:checkbox][:hamer] == 'Add initial Hamer reputation values'&lt;br /&gt;
      add_hamer_reputation_values&lt;br /&gt;
    elsif params[:checkbox][:lauw] == 'Add initial Lauw reputation values'&lt;br /&gt;
      add_lauw_reputation_values&lt;br /&gt;
    elsif params[:checkbox][:quiz] == 'Add quiz scores'&lt;br /&gt;
      add_quiz_scores(post_req.body)&lt;br /&gt;
    else&lt;br /&gt;
      flash[:additional_info] = ''&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: prepare_request_body&lt;br /&gt;
  # This method is responsible for preparing the request body in a proper format&lt;br /&gt;
  # to send to the server. It populates the assignment scores and peer review&lt;br /&gt;
  # scores. It also populates the flash messages to send to the next request&lt;br /&gt;
  # It finally sends the prepared request body back to the send_post_request&lt;br /&gt;
  # method.&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def prepare_request_body&lt;br /&gt;
    reputation_web_service_path = URI.parse(WEBSERVICE_CONFIG['reputation_web_service_url']).path&lt;br /&gt;
    post_req = Net::HTTP::Post.new(reputation_web_service_path, { 'Content-Type' =&amp;gt; 'application/json', 'charset' =&amp;gt; 'utf-8' })&lt;br /&gt;
    curr_assignment_id = (params[:assignment_id].empty? ? '754' : params[:assignment_id])&lt;br /&gt;
    assignment_id_list_peers = get_assignment_id_list(curr_assignment_id, params[:another_assignment_id].to_i)&lt;br /&gt;
&lt;br /&gt;
    post_req.body = generate_json_for_peer_reviews(assignment_id_list_peers, params[:round_num].to_i).to_json&lt;br /&gt;
&lt;br /&gt;
    post_req.body[0] = '' # remove the first '{'&lt;br /&gt;
    add_additional_info_details post_req&lt;br /&gt;
    post_req.body.prepend('{')&lt;br /&gt;
    add_flash_messages post_req&lt;br /&gt;
    post_req&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: send_post_request&lt;br /&gt;
  # This method calls the prepare_request_body function to get a prepared&lt;br /&gt;
  # request body in proper format to send to the server.&lt;br /&gt;
  # It populates the assignment scores and peer review&lt;br /&gt;
  # scores. It also populates the flash messages to send to the next request&lt;br /&gt;
  # We redirect to the client url to display the results.&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def send_post_request&lt;br /&gt;
    post_req = prepare_request_body&lt;br /&gt;
    reputation_web_service_hostname = URI.parse(WEBSERVICE_CONFIG['reputation_web_service_url']).host&lt;br /&gt;
    reputation_response = Net::HTTP.new(reputation_web_service_hostname).start { |http| http.request(post_req) }&lt;br /&gt;
    if %w[400 500].include?(reputation_response.code)&lt;br /&gt;
      flash[:error] = 'Post Request Failed'&lt;br /&gt;
    else&lt;br /&gt;
      process_response_body(reputation_response)&lt;br /&gt;
    end&lt;br /&gt;
    redirect_to action: 'client'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
# Method: calculate_reputation_score&lt;br /&gt;
# This method calculates the reputation scores for each reviewer from the provided review data.&lt;br /&gt;
# It first calculates the average grade given by each reviewer and the delta R values.&lt;br /&gt;
# Next, it calculates the weight prime values based on the delta R values.&lt;br /&gt;
# Finally, it calculates the reputation weights for each reviewer using the weight prime values.&lt;br /&gt;
#&lt;br /&gt;
# Params&lt;br /&gt;
#   reviews: an array containing one array of scores per reviewer (nil entries mark incomplete reviews)&lt;br /&gt;
#&lt;br /&gt;
# Returns&lt;br /&gt;
#   An array of reputation scores, one per reviewer, indicating their reputation in the system.&lt;br /&gt;
def calculate_reputation_score(reviews)&lt;br /&gt;
&lt;br /&gt;
  # Initialize arrays to store intermediate values&lt;br /&gt;
  grades = []&lt;br /&gt;
  delta_r = []&lt;br /&gt;
  weight_prime = []&lt;br /&gt;
  weight = []&lt;br /&gt;
&lt;br /&gt;
  # Calculate Average Weighted Grades per Reviewer&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    assignment_grade_average = reviewer_marks_without_nil.sum.to_f / reviewer_marks_without_nil.length&lt;br /&gt;
    grades &amp;lt;&amp;lt; assignment_grade_average&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate delta R&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    reviewer_delta_r = 0&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    reviewer_marks_without_nil.each_with_index do |grade, student_index|&lt;br /&gt;
      reviewer_delta_r += (grade - grades[student_index]) ** 2&lt;br /&gt;
    end&lt;br /&gt;
    delta_r &amp;lt;&amp;lt; reviewer_delta_r / reviewer_marks_without_nil.length&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate weight prime&lt;br /&gt;
  average_delta_r = delta_r.sum / delta_r.length.to_f&lt;br /&gt;
&lt;br /&gt;
  delta_r.each do |reviewer_delta_r|&lt;br /&gt;
    weight_prime &amp;lt;&amp;lt; average_delta_r / reviewer_delta_r&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate reputation weight&lt;br /&gt;
  weight_prime.each do |reviewer_weight_prime|&lt;br /&gt;
    if reviewer_weight_prime &amp;lt;= 2&lt;br /&gt;
      weight &amp;lt;&amp;lt; reviewer_weight_prime.round(2)&lt;br /&gt;
    else&lt;br /&gt;
      weight &amp;lt;&amp;lt; (2 + Math.log(reviewer_weight_prime - 1)).round(2)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Return the reputation weights&lt;br /&gt;
  weight&lt;br /&gt;
end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
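One subtlety worth noting in the nil handling above: calling compact before each_with_index shifts an incomplete reviewer's remaining scores onto the wrong submission indices, so they are compared against the wrong averages. A hedged alternative (our own helper, not Expertiza code) keeps the original indices and skips nils inside the loop:&lt;br /&gt;

```ruby
# Hedged sketch: one reviewer's mean squared deviation, preserving
# submission alignment. consensus holds per-submission average grades;
# marks may contain nil for unreviewed submissions.
def reviewer_deviation(marks, consensus)
  pairs = marks.each_with_index.reject { |grade, _j| grade.nil? }
  squared = pairs.map { |grade, j| (grade - consensus[j])**2 }
  squared.sum / squared.length.to_f
end
```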
&lt;br /&gt;
== Objective 3: Validate the accuracy of the newly implemented Hamer algorithm ==&lt;br /&gt;
&lt;br /&gt;
We test the newly implemented Hamer algorithm with our scenarios and verify that its output matches the expected values.&lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe ReputationWebServiceController do&lt;br /&gt;
    it &amp;quot;calculates the correct Hamer reputation weights&amp;quot; do&lt;br /&gt;
      weights = ReputationWebServiceController.new.calculate_reputation_score(reviews)&lt;br /&gt;
      keys = [&amp;quot;maxtoall&amp;quot;, &amp;quot;mintoall&amp;quot;, &amp;quot;mediantoall&amp;quot;, &amp;quot;incomplete_review&amp;quot;, &amp;quot;sametoall&amp;quot;, &amp;quot;passing1&amp;quot;, &amp;quot;passing2&amp;quot;, &amp;quot;passing3&amp;quot;]&lt;br /&gt;
      rounded_weights = weights.map { |w| w.round(1) }&lt;br /&gt;
      result_hash = keys.zip(rounded_weights).to_h&lt;br /&gt;
      expect(result_hash).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
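The spec above references a reviews object that is not shown. A hypothetical fixture consistent with INPUTS_new (one row per reviewer, ordered to match the keys list; nil marks a skipped submission) might look like:&lt;br /&gt;

```ruby
# Hypothetical fixture for the spec above; values mirror INPUTS_new.
REVIEWS_FIXTURE = [
  [10, 10, 10, 10],  # maxtoall
  [1, 1, 1, 1],      # mintoall
  [5, 5, 5, 5],      # mediantoall
  [4, 2, nil, nil],  # incomplete_review
  [3, 3, 3, 3],      # sametoall
  [10, 3, 7, 6],     # passing1
  [10, 2, 4, 4],     # passing2
  [9, 4, 5, 5]       # passing3
]
```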
&lt;br /&gt;
=== Results===&lt;br /&gt;
&lt;br /&gt;
[[File:Results hamer new.jpeg]]&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
As we can see, the results are skewed toward lower values. This is because we included nil values and assumed them to be zero; that assumption pushes the scores downward. Treating missing values as the median (or a random value) instead would reduce the skew.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this project, we aimed to test the accuracy of the Hamer algorithm used for assessing the credibility of reviewers in a peer assessment system. We began by developing test scenarios to validate the algorithm and ensure the accuracy of its output values. These covered a range of reviewer behaviors, including cases where reviewers provided extreme scores.&lt;br /&gt;
&lt;br /&gt;
Upon testing the original reputation web server's Hamer values, we found discrepancies between the expected values and the values returned by the web service. This led us to conclude that the web service implementation was incorrect.&lt;br /&gt;
&lt;br /&gt;
As a result, we proceeded to reimplement the Hamer algorithm in Ruby, incorporating adjustments to handle nil values appropriately. Subsequently, we validated the accuracy of the newly implemented algorithm using the same testing scenarios. While the results initially showed a skew towards lower values due to our treatment of nil values, we acknowledge the need for further refinement to handle these cases more effectively.&lt;br /&gt;
&lt;br /&gt;
In conclusion, this project highlights the importance of rigorous testing and implementation adjustments in ensuring the reliability of algorithms used in peer assessment systems. Moving forward, we recommend further refinements and validations to enhance the accuracy and robustness of the Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
==GitHub Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [https://github.com/expertiza/expertiza/pull/2778 here]&lt;br /&gt;
&lt;br /&gt;
Link to Github Project page: [https://github.com/users/Prachit99/projects/1 here]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [https://drive.google.com/file/d/1gZ5iDgqMW3COOT9Uw-_yhJlwJLZDxz-s/view?usp=sharing here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154660</id>
		<title>CSC/ECE 517 Spring 2024 - E2412. Testing for hamer.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154660"/>
		<updated>2024-03-25T04:23:55Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Objective 3: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values. */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 Program 3: First OSS project E2412. Testing for hamer.rb&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
&lt;br /&gt;
The practice of using student feedback on assignments as a grading tool is gaining traction among university professors and courses. This approach not only saves instructors and teaching assistants considerable time but also fosters a deeper understanding of assignments among students as they evaluate their peers' work. However, there is a concern that some students may not take their reviewing responsibilities seriously, potentially skewing the grading process by assigning extreme scores such as 100 or 0 arbitrarily. To address this issue, the Hamer algorithm was developed to assess the credibility and accuracy of reviewers. It generates reputation weights for each reviewer, which instructors can use to gauge their reliability or incorporate into grading calculations. Our goal here is to test this Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values.&lt;br /&gt;
* Verify the correctness of the reputation web server's Hamer values by accessing the URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.&lt;br /&gt;
* Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values.&lt;br /&gt;
* Validate the accuracy of the newly implemented Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* reimplemented algorithm: /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
* test file: /spec/controllers/reputation_mock_web_server_hamer.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Hamer Algorithm ==&lt;br /&gt;
&lt;br /&gt;
The grading algorithm described in the paper is designed to provide a reward to reviewers who participate effectively by allocating a portion of the assignment mark to the review, with the review mark reflecting the quality of the grading. Here's an explanation of the algorithm:&lt;br /&gt;
&lt;br /&gt;
1. Review Allocation: Each reviewer is assigned a number of essays to grade. The paper suggests assigning at least five essays, with ten being ideal. Assuming each review takes 20 minutes, ten reviews can be completed in about three and a half hours.&lt;br /&gt;
&lt;br /&gt;
2. Grading Process:&lt;br /&gt;
* Once the reviewing is complete, grades are generated for each essay and weights are assigned to each reviewer.&lt;br /&gt;
* The essay grades are computed by averaging the individual grades from all the reviewers assigned to that essay.&lt;br /&gt;
* Initially, all reviewers are given equal weight in the averaging process.&lt;br /&gt;
* The algorithm assumes that some reviewers will perform better than others. It measures this by comparing the grades assigned by each reviewer to the averaged grades. The larger the difference between the assigned and averaged grades, the more out of step the reviewer is considered with the consensus view of the class.&lt;br /&gt;
* The algorithm adjusts the weighting of the reviewers based on this difference. Reviewers who are closer to the consensus view are given higher weights, while those who deviate significantly are given lower weights.&lt;br /&gt;
&lt;br /&gt;
3. Iterative Process:&lt;br /&gt;
* The calculation of grades and weights is an iterative process. Each time the grades are calculated, the weights need to be updated, and each change in the weights affects the grades.&lt;br /&gt;
* Convergence occurs quickly, typically requiring four to six iterations before a solution (a &amp;quot;fix-point&amp;quot;) is reached.&lt;br /&gt;
&lt;br /&gt;
4. Weight Adjustment:&lt;br /&gt;
* The weights assigned to reviewers are adjusted based on the difference between the assigned and averaged grades. Reviewers with larger discrepancies have their weights adjusted inversely proportional to this difference.&lt;br /&gt;
* To prevent excessively large weights, a logarithmic dampening function is applied: weights may rise to twice the class average, after which further increases are awarded sparingly.&lt;br /&gt;
&lt;br /&gt;
5. Properties:&lt;br /&gt;
* The algorithm aims to identify and diminish the impact of &amp;quot;rogue&amp;quot; reviewers who may inject random or arbitrary grades into the peer assessment process.&lt;br /&gt;
* By adjusting reviewer weights based on their grading accuracy, the algorithm aims to improve the reliability of the grading process in the presence of such rogue reviewers.&lt;br /&gt;
&lt;br /&gt;
Overall, the algorithm seeks to balance the contributions of different reviewers based on the accuracy of their grading, ultimately aiming to produce reliable grades for each essay in a peer assessment scenario.&lt;br /&gt;
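The iterative grade/weight scheme above can be sketched in a few lines of Ruby. This is a simplified illustration, not the exact formula from the Hamer paper: the raw weight here is simply the inverse of a reviewer's mean squared discrepancy, the dampening function is a stand-in, and all names are hypothetical.&lt;br /&gt;

```ruby
# Simplified sketch of the iterative grade/weight fix-point described above.
# `reviews` maps reviewer => { submission => grade }. The weight formula and
# dampening are illustrative stand-ins for the ones in the Hamer paper.
def hamer_fixed_point(reviews, iterations: 6)
  weights = reviews.keys.to_h { |r| [r, 1.0] }
  grades = {}
  iterations.times do
    # 1. Essay grade = weighted average of the grades it received.
    submissions = reviews.values.flat_map(&:keys).uniq
    grades = submissions.to_h do |s|
      pairs = reviews.select { |_, g| g.key?(s) }
      total = pairs.sum { |r, _| weights[r] }
      [s, pairs.sum { |r, g| weights[r] * g[s] } / total]
    end
    # 2. Raw weight is inversely proportional to the reviewer's mean
    #    squared discrepancy from the consensus grades.
    raw = reviews.to_h do |r, g|
      mse = g.sum { |s, score| (score - grades[s])**2 } / g.size
      [r, 1.0 / (mse + 0.1)]
    end
    # 3. Dampen: weights may rise to twice the class average; beyond that
    #    they grow only logarithmically.
    avg = raw.values.sum / raw.size
    weights = raw.to_h do |r, w|
      rel = w / avg
      [r, rel <= 2 ? rel : 2 + Math.log(rel - 1)]
    end
  end
  [grades, weights]
end
```

Convergence in practice matches the paper's observation: a handful of iterations suffices, and a reviewer who assigns arbitrary grades ends up with a markedly lower weight than reviewers near the consensus.&lt;br /&gt;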
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios ==&lt;br /&gt;
We modeled 10 reviewers, each reviewing up to 4 submissions, to cover the following test scenarios:&lt;br /&gt;
* 3 cases where reviewers give credible scores (passing1, passing2, passing3)&lt;br /&gt;
* a case where a reviewer gives the maximum score (10) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the minimum score (1) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the median score (5) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the same score to all submissions (should be flagged)&lt;br /&gt;
* cases where reviewers leave some reviews incomplete (incomplete_review, max_incomplete, min_incomplete)&lt;br /&gt;
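A trivial check, shown here only as an illustration (this helper is hypothetical, not part of Expertiza), captures why the flagged scenarios carry no useful signal: a reviewer who gives every submission the same score cannot distinguish good work from bad.&lt;br /&gt;

```ruby
# Hypothetical helper: a reviewer whose submitted scores are all identical
# (maxtoall, mintoall, mediantoall, sametoall) provides no ranking signal
# and should end up with a low reputation weight.
def uninformative?(scores)
  scores.compact.uniq.size <= 1 # compact drops incomplete (nil) reviews
end

uninformative?([10, 10, 10, 10]) # a maxtoall-style reviewer
uninformative?([10, 3, 7, 6])    # a credible (passing) reviewer
```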
&lt;br /&gt;
=== Object Creation ===&lt;br /&gt;
Below is the input object used by the tests to cover all of the above scenarios:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
INPUTS_new = {&lt;br /&gt;
  &amp;quot;submission1&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 9&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;submission2&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 4&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;submission3&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 7,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;submission4&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 6,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
  }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
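Since this input object is keyed by submission while reputation weights are computed per reviewer, the hash must be pivoted before use. Below is a minimal sketch of that pivot; the inline sample hash is abbreviated, not the full INPUTS_new.&lt;br /&gt;

```ruby
require 'json'

# Pivot a submission-keyed score hash (the shape of INPUTS_new above) into
# a reviewer-keyed map, which is what a per-reviewer weight calculation
# consumes. Reviewers absent from a submission (incomplete reviews) simply
# have no entry for it.
inputs = JSON.parse('{"submission1":{"maxtoall":10,"passing1":10},' \
                    '"submission2":{"maxtoall":10,"passing1":3}}')

by_reviewer = Hash.new { |h, k| h[k] = {} }
inputs.each do |submission, reviews|
  reviews.each { |reviewer, score| by_reviewer[reviewer][submission] = score }
end

by_reviewer["maxtoall"] # => {"submission1"=>10, "submission2"=>10}
```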
&lt;br /&gt;
=== Expected Hamer Values ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
EXPECTED = {&lt;br /&gt;
    &amp;quot;Hamer&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 2.65,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 2.41,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 1.03,&lt;br /&gt;
        &amp;quot;incomplete_review&amp;quot;: 2.31,&lt;br /&gt;
        &amp;quot;max_incomplete&amp;quot;: 2.57,&lt;br /&gt;
        &amp;quot;min_incomplete&amp;quot;: 2.48,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;: 1.58,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 2.17,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 1.73,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 1.23,&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Objective 3: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values. ==&lt;br /&gt;
&lt;br /&gt;
As seen above, the values returned by the reputation server do not match the expected values, so we concluded that the PeerLogic web service is implemented incorrectly. In this phase, we reimplemented the algorithm in Ruby as a function in the controller file /app/controllers/reputation_web_service_controller.rb.&lt;br /&gt;
&lt;br /&gt;
=== Changes made in implementation ===&lt;br /&gt;
* Coded the algorithm in Ruby in a controller.&lt;br /&gt;
* Added handling for nil (missing) review scores.&lt;br /&gt;
&lt;br /&gt;
=== Code Snippet ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'json'&lt;br /&gt;
require 'uri'&lt;br /&gt;
require 'net/http'&lt;br /&gt;
require 'openssl'&lt;br /&gt;
require 'base64'&lt;br /&gt;
&lt;br /&gt;
# Expertiza allows student work to be peer-reviewed, since peers can provide&lt;br /&gt;
# more feedback than the instructor can.&lt;br /&gt;
# However, if we want to assure that all students receive competent feedback,&lt;br /&gt;
# or even use peer-assigned grades,&lt;br /&gt;
# we need a way to judge which peer reviewers are most credible. The solution&lt;br /&gt;
# is the reputation system.&lt;br /&gt;
# Because reputation systems have been deployed as web services, peer-review&lt;br /&gt;
# researchers are able to use them to calculate scores on assignments,&lt;br /&gt;
# both past and present (past data can be used to tune the algorithms).&lt;br /&gt;
#&lt;br /&gt;
# This file is the controller to calculate the reputation scores.&lt;br /&gt;
# A 'reputation' measures how close a reviewer's scores are to other reviewers'&lt;br /&gt;
# scores.&lt;br /&gt;
# This controller implements the calculation of reputation scores.&lt;br /&gt;
class ReputationWebServiceController &amp;lt; ApplicationController&lt;br /&gt;
  include AuthorizationHelper&lt;br /&gt;
&lt;br /&gt;
  # Method: action_allowed&lt;br /&gt;
  # This method checks if the currently authenticated user has the authorization&lt;br /&gt;
  # to perform certain actions&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   true if the user has privileges to perform the action else returns false&lt;br /&gt;
  def action_allowed?&lt;br /&gt;
    current_user_has_ta_privileges?&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_max_question_score&lt;br /&gt;
  # This method receives a set of answers and gets the maximum question score&lt;br /&gt;
  # Params&lt;br /&gt;
  #   answers: set of answers&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   if no error returns max_question_score of first question else 1&lt;br /&gt;
  def get_max_question_score(answers)&lt;br /&gt;
    begin&lt;br /&gt;
      answers.first.question.questionnaire.max_question_score&lt;br /&gt;
    rescue StandardError&lt;br /&gt;
      1&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_valid_answers_for_response&lt;br /&gt;
  # This method receives response and filters the valid answers list of the&lt;br /&gt;
  # response ID&lt;br /&gt;
  # Params&lt;br /&gt;
  #   response&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   set of valid answers (returns nil if empty)&lt;br /&gt;
  def get_valid_answers_for_response(response)&lt;br /&gt;
    answers = Answer.where(response_id: response.id)&lt;br /&gt;
    valid_answer = answers.select { |answer| (answer.question.type == 'Criterion') &amp;amp;&amp;amp; !answer.answer.nil? }&lt;br /&gt;
    valid_answer.empty? ? nil : valid_answer&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: calculate_peer_review_grade&lt;br /&gt;
  # This method calculates a cumulative review grade with respect to the set of valid answers&lt;br /&gt;
  # Params&lt;br /&gt;
  #   valid_answer: valid answer to get weight of the answer's question&lt;br /&gt;
  #   max_question_score: used to calculate maximum score for peer review grade&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   peer_review_grade&lt;br /&gt;
  def calculate_peer_review_grade(valid_answer, max_question_score)&lt;br /&gt;
    weighted_score_sum = valid_answer.map { |answer| answer.answer * answer.question.weight }.inject(:+)&lt;br /&gt;
    question_weight_sum = valid_answer.sum { |answer| answer.question.weight }&lt;br /&gt;
    peer_review_grade = 100.0 * weighted_score_sum / (question_weight_sum * max_question_score)&lt;br /&gt;
    peer_review_grade.round(4)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_peer_reviews_for_responses&lt;br /&gt;
  # This method calculates the peer review grade for each valid response&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reviewer_id: used to create respective element in the peer_review_grades_list&lt;br /&gt;
  #   team_id: used to create respective element in the peer_review_grades_list&lt;br /&gt;
  #   valid_response: to get the valid answer for each valid response&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   peer_review_grades_list&lt;br /&gt;
  def get_peer_reviews_for_responses(reviewer_id, team_id, valid_response)&lt;br /&gt;
    peer_review_grades_list = []&lt;br /&gt;
    valid_response.each do |response|&lt;br /&gt;
      valid_answer = get_valid_answers_for_response(response)&lt;br /&gt;
      next if valid_answer.nil?&lt;br /&gt;
&lt;br /&gt;
      review_grade = calculate_peer_review_grade(valid_answer, get_max_question_score(valid_answer))&lt;br /&gt;
      peer_review_grades_list &amp;lt;&amp;lt; [reviewer_id, team_id, review_grade]&lt;br /&gt;
    end&lt;br /&gt;
    peer_review_grades_list&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_peer_reviews&lt;br /&gt;
  # This method retrieves all the reviews for the submissions&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: used to retrieve response map&lt;br /&gt;
  #   round_num: used to retrieve round_num for the valid response&lt;br /&gt;
  #   has_topic: to get the topic condition&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: which corresponds to the return of&lt;br /&gt;
  #     get_peer_reviews_for_responses method and appended to the raw_data_array&lt;br /&gt;
  def get_peer_reviews(assignment_id_list, round_num, has_topic)&lt;br /&gt;
    raw_data_array = []&lt;br /&gt;
    ReviewResponseMap.where('reviewed_object_id in (?) and calibrate_to = ?', assignment_id_list, false).each do |response_map|&lt;br /&gt;
      reviewer = response_map.reviewer.user&lt;br /&gt;
      team = AssignmentTeam.find(response_map.reviewee_id)&lt;br /&gt;
      topic_condition = ((has_topic &amp;amp;&amp;amp; (SignedUpTeam.where(team_id: team.id).first.is_waitlisted == false)) || !has_topic)&lt;br /&gt;
      last_valid_response = response_map.response.select { |r| r.round == round_num }.max&lt;br /&gt;
      valid_response = [last_valid_response] unless last_valid_response.nil?&lt;br /&gt;
      if (topic_condition == true) &amp;amp;&amp;amp; !valid_response.nil? &amp;amp;&amp;amp; !valid_response.empty?&lt;br /&gt;
        raw_data_array += get_peer_reviews_for_responses(reviewer.id, team.id, valid_response)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
    raw_data_array&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_ids_list&lt;br /&gt;
  # This method maps each object to the corresponding object's ID&lt;br /&gt;
  # Params&lt;br /&gt;
  #   tables: any table&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   id in the tables&lt;br /&gt;
  def get_ids_list(tables)&lt;br /&gt;
    tables.map(&amp;amp;:id)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_scores&lt;br /&gt;
  # This method gets the quiz score of each participant for respective reviewee&lt;br /&gt;
  # Params&lt;br /&gt;
  #   team_ids: list of team IDs&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: which is a list of participant, reviewee and the participant's quiz score&lt;br /&gt;
  def get_scores(team_ids)&lt;br /&gt;
    quiz_questionnaires = QuizQuestionnaire.where('instructor_id in (?)', team_ids)&lt;br /&gt;
    quiz_questionnaire_ids = get_ids_list(quiz_questionnaires)&lt;br /&gt;
    raw_data_array = []&lt;br /&gt;
    QuizResponseMap.where('reviewed_object_id in (?)', quiz_questionnaire_ids).each do |response_map|&lt;br /&gt;
      quiz_score = response_map.quiz_score&lt;br /&gt;
      participant = Participant.find(response_map.reviewer_id)&lt;br /&gt;
      raw_data_array &amp;lt;&amp;lt; [participant.user_id, response_map.reviewee_id, quiz_score]&lt;br /&gt;
    end&lt;br /&gt;
    raw_data_array&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_quiz_score&lt;br /&gt;
  # This method gets the quiz score of assignments&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment IDs&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: returned by get_scores method, which is a list of participant,&lt;br /&gt;
  #     reviewee and the participant's quiz score&lt;br /&gt;
  def get_quiz_score(assignment_id_list)&lt;br /&gt;
    teams = AssignmentTeam.where('parent_id in (?)', assignment_id_list)&lt;br /&gt;
    team_ids = get_ids_list(teams)&lt;br /&gt;
    get_scores(team_ids)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_body&lt;br /&gt;
  # This method generates json body for the peer reviews and quiz scores&lt;br /&gt;
  # Params&lt;br /&gt;
  #   results: list of grades with corresponding team/participant ID,&lt;br /&gt;
  #     reviewee ID and their score&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: returns the formatted body after sorting the hash&lt;br /&gt;
  def generate_json_body(results)&lt;br /&gt;
    request_body = {}&lt;br /&gt;
    results.each do |record|&lt;br /&gt;
      request_body['submission' + record[1].to_s] = {} unless request_body.key?('submission' + record[1].to_s)&lt;br /&gt;
      request_body['submission' + record[1].to_s]['stu' + record[0].to_s] = record[2]&lt;br /&gt;
    end&lt;br /&gt;
    # sort the 2-dimension hash: the inner hashes, then the outer keys&lt;br /&gt;
    request_body.each { |k, v| request_body[k] = v.sort.to_h }&lt;br /&gt;
    request_body = request_body.sort.to_h&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_for_peer_reviews&lt;br /&gt;
  # This method retrieves all the peer reviews associated with&lt;br /&gt;
  # the assignment id list by calling the get_peer_reviews method.&lt;br /&gt;
  # It then formats the peer-review list in JSON.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment ids to get quiz scores for&lt;br /&gt;
  #   round_num: round number of the review&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: request body populated with the formatted peer review data.&lt;br /&gt;
  def generate_json_for_peer_reviews(assignment_id_list, round_num = 2)&lt;br /&gt;
    has_topic = !SignUpTopic.where(assignment_id: assignment_id_list[0]).empty?&lt;br /&gt;
&lt;br /&gt;
    peer_reviews_list = get_peer_reviews(assignment_id_list, round_num, has_topic)&lt;br /&gt;
    request_body = generate_json_body(peer_reviews_list)&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_for_quiz_scores&lt;br /&gt;
  # This method accepts a list of assignment ids as an argument.&lt;br /&gt;
  # It then calls the get_quiz_score method on the list to get&lt;br /&gt;
  # maps of teams and scores for the given assignments.&lt;br /&gt;
  # The map is then formatted into JSON.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment ids to get quiz scores for&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: request body populated with quiz scores&lt;br /&gt;
  def generate_json_for_quiz_scores(assignment_id_list)&lt;br /&gt;
    participant_reviewee_map = get_quiz_score(assignment_id_list)&lt;br /&gt;
    request_body = generate_json_body(participant_reviewee_map)&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: client&lt;br /&gt;
  # This method is called when the url reputation_web_service/client&lt;br /&gt;
  # is hit using GET method.&lt;br /&gt;
  # This renders the client.html.erb&lt;br /&gt;
  # It also populates the instance variables to be used in the views&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def client&lt;br /&gt;
    @max_assignment_id = Assignment.last.id&lt;br /&gt;
    @assignment = Assignment.find(flash[:assignment_id]) rescue nil&lt;br /&gt;
    @another_assignment = Assignment.find(flash[:another_assignment_id]) rescue nil&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: update_participants_reputation&lt;br /&gt;
  # This method accepts the response body in the JSON format.&lt;br /&gt;
  # It then parses the JSON and updates the reputation scores of the&lt;br /&gt;
  # participants in the list.&lt;br /&gt;
  # If the algorithm is not Hamer or Lauw, the update step is skipped.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reputation_response: The response from the reputation web service&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def update_participants_reputation(reputation_response)&lt;br /&gt;
    JSON.parse(reputation_response.body.to_s).each do |reputation_algorithm, user_reputation_list|&lt;br /&gt;
      next unless %w[Hamer Lauw].include?(reputation_algorithm)&lt;br /&gt;
&lt;br /&gt;
      user_reputation_list.each do |user_id, reputation|&lt;br /&gt;
        Participant.find_by(user_id: user_id).update(reputation_algorithm.to_sym =&amp;gt; reputation) unless /leniency/ =~ user_id.to_s&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: process_response_body&lt;br /&gt;
  # This method gets the control after receiving a response from the server.&lt;br /&gt;
  # It receives the response body as an argument&lt;br /&gt;
  # It updates the instance variables related to the response.&lt;br /&gt;
  # It then calls the update_participants_reputation to update the reputation&lt;br /&gt;
  # scores received in the response body.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reputation_response: The response from the reputation web service&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def process_response_body(reputation_response)&lt;br /&gt;
    flash[:response] = reputation_response&lt;br /&gt;
    flash[:response_body] = reputation_response.body&lt;br /&gt;
    update_participants_reputation(reputation_response)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_expert_grades&lt;br /&gt;
  # It prepends the request body with the expert grades pertaining&lt;br /&gt;
  # to the default wiki contribution case of 754.&lt;br /&gt;
  # It receives the request body as an argument and prepends it&lt;br /&gt;
  # Params&lt;br /&gt;
  #   body: The request body to add the expert grades to&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   body prepended with the expert grades&lt;br /&gt;
  def add_expert_grades(body)&lt;br /&gt;
    flash[:additional_info] = 'add expert grades'&lt;br /&gt;
    case params[:assignment_id]&lt;br /&gt;
    when '754' # expert grades of Wiki contribution (754)&lt;br /&gt;
      body.prepend('&amp;quot;expert_grades&amp;quot;: {&amp;quot;submission25030&amp;quot;:95,&amp;quot;submission25031&amp;quot;:92,&amp;quot;submission25033&amp;quot;:88,&amp;quot;submission25034&amp;quot;:98,&amp;quot;submission25035&amp;quot;:100,&amp;quot;submission25037&amp;quot;:95,&amp;quot;submission25038&amp;quot;:95,&amp;quot;submission25039&amp;quot;:93,&amp;quot;submission25040&amp;quot;:96,&amp;quot;submission25041&amp;quot;:90,&amp;quot;submission25042&amp;quot;:100,&amp;quot;submission25046&amp;quot;:95,&amp;quot;submission25049&amp;quot;:90,&amp;quot;submission25050&amp;quot;:88,&amp;quot;submission25053&amp;quot;:91,&amp;quot;submission25054&amp;quot;:96,&amp;quot;submission25055&amp;quot;:94,&amp;quot;submission25059&amp;quot;:96,&amp;quot;submission25071&amp;quot;:85,&amp;quot;submission25082&amp;quot;:100,&amp;quot;submission25086&amp;quot;:95,&amp;quot;submission25097&amp;quot;:90,&amp;quot;submission25098&amp;quot;:85,&amp;quot;submission25102&amp;quot;:97,&amp;quot;submission25103&amp;quot;:94,&amp;quot;submission25105&amp;quot;:98,&amp;quot;submission25114&amp;quot;:95,&amp;quot;submission25115&amp;quot;:94},')&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_quiz_scores&lt;br /&gt;
  # It gets the assignment id list and generates the json on quiz scores of&lt;br /&gt;
  # those assignments.&lt;br /&gt;
  # Finally, the processed quiz string is prepended to the request body&lt;br /&gt;
  # (received as an argument), and the body is returned to prepare_request_body.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   body: The request body to add the quiz scores to&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   body prepended with the quiz scores&lt;br /&gt;
  def add_quiz_scores(body)&lt;br /&gt;
    flash[:additional_info] = 'add quiz scores'&lt;br /&gt;
    assignment_id_list_quiz = get_assignment_id_list(params[:assignment_id].to_i, params[:another_assignment_id].to_i)&lt;br /&gt;
    quiz_str =  generate_json_for_quiz_scores(assignment_id_list_quiz).to_json&lt;br /&gt;
    quiz_str[0] = '' # remove first {&lt;br /&gt;
    quiz_str.prepend('&amp;quot;quiz_scores&amp;quot;:{') # add quiz_scores tag&lt;br /&gt;
    quiz_str += ','&lt;br /&gt;
    quiz_str = quiz_str.gsub('&amp;quot;N/A&amp;quot;', '20.0') # replace N/A values with 20&lt;br /&gt;
    body.prepend(quiz_str)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_hamer_reputation_values&lt;br /&gt;
  # This method sets the flash additional_info message.&lt;br /&gt;
  # This method is called by the prepare_request_body method&lt;br /&gt;
  # when params receive instruction through the corresponding view's checkbox.&lt;br /&gt;
  # THIS METHOD IS NOT IMPLEMENTED&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_hamer_reputation_values&lt;br /&gt;
    flash[:additional_info] = 'add initial hamer reputation values'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_lauw_reputation_values&lt;br /&gt;
  # This method sets the flash additional_info message.&lt;br /&gt;
  # This method is called by the prepare_request_body method&lt;br /&gt;
  # when params receive instruction through the corresponding view's checkbox.&lt;br /&gt;
  # THIS METHOD IS NOT IMPLEMENTED&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_lauw_reputation_values&lt;br /&gt;
    flash[:additional_info] = 'add initial lauw reputation values'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_assignment_id_list&lt;br /&gt;
  # This method on receipt of individual assignment IDs returns a list with all&lt;br /&gt;
  # the assignment IDs appended into a data structure&lt;br /&gt;
  # This function accepts 2 arguments, with the second argument being optional,&lt;br /&gt;
  # and returns the list assignment_id_list&lt;br /&gt;
  # If the second argument is 0, it is not appended to the list.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_one: first assignment id (required)&lt;br /&gt;
  #   assignment_id_two: second assignment id (optional)&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   assignment_id_list: list containing two assignment ids&lt;br /&gt;
  def get_assignment_id_list(assignment_id_one, assignment_id_two = 0)&lt;br /&gt;
    assignment_id_list = []&lt;br /&gt;
    assignment_id_list &amp;lt;&amp;lt; assignment_id_one&lt;br /&gt;
    assignment_id_list &amp;lt;&amp;lt; assignment_id_two unless assignment_id_two.zero?&lt;br /&gt;
    assignment_id_list&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_flash_messages&lt;br /&gt;
  # This method sets the flash messages to pass on to the next request, i.e.,&lt;br /&gt;
  # the request redirected to the client&lt;br /&gt;
  # Params&lt;br /&gt;
  #   post_req: This contains the entire post_req that needs to be sent to the reputation&lt;br /&gt;
  #     webservice&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_flash_messages(post_req)&lt;br /&gt;
    flash[:assignment_id] = params[:assignment_id]&lt;br /&gt;
    flash[:round_num] = params[:round_num]&lt;br /&gt;
    flash[:algorithm] = params[:algorithm]&lt;br /&gt;
    flash[:another_assignment_id] = params[:another_assignment_id]&lt;br /&gt;
    flash[:request_body] = post_req.body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_additional_info_details&lt;br /&gt;
  # This method sets the additional info details based on the options&lt;br /&gt;
  # selected in the additional information section. We populate the request&lt;br /&gt;
  # based on the selections&lt;br /&gt;
  # Params&lt;br /&gt;
  #   post_req: This contains the entire post_req that needs to be sent to the reputation&lt;br /&gt;
  #     webservice&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_additional_info_details(post_req)&lt;br /&gt;
    if params[:checkbox][:expert_grade] == 'Add expert grades'&lt;br /&gt;
      add_expert_grades(post_req.body)&lt;br /&gt;
    elsif params[:checkbox][:hamer] == 'Add initial Hamer reputation values'&lt;br /&gt;
      add_hamer_reputation_values&lt;br /&gt;
    elsif params[:checkbox][:lauw] == 'Add initial Lauw reputation values'&lt;br /&gt;
      add_lauw_reputation_values&lt;br /&gt;
    elsif params[:checkbox][:quiz] == 'Add quiz scores'&lt;br /&gt;
      add_quiz_scores(post_req.body)&lt;br /&gt;
    else&lt;br /&gt;
      flash[:additional_info] = ''&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: prepare_request_body&lt;br /&gt;
  # This method is responsible for preparing the request body in a proper format&lt;br /&gt;
  # to send to the server. It populates the assignment scores and peer review&lt;br /&gt;
  # scores. It also populates the flash messages to send to the next request&lt;br /&gt;
  # It finally sends the prepared request body back to the send_post_request&lt;br /&gt;
  # method.&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def prepare_request_body&lt;br /&gt;
    reputation_web_service_path = URI.parse(WEBSERVICE_CONFIG['reputation_web_service_url']).path&lt;br /&gt;
    post_req = Net::HTTP::Post.new(reputation_web_service_path, { 'Content-Type' =&amp;gt; 'application/json', 'charset' =&amp;gt; 'utf-8' })&lt;br /&gt;
    curr_assignment_id = (params[:assignment_id].empty? ? '754' : params[:assignment_id])&lt;br /&gt;
    assignment_id_list_peers = get_assignment_id_list(curr_assignment_id, params[:another_assignment_id].to_i)&lt;br /&gt;
&lt;br /&gt;
    post_req.body = generate_json_for_peer_reviews(assignment_id_list_peers, params[:round_num].to_i).to_json&lt;br /&gt;
&lt;br /&gt;
    post_req.body[0] = '' # remove the first '{'&lt;br /&gt;
    add_additional_info_details post_req&lt;br /&gt;
    post_req.body.prepend('{')&lt;br /&gt;
    add_flash_messages post_req&lt;br /&gt;
    post_req&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: send_post_request&lt;br /&gt;
  # This method calls the prepare_request_body function to get a prepared&lt;br /&gt;
  # request body in proper format to send to the server.&lt;br /&gt;
  # It populates the assignment scores and peer review&lt;br /&gt;
  # scores. It also populates the flash messages to send to the next request&lt;br /&gt;
  # We redirect to the client url to display the results.&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def send_post_request&lt;br /&gt;
    post_req = prepare_request_body&lt;br /&gt;
    reputation_web_service_hostname = URI.parse(WEBSERVICE_CONFIG['reputation_web_service_url']).host&lt;br /&gt;
    reputation_response = Net::HTTP.new(reputation_web_service_hostname).start { |http| http.request(post_req) }&lt;br /&gt;
    if %w[400 500].include?(reputation_response.code)&lt;br /&gt;
      flash[:error] = 'Post Request Failed'&lt;br /&gt;
    else&lt;br /&gt;
      process_response_body(reputation_response)&lt;br /&gt;
    end&lt;br /&gt;
    redirect_to action: 'client'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
# Method: calculate_reputation_score&lt;br /&gt;
# This method calculates the reputation scores for each reviewer based on the provided review data.&lt;br /&gt;
# It first calculates the average weighted grades per reviewer and the delta R values.&lt;br /&gt;
# Next, it calculates the weight prime values based on the delta R values.&lt;br /&gt;
# Finally, it calculates the reputation weights for each reviewer using the weight prime values.&lt;br /&gt;
#&lt;br /&gt;
# Params&lt;br /&gt;
#   - reviews: an array of per-reviewer mark arrays (nil entries denote incomplete reviews)&lt;br /&gt;
#&lt;br /&gt;
# Returns&lt;br /&gt;
#   An array of reputation scores, one score per reviewer, indicating their reputation in the system.&lt;br /&gt;
def calculate_reputation_score(reviews)&lt;br /&gt;
&lt;br /&gt;
  # Initialize arrays to store intermediate values&lt;br /&gt;
  grades = []&lt;br /&gt;
  delta_r = []&lt;br /&gt;
  weight_prime = []&lt;br /&gt;
  weight = []&lt;br /&gt;
&lt;br /&gt;
  # Calculate Average Weighted Grades per Reviewer&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    assignment_grade_average = reviewer_marks_without_nil.sum.to_f / reviewer_marks_without_nil.length&lt;br /&gt;
    grades &amp;lt;&amp;lt; assignment_grade_average&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate delta R&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    reviewer_delta_r = 0&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    reviewer_marks_without_nil.each_with_index do |grade, student_index|&lt;br /&gt;
      reviewer_delta_r += (grade - grades[student_index]) ** 2&lt;br /&gt;
    end&lt;br /&gt;
    delta_r &amp;lt;&amp;lt; reviewer_delta_r / reviewer_marks_without_nil.length&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate weight prime&lt;br /&gt;
  average_delta_r = delta_r.sum / delta_r.length.to_f&lt;br /&gt;
&lt;br /&gt;
  delta_r.each do |reviewer_delta_r|&lt;br /&gt;
    weight_prime &amp;lt;&amp;lt; average_delta_r / reviewer_delta_r&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate reputation weight&lt;br /&gt;
  weight_prime.each do |reviewer_weight_prime|&lt;br /&gt;
    if reviewer_weight_prime &amp;lt;= 2&lt;br /&gt;
      weight &amp;lt;&amp;lt; reviewer_weight_prime.round(2)&lt;br /&gt;
    else&lt;br /&gt;
      weight &amp;lt;&amp;lt; (2 + Math.log(reviewer_weight_prime - 1)).round(2)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Return the reputation weights&lt;br /&gt;
  weight&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
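As a quick sanity check, the core of the method above can be exercised outside the controller. The following standalone sketch restates the same four steps on a small, made-up reviews matrix (rows are reviewers; the nil entry exercises the incomplete-review handling). The input values are illustrative only, not project data.&lt;br /&gt;

```ruby
# Standalone restatement of the calculate_reputation_score logic,
# run on a small invented reviews matrix (rows = reviewers).
reviews = [
  [3, 7],
  [6, 4],
  [8, nil] # an incomplete review; nil marks are skipped via compact
]

# Step 1: average grade per reviewer (nil marks skipped).
grades = reviews.map do |marks|
  marks = marks.compact
  marks.sum.to_f / marks.length
end

# Step 2: delta R, the mean squared deviation from the averaged grades.
delta_r = reviews.map do |marks|
  marks = marks.compact
  marks.each_with_index.sum { |grade, i| (grade - grades[i])**2 } / marks.length
end

# Step 3: weight prime, inversely proportional to delta R.
average_delta_r = delta_r.sum / delta_r.length.to_f
weight_prime = delta_r.map { |d| average_delta_r / d }

# Step 4: reputation weight, with logarithmic dampening above 2.
weights = weight_prime.map do |w|
  w <= 2 ? w.round(2) : (2 + Math.log(w - 1)).round(2)
end

puts weights.inspect
```

On this sample, the reviewer closest to the averages receives the largest weight (dampened logarithmically once weight prime exceeds 2), while the incomplete reviewer's single out-of-step mark yields a low weight.&lt;br /&gt;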
&lt;br /&gt;
== Objective 4: Validate the accuracy of the newly implemented Hamer algorithm ==&lt;br /&gt;
&lt;br /&gt;
We test the newly implemented Hamer algorithm function against our scenarios and verify that the computed weights match the expected values. &lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe ReputationWebServiceController do&lt;br /&gt;
    it &amp;quot;should calculate correct Hamer calculation&amp;quot; do&lt;br /&gt;
      weights = ReputationWebServiceController.new.calculate_reputation_score(reviews)&lt;br /&gt;
      keys = [&amp;quot;maxtoall&amp;quot;, &amp;quot;mintoall&amp;quot;, &amp;quot;mediantoall&amp;quot;, &amp;quot;incomplete_review&amp;quot;, &amp;quot;sametoall&amp;quot;, &amp;quot;passing1&amp;quot;, &amp;quot;passing2&amp;quot;, &amp;quot;passing3&amp;quot;]&lt;br /&gt;
      rounded_weights = weights.map { |w| w.round(1) }&lt;br /&gt;
      result_hash = keys.zip(rounded_weights).to_h&lt;br /&gt;
      expect(result_hash).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results===&lt;br /&gt;
&lt;br /&gt;
[[File:Results hamer new.jpeg]]&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
As we can see, the results are skewed toward lower values. This is because we included nil values and assumed them to be zero, and that assumption pulls the scores down. Imputing the median (or a random value) for missing marks instead would reduce the skew.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this project, we aimed to test the accuracy of the Hamer algorithm used for assessing the credibility of reviewers in a peer assessment system. We began by developing code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values. These scenarios covered various review scenarios, including cases where reviewers provided extreme scores.&lt;br /&gt;
&lt;br /&gt;
Upon testing the original reputation web server's Hamer values, we found discrepancies between the expected values and the values returned by the web service. This led us to conclude that the web service implementation was incorrect.&lt;br /&gt;
&lt;br /&gt;
As a result, we proceeded to reimplement the Hamer algorithm in Ruby, incorporating adjustments to handle nil values appropriately. Subsequently, we validated the accuracy of the newly implemented algorithm using the same testing scenarios. While the results initially showed a skew towards lower values due to our treatment of nil values, we acknowledge the need for further refinement to handle these cases more effectively.&lt;br /&gt;
&lt;br /&gt;
In conclusion, this project highlights the importance of rigorous testing and implementation adjustments in ensuring the reliability of algorithms used in peer assessment systems. Moving forward, we recommend further refinements and validations to enhance the accuracy and robustness of the Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
==GitHub Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [https://github.com/expertiza/expertiza/pull/2778 here]&lt;br /&gt;
&lt;br /&gt;
Link to Github Project page: [https://github.com/users/Prachit99/projects/1 here]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [https://drive.google.com/file/d/1gZ5iDgqMW3COOT9Uw-_yhJlwJLZDxz-s/view?usp=sharing here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154659</id>
		<title>CSC/ECE 517 Spring 2024 - E2412. Testing for hamer.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154659"/>
		<updated>2024-03-25T04:23:42Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Objective 2: Verify the correctness of the reputation web server's Hamer values */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 Program 3: First OSS project E2412. Testing for hamer.rb&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
&lt;br /&gt;
The practice of using student feedback on assignments as a grading tool is gaining traction among university professors and courses. This approach not only saves instructors and teaching assistants considerable time but also fosters a deeper understanding of assignments among students as they evaluate their peers' work. However, there is a concern that some students may not take their reviewing responsibilities seriously, potentially skewing the grading process by assigning extreme scores such as 100 or 0 arbitrarily. To address this issue, the Hamer algorithm was developed to assess the credibility and accuracy of reviewers. It generates reputation weights for each reviewer, which instructors can use to gauge their reliability or incorporate into grading calculations. Our goal here is to test this Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values.&lt;br /&gt;
* Verify the correctness of the reputation web server's Hamer values by accessing the URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.&lt;br /&gt;
* Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values.&lt;br /&gt;
* Validate the accuracy of the newly implemented Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* reimplemented algorithm: /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
* test file: /spec/controllers/reputation_mock_web_server_hamer.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Hamer Algorithm ==&lt;br /&gt;
&lt;br /&gt;
The grading algorithm described in the paper is designed to provide a reward to reviewers who participate effectively by allocating a portion of the assignment mark to the review, with the review mark reflecting the quality of the grading. Here's an explanation of the algorithm:&lt;br /&gt;
&lt;br /&gt;
1. Review Allocation: Each reviewer is assigned a number of essays to grade. The paper suggests assigning at least five essays, with ten being ideal. Assuming each review takes 20 minutes, ten reviews can be completed in about three and a half hours.&lt;br /&gt;
&lt;br /&gt;
2. Grading Process:&lt;br /&gt;
* Once the reviewing is complete, grades are generated for each essay and weights are assigned to each reviewer.&lt;br /&gt;
* The essay grades are computed by averaging the individual grades from all the reviewers assigned to that essay.&lt;br /&gt;
* Initially, all reviewers are given equal weight in the averaging process.&lt;br /&gt;
* The algorithm assumes that some reviewers will perform better than others. It measures this by comparing the grades assigned by each reviewer to the averaged grades. The larger the difference between the assigned and averaged grades, the more out of step the reviewer is considered with the consensus view of the class.&lt;br /&gt;
* The algorithm adjusts the weighting of the reviewers based on this difference. Reviewers who are closer to the consensus view are given higher weights, while those who deviate significantly are given lower weights.&lt;br /&gt;
&lt;br /&gt;
3. Iterative Process:&lt;br /&gt;
* The calculation of grades and weights is an iterative process. Each time the grades are calculated, the weights need to be updated, and each change in the weights affects the grades.&lt;br /&gt;
* Convergence occurs quickly, typically requiring four to six iterations before a solution (a &amp;quot;fix-point&amp;quot;) is reached.&lt;br /&gt;
&lt;br /&gt;
4. Weight Adjustment:&lt;br /&gt;
* The weights assigned to reviewers are adjusted based on the difference between the assigned and averaged grades. Reviewers with larger discrepancies have their weights adjusted inversely proportional to this difference.&lt;br /&gt;
* To prevent excessively large weights, a logarithmic dampening function is applied, allowing weights to rise to twice the class average before further increases are awarded sparingly.&lt;br /&gt;
&lt;br /&gt;
5. Properties:&lt;br /&gt;
* The algorithm aims to identify and diminish the impact of &amp;quot;rogue&amp;quot; reviewers who may inject random or arbitrary grades into the peer assessment process.&lt;br /&gt;
* By adjusting reviewer weights based on their grading accuracy, the algorithm aims to improve the reliability of the grading process in the presence of such rogue reviewers.&lt;br /&gt;
&lt;br /&gt;
Overall, the algorithm seeks to balance the contributions of different reviewers based on the accuracy of their grading, ultimately aiming to produce reliable grades for each essay in a peer-assessment scenario.&lt;br /&gt;
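The iterative process described above can be sketched in a few lines of Ruby. This is a hypothetical illustration only — the marks matrix and iteration count are invented for demonstration, and it is not the Expertiza or PeerLogic implementation: grades are weighted averages of the marks, and weights are recomputed from each reviewer's mean squared deviation, with logarithmic dampening.&lt;br /&gt;

```ruby
# Hypothetical sketch of the iterative grade/weight fixpoint described above.
# Rows are reviewers, columns are essays; all values are invented.
marks = [
  [8.0, 4.0, 6.0],
  [7.0, 5.0, 6.0],
  [1.0, 9.0, 2.0] # a "rogue" reviewer, out of step with the class
]

# Initially, all reviewers are given equal weight.
weights = Array.new(marks.length, 1.0)

# Convergence typically takes four to six iterations.
6.times do
  # Essay grades: weighted average of the marks each essay received.
  grades = (0...marks.first.length).map do |essay|
    marks.each_index.sum { |r| weights[r] * marks[r][essay] } / weights.sum
  end

  # Each reviewer's mean squared deviation from the consensus grades.
  deviations = marks.map do |row|
    row.each_with_index.sum { |mark, essay| (mark - grades[essay])**2 } / row.length
  end

  # New weights are inversely proportional to the deviation, scaled so the
  # class average is 1, with logarithmic dampening above twice the average.
  avg_deviation = deviations.sum / deviations.length
  weights = deviations.map do |d|
    w = avg_deviation / d
    w <= 2 ? w : 2 + Math.log(w - 1)
  end
end

puts weights.map { |w| w.round(2) }.inspect
```

On this sample the rogue reviewer ends with the smallest weight, illustrating how the fixpoint diminishes the influence of out-of-step graders.&lt;br /&gt;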
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios ==&lt;br /&gt;
We assume 9 reviewers, each reviewing 4 submissions, to cover the following test scenarios:&lt;br /&gt;
* 3 cases where reviewers give credible scores (passing1, passing2, passing3)&lt;br /&gt;
* a case where a reviewer gives the maximum score (10) to every submission (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the minimum score (0) to every submission (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the median score (5) to every submission (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the same score to every submission (should be flagged)&lt;br /&gt;
&lt;br /&gt;
=== Object Creation ===&lt;br /&gt;
Below is the Input object for tests that cover all the above scenarios:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
INPUTS_new = {&lt;br /&gt;
    &amp;quot;submission1&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 9&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission2&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission3&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 7,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission4&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 6,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Expected Hamer Values ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
EXPECTED = {&lt;br /&gt;
    &amp;quot;Hamer&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 2.65,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 2.41,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 1.03,&lt;br /&gt;
        &amp;quot;incomplete_review&amp;quot;: 2.31,&lt;br /&gt;
        &amp;quot;max_incomplete&amp;quot;: 2.57,&lt;br /&gt;
        &amp;quot;min_incomplete&amp;quot;: 2.48,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;:1.58,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 2.17,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 1.73,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 1.23,&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Objective 3: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values. ==&lt;br /&gt;
&lt;br /&gt;
As seen above, the values returned by the reputation server do not match the expected values. Hence, we concluded that the PeerLogic web service is implemented incorrectly. In this phase, we implemented the algorithm in Ruby as a function in the controller file /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
&lt;br /&gt;
=== Changes made in implementation ===&lt;br /&gt;
* Coded the algorithm in Ruby as a controller method.&lt;br /&gt;
* Included handling for nil values in the algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Code Snippet ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'json'&lt;br /&gt;
require 'uri'&lt;br /&gt;
require 'net/http'&lt;br /&gt;
require 'openssl'&lt;br /&gt;
require 'base64'&lt;br /&gt;
&lt;br /&gt;
# Expertiza allows student work to be peer-reviewed, since peers can provide&lt;br /&gt;
# more feedback than the instructor can.&lt;br /&gt;
# However, if we want to assure that all students receive competent feedback,&lt;br /&gt;
# or even use peer-assigned grades,&lt;br /&gt;
# we need a way to judge which peer reviewers are most credible. The solution&lt;br /&gt;
# is the reputation system.&lt;br /&gt;
# Reputation systems have been deployed as web services, peer-review&lt;br /&gt;
# researchers will be able to use them to calculate scores on assignments,&lt;br /&gt;
# both past and present (past data can be used to tune the algorithms).&lt;br /&gt;
#&lt;br /&gt;
# This file is the controller to calculate the reputation scores.&lt;br /&gt;
# A 'reputation' measures how close a reviewer's scores are to other reviewers'&lt;br /&gt;
# scores.&lt;br /&gt;
# This controller implements the calculation of reputation scores.&lt;br /&gt;
class ReputationWebServiceController &amp;lt; ApplicationController&lt;br /&gt;
  include AuthorizationHelper&lt;br /&gt;
&lt;br /&gt;
  # Method: action_allowed&lt;br /&gt;
  # This method checks if the currently authenticated user has the authorization&lt;br /&gt;
  # to perform certain actions&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   true if the user has privileges to perform the action else returns false&lt;br /&gt;
  def action_allowed?&lt;br /&gt;
    current_user_has_ta_privileges?&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_max_question_score&lt;br /&gt;
  # This method receives a set of answers and gets the maximum question score&lt;br /&gt;
  # Params&lt;br /&gt;
  #   answers: set of answers&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   the max_question_score of the first answer's questionnaire, or 1 on error&lt;br /&gt;
  def get_max_question_score(answers)&lt;br /&gt;
    begin&lt;br /&gt;
      answers.first.question.questionnaire.max_question_score&lt;br /&gt;
    rescue StandardError&lt;br /&gt;
      1&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_valid_answers_for_response&lt;br /&gt;
  # This method receives response and filters the valid answers list of the&lt;br /&gt;
  # response ID&lt;br /&gt;
  # Params&lt;br /&gt;
  #   response&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   set of valid answers (returns nil if empty)&lt;br /&gt;
  def get_valid_answers_for_response(response)&lt;br /&gt;
    answers = Answer.where(response_id: response.id)&lt;br /&gt;
    valid_answer = answers.select { |answer| (answer.question.type == 'Criterion') &amp;amp;&amp;amp; !answer.answer.nil? }&lt;br /&gt;
    valid_answer.empty? ? nil : valid_answer&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: calculate_peer_review_grade&lt;br /&gt;
  # This method calculates a cumulative review grade with respect to the set of valid answers&lt;br /&gt;
  # Params&lt;br /&gt;
  #   valid_answer: valid answer to get weight of the answer's question&lt;br /&gt;
  #   max_question_score: used to calculate maximum score for peer review grade&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   peer_review_grade&lt;br /&gt;
  def calculate_peer_review_grade(valid_answer, max_question_score)&lt;br /&gt;
    weighted_score_sum = valid_answer.map { |answer| answer.answer * answer.question.weight }.inject(:+)&lt;br /&gt;
    question_weight_sum = valid_answer.sum { |answer| answer.question.weight }&lt;br /&gt;
    peer_review_grade = 100.0 * weighted_score_sum / (question_weight_sum * max_question_score)&lt;br /&gt;
    peer_review_grade.round(4)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_peer_reviews_for_responses&lt;br /&gt;
  # This method calculates the peer review grade for each valid response&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reviewer_id: used to create respective element in the peer_review_grades_list&lt;br /&gt;
  #   team_id: used to create respective element in the peer_review_grades_list&lt;br /&gt;
  #   valid_response: to get the valid answer for each valid response&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   peer_review_grades_list&lt;br /&gt;
  def get_peer_reviews_for_responses(reviewer_id, team_id, valid_response)&lt;br /&gt;
    peer_review_grades_list = []&lt;br /&gt;
    valid_response.each do |response|&lt;br /&gt;
      valid_answer = get_valid_answers_for_response(response)&lt;br /&gt;
      next if valid_answer.nil?&lt;br /&gt;
&lt;br /&gt;
      review_grade = calculate_peer_review_grade(valid_answer, get_max_question_score(valid_answer))&lt;br /&gt;
      peer_review_grades_list &amp;lt;&amp;lt; [reviewer_id, team_id, review_grade]&lt;br /&gt;
    end&lt;br /&gt;
    peer_review_grades_list&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_peer_reviews&lt;br /&gt;
  # This method retrieves all the reviews for the submissions&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: used to retrieve response map&lt;br /&gt;
  #   round_num: used to retrieve round_num for the valid response&lt;br /&gt;
  #   has_topic: to get the topic condition&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: which corresponds to the return of&lt;br /&gt;
  #     get_peer_reviews_for_responses method and appended to the raw_data_array&lt;br /&gt;
  def get_peer_reviews(assignment_id_list, round_num, has_topic)&lt;br /&gt;
    raw_data_array = []&lt;br /&gt;
    ReviewResponseMap.where('reviewed_object_id in (?) and calibrate_to = ?', assignment_id_list, false).each do |response_map|&lt;br /&gt;
      reviewer = response_map.reviewer.user&lt;br /&gt;
      team = AssignmentTeam.find(response_map.reviewee_id)&lt;br /&gt;
      topic_condition = ((has_topic &amp;amp;&amp;amp; (SignedUpTeam.where(team_id: team.id).first.is_waitlisted == false)) || !has_topic)&lt;br /&gt;
      last_valid_response = response_map.response.select { |r| r.round == round_num }.max&lt;br /&gt;
      valid_response = [last_valid_response] unless last_valid_response.nil?&lt;br /&gt;
      if (topic_condition == true) &amp;amp;&amp;amp; !valid_response.nil? &amp;amp;&amp;amp; !valid_response.empty?&lt;br /&gt;
        raw_data_array += get_peer_reviews_for_responses(reviewer.id, team.id, valid_response)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
    raw_data_array&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_ids_list&lt;br /&gt;
  # This method maps each object to the corresponding object's ID&lt;br /&gt;
  # Params&lt;br /&gt;
  #   tables: any table&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   id in the tables&lt;br /&gt;
  def get_ids_list(tables)&lt;br /&gt;
    tables.map(&amp;amp;:id)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_scores&lt;br /&gt;
  # This method gets the quiz score of each participant for respective reviewee&lt;br /&gt;
  # Params&lt;br /&gt;
  #   team_ids: list of team IDs&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: which is a list of participant, reviewee and the participant's quiz score&lt;br /&gt;
  def get_scores(team_ids)&lt;br /&gt;
    quiz_questionnaires = QuizQuestionnaire.where('instructor_id in (?)', team_ids)&lt;br /&gt;
    quiz_questionnaire_ids = get_ids_list(quiz_questionnaires)&lt;br /&gt;
    raw_data_array = []&lt;br /&gt;
    QuizResponseMap.where('reviewed_object_id in (?)', quiz_questionnaire_ids).each do |response_map|&lt;br /&gt;
      quiz_score = response_map.quiz_score&lt;br /&gt;
      participant = Participant.find(response_map.reviewer_id)&lt;br /&gt;
      raw_data_array &amp;lt;&amp;lt; [participant.user_id, response_map.reviewee_id, quiz_score]&lt;br /&gt;
    end&lt;br /&gt;
    raw_data_array&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_quiz_score&lt;br /&gt;
  # This method gets the quiz score of assignments&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment IDs&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: returned by get_scores method, which is a list of participant,&lt;br /&gt;
  #     reviewee and the participant's quiz score&lt;br /&gt;
  def get_quiz_score(assignment_id_list)&lt;br /&gt;
    teams = AssignmentTeam.where('parent_id in (?)', assignment_id_list)&lt;br /&gt;
    team_ids = get_ids_list(teams)&lt;br /&gt;
    get_scores(team_ids)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_body&lt;br /&gt;
  # This method generates json body for the peer reviews and quiz scores&lt;br /&gt;
  # Params&lt;br /&gt;
  #   results: list of grades with corresponding team/participant ID,&lt;br /&gt;
  #     reviewee ID and their score&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: returns the formatted body after sorting the hash&lt;br /&gt;
  def generate_json_body(results)&lt;br /&gt;
    request_body = {}&lt;br /&gt;
    results.each_with_index do |record, _index|&lt;br /&gt;
      request_body['submission' + record[1].to_s] = {} unless request_body.key?('submission' + record[1].to_s)&lt;br /&gt;
      request_body['submission' + record[1].to_s]['stu' + record[0].to_s] = record[2]&lt;br /&gt;
    end&lt;br /&gt;
    # sort the 2-dimension hash and return the sorted result&lt;br /&gt;
    request_body.each { |k, v| request_body[k] = v.sort.to_h }&lt;br /&gt;
    request_body.sort.to_h&lt;br /&gt;
  end&lt;br /&gt;
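To make the transformation concrete, here is a hypothetical standalone sketch of the shape generate_json_body produces (the record values are invented): each [reviewer_id, reviewee_id, score] triple becomes a stu-prefixed entry nested under its submission-prefixed key, with both levels sorted.&lt;br /&gt;

```ruby
# Hypothetical illustration of the nested hash generate_json_body builds.
# Each record is [reviewer_or_participant_id, reviewee_id, score];
# the values below are invented for demonstration.
results = [[15, 4, 92.0], [12, 3, 87.5], [12, 4, 78.25]]

request_body = {}
results.each do |record|
  submission_key = "submission#{record[1]}"
  request_body[submission_key] ||= {}
  request_body[submission_key]["stu#{record[0]}"] = record[2]
end

# Sort the inner hashes by student key, then the outer hash by submission key.
request_body = request_body.transform_values { |v| v.sort.to_h }.sort.to_h

puts request_body.inspect
```

Note that Hash#sort returns a new array of pairs, so the sorted result must be reassigned (or returned directly) to take effect.&lt;br /&gt;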
&lt;br /&gt;
  # Method: generate_json_for_peer_reviews&lt;br /&gt;
  # This method retrieves all the peer reviews associated with&lt;br /&gt;
  # the assignment id list by calling the get_peer_reviews method.&lt;br /&gt;
  # It then formats the peer-review list in JSON.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment ids to get quiz scores for&lt;br /&gt;
  #   round_num: round number of the review&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: request body populated with the formatted peer review data.&lt;br /&gt;
  def generate_json_for_peer_reviews(assignment_id_list, round_num = 2)&lt;br /&gt;
    has_topic = !SignUpTopic.where(assignment_id: assignment_id_list[0]).empty?&lt;br /&gt;
&lt;br /&gt;
    peer_reviews_list = get_peer_reviews(assignment_id_list, round_num, has_topic)&lt;br /&gt;
    request_body = generate_json_body(peer_reviews_list)&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_for_quiz_scores&lt;br /&gt;
  # This method accepts a list of assignment ids as an argument.&lt;br /&gt;
  # It then calls the get_quiz_score method on the list to get&lt;br /&gt;
  # maps of teams and scores for the given assignments.&lt;br /&gt;
  # The map is then formatted into JSON.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment ids to get quiz scores for&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: request body populated with quiz scores&lt;br /&gt;
  def generate_json_for_quiz_scores(assignment_id_list)&lt;br /&gt;
    participant_reviewee_map = get_quiz_score(assignment_id_list)&lt;br /&gt;
    request_body = generate_json_body(participant_reviewee_map)&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: client&lt;br /&gt;
  # This method is called when the url reputation_web_service/client&lt;br /&gt;
  # is hit using GET method.&lt;br /&gt;
  # This renders the client.html.erb&lt;br /&gt;
  # It also populates the instance variables to be used in the views&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def client&lt;br /&gt;
    @max_assignment_id = Assignment.last.id&lt;br /&gt;
    @assignment = Assignment.find(flash[:assignment_id]) rescue nil&lt;br /&gt;
    @another_assignment = Assignment.find(flash[:another_assignment_id]) rescue nil&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: update_participants_reputation&lt;br /&gt;
  # This method accepts the response body in the JSON format.&lt;br /&gt;
  # It then parses the JSON and updates the reputation scores of the&lt;br /&gt;
  # participants in the list.&lt;br /&gt;
  # If the algorithm is neither Hamer nor Lauw, the update step is skipped.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reputation_response: The response from the reputation web service&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def update_participants_reputation(reputation_response)&lt;br /&gt;
    JSON.parse(reputation_response.body.to_s).each do |reputation_algorithm, user_reputation_list|&lt;br /&gt;
      next unless %w[Hamer Lauw].include?(reputation_algorithm)&lt;br /&gt;
&lt;br /&gt;
      user_reputation_list.each do |user_id, reputation|&lt;br /&gt;
        Participant.find_by(user_id: user_id).update(reputation_algorithm.to_sym =&amp;gt; reputation) unless /leniency/ =~ user_id.to_s&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: process_response_body&lt;br /&gt;
  # This method gets the control after receiving a response from the server.&lt;br /&gt;
  # It receives the response body as an argument&lt;br /&gt;
  # It updates the instance variables related to the response.&lt;br /&gt;
  # It then calls the update_participants_reputation to update the reputation&lt;br /&gt;
  # scores received in the response body.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reputation_response: The response from the reputation web service&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def process_response_body(reputation_response)&lt;br /&gt;
    flash[:response] = reputation_response&lt;br /&gt;
    flash[:response_body] = reputation_response.body&lt;br /&gt;
    update_participants_reputation(reputation_response)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_expert_grades&lt;br /&gt;
  # It prepends the request body with the expert grades pertaining&lt;br /&gt;
  # to the default wiki contribution case of 754.&lt;br /&gt;
  # It receives the request body as an argument and prepends it&lt;br /&gt;
  # Params&lt;br /&gt;
  #   body: The request body to add the expert grades to&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   body prepended with the expert grades&lt;br /&gt;
  def add_expert_grades(body)&lt;br /&gt;
    flash[:additional_info] = 'add expert grades'&lt;br /&gt;
    case params[:assignment_id]&lt;br /&gt;
    when '754' # expert grades of Wiki contribution (754)&lt;br /&gt;
      body.prepend('&amp;quot;expert_grades&amp;quot;: {&amp;quot;submission25030&amp;quot;:95,&amp;quot;submission25031&amp;quot;:92,&amp;quot;submission25033&amp;quot;:88,&amp;quot;submission25034&amp;quot;:98,&amp;quot;submission25035&amp;quot;:100,&amp;quot;submission25037&amp;quot;:95,&amp;quot;submission25038&amp;quot;:95,&amp;quot;submission25039&amp;quot;:93,&amp;quot;submission25040&amp;quot;:96,&amp;quot;submission25041&amp;quot;:90,&amp;quot;submission25042&amp;quot;:100,&amp;quot;submission25046&amp;quot;:95,&amp;quot;submission25049&amp;quot;:90,&amp;quot;submission25050&amp;quot;:88,&amp;quot;submission25053&amp;quot;:91,&amp;quot;submission25054&amp;quot;:96,&amp;quot;submission25055&amp;quot;:94,&amp;quot;submission25059&amp;quot;:96,&amp;quot;submission25071&amp;quot;:85,&amp;quot;submission25082&amp;quot;:100,&amp;quot;submission25086&amp;quot;:95,&amp;quot;submission25097&amp;quot;:90,&amp;quot;submission25098&amp;quot;:85,&amp;quot;submission25102&amp;quot;:97,&amp;quot;submission25103&amp;quot;:94,&amp;quot;submission25105&amp;quot;:98,&amp;quot;submission25114&amp;quot;:95,&amp;quot;submission25115&amp;quot;:94},')&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_quiz_scores&lt;br /&gt;
  # It gets the assignment id list and generates the JSON of quiz scores for&lt;br /&gt;
  # those assignments.&lt;br /&gt;
  # Finally, the processed quiz string is prepended to the request body, received&lt;br /&gt;
  # as an argument, and the body is returned to prepare_request_body.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   body: The request body to add the quiz scores to&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   body prepended with the quiz scores&lt;br /&gt;
  def add_quiz_scores(body)&lt;br /&gt;
    flash[:additional_info] = 'add quiz scores'&lt;br /&gt;
    assignment_id_list_quiz = get_assignment_id_list(params[:assignment_id].to_i, params[:another_assignment_id].to_i)&lt;br /&gt;
    quiz_str =  generate_json_for_quiz_scores(assignment_id_list_quiz).to_json&lt;br /&gt;
    quiz_str[0] = '' # remove first {&lt;br /&gt;
    quiz_str.prepend('&amp;quot;quiz_scores&amp;quot;:{') # add quiz_scores tag&lt;br /&gt;
    quiz_str += ','&lt;br /&gt;
    quiz_str = quiz_str.gsub('&amp;quot;N/A&amp;quot;', '20.0') # replace N/A values with 20&lt;br /&gt;
    body.prepend(quiz_str)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_hamer_reputation_values&lt;br /&gt;
  # This method sets the flash variable flash[:additional_info].&lt;br /&gt;
  # This method is called by the prepare_request_body method&lt;br /&gt;
  # when params receive instruction through the corresponding view's checkbox.&lt;br /&gt;
  # THIS METHOD IS NOT IMPLEMENTED&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_hamer_reputation_values&lt;br /&gt;
    flash[:additional_info] = 'add initial hamer reputation values'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_lauw_reputation_values&lt;br /&gt;
  # This method sets the flash variable flash[:additional_info].&lt;br /&gt;
  # This method is called by the prepare_request_body method&lt;br /&gt;
  # when params receive instruction through the corresponding view's checkbox.&lt;br /&gt;
  # THIS METHOD IS NOT IMPLEMENTED&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_lauw_reputation_values&lt;br /&gt;
    flash[:additional_info] = 'add initial lauw reputation values'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_assignment_id_list&lt;br /&gt;
  # This method on receipt of individual assignment IDs returns a list with all&lt;br /&gt;
  # the assignment IDs appended into a data structure&lt;br /&gt;
  # This function accepts 2 arguments, with the second argument being optional,&lt;br /&gt;
  # and returns the list assignment_id_list&lt;br /&gt;
  # If the second argument is 0, it is not appended to the list.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_one: first assignment id (required)&lt;br /&gt;
  #   assignment_id_two: second assignment id (optional)&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   assignment_id_list: list containing one or two assignment ids&lt;br /&gt;
  def get_assignment_id_list(assignment_id_one, assignment_id_two = 0)&lt;br /&gt;
    assignment_id_list = []&lt;br /&gt;
    assignment_id_list &amp;lt;&amp;lt; assignment_id_one&lt;br /&gt;
    assignment_id_list &amp;lt;&amp;lt; assignment_id_two unless assignment_id_two.zero?&lt;br /&gt;
    assignment_id_list&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_flash_messages&lt;br /&gt;
  # This method sets the flash messages to pass on to the next request, i.e.,&lt;br /&gt;
  # the request redirected to the client&lt;br /&gt;
  # Params&lt;br /&gt;
  #   post_req: This contains the entire post_req that needs to be sent to the reputation&lt;br /&gt;
  #     webservice&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_flash_messages(post_req)&lt;br /&gt;
    flash[:assignment_id] = params[:assignment_id]&lt;br /&gt;
    flash[:round_num] = params[:round_num]&lt;br /&gt;
    flash[:algorithm] = params[:algorithm]&lt;br /&gt;
    flash[:another_assignment_id] = params[:another_assignment_id]&lt;br /&gt;
    flash[:request_body] = post_req.body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_additional_info_details&lt;br /&gt;
  # This method sets the additional info details based on the options&lt;br /&gt;
  # selected in the additional information section. We populate the request&lt;br /&gt;
  # based on the selections&lt;br /&gt;
  # Params&lt;br /&gt;
  #   post_req: This contains the entire post_req that needs to be sent to the reputation&lt;br /&gt;
  #     webservice&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_additional_info_details(post_req)&lt;br /&gt;
    if params[:checkbox][:expert_grade] == 'Add expert grades'&lt;br /&gt;
      add_expert_grades(post_req.body)&lt;br /&gt;
    elsif params[:checkbox][:hamer] == 'Add initial Hamer reputation values'&lt;br /&gt;
      add_hamer_reputation_values&lt;br /&gt;
    elsif params[:checkbox][:lauw] == 'Add initial Lauw reputation values'&lt;br /&gt;
      add_lauw_reputation_values&lt;br /&gt;
    elsif params[:checkbox][:quiz] == 'Add quiz scores'&lt;br /&gt;
      add_quiz_scores(post_req.body)&lt;br /&gt;
    else&lt;br /&gt;
      flash[:additional_info] = ''&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: prepare_request_body&lt;br /&gt;
  # This method is responsible for preparing the request body in a proper format&lt;br /&gt;
  # to send to the server. It populates the assignment scores and peer review&lt;br /&gt;
  # scores. It also populates the flash messages to send to the next request&lt;br /&gt;
  # It finally sends the prepared request body back to the send_post_request&lt;br /&gt;
  # method.&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def prepare_request_body&lt;br /&gt;
    reputation_web_service_path = URI.parse(WEBSERVICE_CONFIG['reputation_web_service_url']).path&lt;br /&gt;
    post_req = Net::HTTP::Post.new(reputation_web_service_path, { 'Content-Type' =&amp;gt; 'application/json', 'charset' =&amp;gt; 'utf-8' })&lt;br /&gt;
    curr_assignment_id = (params[:assignment_id].empty? ? '754' : params[:assignment_id])&lt;br /&gt;
    assignment_id_list_peers = get_assignment_id_list(curr_assignment_id, params[:another_assignment_id].to_i)&lt;br /&gt;
&lt;br /&gt;
    post_req.body = generate_json_for_peer_reviews(assignment_id_list_peers, params[:round_num].to_i).to_json&lt;br /&gt;
&lt;br /&gt;
    post_req.body[0] = '' # remove the first '{'&lt;br /&gt;
    add_additional_info_details post_req&lt;br /&gt;
    post_req.body.prepend('{')&lt;br /&gt;
    add_flash_messages post_req&lt;br /&gt;
    post_req&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: send_post_request&lt;br /&gt;
  # This method calls the prepare_request_body function to get a prepared&lt;br /&gt;
  # request body in proper format to send to the server.&lt;br /&gt;
  # It populates the assignment scores and peer review&lt;br /&gt;
  # scores. It also populates the flash messages to send to the next request&lt;br /&gt;
  # We redirect to the client url to display the results.&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def send_post_request&lt;br /&gt;
    post_req = prepare_request_body&lt;br /&gt;
    reputation_web_service_hostname = URI.parse(WEBSERVICE_CONFIG['reputation_web_service_url']).host&lt;br /&gt;
    reputation_response = Net::HTTP.new(reputation_web_service_hostname).start { |http| http.request(post_req) }&lt;br /&gt;
    if %w[400 500].include?(reputation_response.code)&lt;br /&gt;
      flash[:error] = 'Post Request Failed'&lt;br /&gt;
    else&lt;br /&gt;
      process_response_body(reputation_response)&lt;br /&gt;
    end&lt;br /&gt;
    redirect_to action: 'client'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
# Method: calculate_reputation_score&lt;br /&gt;
# This method calculates the reputation scores for each reviewer based on the provided review marks.&lt;br /&gt;
# It first calculates the average weighted grades per reviewer and the delta R values.&lt;br /&gt;
# Next, it calculates the weight prime values based on the delta R values.&lt;br /&gt;
# Finally, it calculates the reputation weights for each reviewer using the weight prime values.&lt;br /&gt;
#&lt;br /&gt;
# Params&lt;br /&gt;
#   - reviews: a two-dimensional array of marks, one row per reviewer, with&lt;br /&gt;
#     nil entries for submissions the reviewer did not grade&lt;br /&gt;
#&lt;br /&gt;
# Returns&lt;br /&gt;
#   An array of reputation scores, one score per reviewer, indicating their reputation in the system.&lt;br /&gt;
def calculate_reputation_score(reviews)&lt;br /&gt;
&lt;br /&gt;
  # Initialize arrays to store intermediate values&lt;br /&gt;
  grades = []&lt;br /&gt;
  delta_r = []&lt;br /&gt;
  weight_prime = []&lt;br /&gt;
  weight = []&lt;br /&gt;
&lt;br /&gt;
  # Calculate Average Weighted Grades per Reviewer&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    assignment_grade_average = reviewer_marks_without_nil.sum.to_f / reviewer_marks_without_nil.length&lt;br /&gt;
    grades &amp;lt;&amp;lt; assignment_grade_average&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate delta R&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    reviewer_delta_r = 0&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    reviewer_marks_without_nil.each_with_index do |grade, student_index|&lt;br /&gt;
      reviewer_delta_r += (grade - grades[student_index]) ** 2&lt;br /&gt;
    end&lt;br /&gt;
    delta_r &amp;lt;&amp;lt; reviewer_delta_r / reviewer_marks_without_nil.length&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate weight prime&lt;br /&gt;
  average_delta_r = delta_r.sum / delta_r.length.to_f&lt;br /&gt;
&lt;br /&gt;
  delta_r.each do |reviewer_delta_r|&lt;br /&gt;
    weight_prime &amp;lt;&amp;lt; average_delta_r / reviewer_delta_r&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate reputation weight&lt;br /&gt;
  weight_prime.each do |reviewer_weight_prime|&lt;br /&gt;
    if reviewer_weight_prime &amp;lt;= 2&lt;br /&gt;
      weight &amp;lt;&amp;lt; reviewer_weight_prime.round(2)&lt;br /&gt;
    else&lt;br /&gt;
      weight &amp;lt;&amp;lt; (2 + Math.log(reviewer_weight_prime - 1)).round(2)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Return the reputation weights&lt;br /&gt;
  weight&lt;br /&gt;
end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
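&lt;br /&gt;
As a quick sanity check, the reimplemented method can also be exercised outside Rails. The sketch below copies the core logic of calculate_reputation_score into a standalone function and runs it on a small hand-made review matrix (the data here is hypothetical, not taken from Expertiza):&lt;br /&gt;

```ruby
# Standalone copy of the core calculate_reputation_score logic, for
# experimenting outside the Rails controller. reviews[i] holds the marks
# reviewer i gave; nil marks (incomplete reviews) are skipped via compact.
def calculate_reputation_score(reviews)
  grades = reviews.map do |marks|
    marks = marks.compact             # drop nil (missing) marks
    marks.sum.to_f / marks.length     # reviewer's average mark
  end

  delta_r = reviews.map do |marks|
    marks = marks.compact
    sq = marks.each_with_index.sum { |g, j| (g - grades[j])**2 }
    sq / marks.length
  end

  avg_delta_r = delta_r.sum / delta_r.length
  weight_prime = delta_r.map { |d| avg_delta_r / d }

  # Log-dampen weights above 2 so no single reviewer dominates
  weight_prime.map { |w| (w <= 2 ? w : 2 + Math.log(w - 1)).round(2) }
end

# Four hypothetical reviewers, each grading three submissions
reviews = [[10, 10, 10], [1, 1, 1], [5, 5, 5], [8, 6, 7]]
p calculate_reputation_score(reviews)   # => [0.65, 0.71, 1.69, 2.09]
```

Note that, as in the controller code, grades is indexed positionally inside the delta R loop, so the matrix needs at least as many rows (reviewers) as columns (submissions).&lt;br /&gt;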
&lt;br /&gt;
== Objective 4: Validate the accuracy of the newly implemented Hamer algorithm ==&lt;br /&gt;
&lt;br /&gt;
We test the newly implemented Hamer algorithm function with our scenarios and verify if they match the expected values. &lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe ReputationWebServiceController do&lt;br /&gt;
    it &amp;quot;should calculate correct Hamer calculation&amp;quot; do&lt;br /&gt;
      weights = ReputationWebServiceController.new.calculate_reputation_score(reviews)&lt;br /&gt;
      keys = [&amp;quot;maxtoall&amp;quot;, &amp;quot;mintoall&amp;quot;, &amp;quot;mediantoall&amp;quot;, &amp;quot;incomplete_review&amp;quot;, &amp;quot;sametoall&amp;quot;, &amp;quot;passing1&amp;quot;, &amp;quot;passing2&amp;quot;, &amp;quot;passing3&amp;quot;]&lt;br /&gt;
      rounded_weights = weights.map { |w| w.round(1) }&lt;br /&gt;
      result_hash = keys.zip(rounded_weights).to_h&lt;br /&gt;
      expect(result_hash).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results ===&lt;br /&gt;
&lt;br /&gt;
[[File:Results hamer new.jpeg]]&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
As we can see, the results are skewed towards lower values. This is because we included nil values and assumed them to be zero, and that assumption pulls the scores downward. Treating a missing review as the median (or a random plausible score) instead would produce less skew.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this project, we aimed to test the accuracy of the Hamer algorithm used for assessing the credibility of reviewers in a peer assessment system. We began by developing code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values. These scenarios covered various review scenarios, including cases where reviewers provided extreme scores.&lt;br /&gt;
&lt;br /&gt;
Upon testing the original reputation web server's Hamer values, we found discrepancies between the expected values and the values returned by the web service. This led us to conclude that the webservice implementation was incorrect.&lt;br /&gt;
&lt;br /&gt;
As a result, we proceeded to reimplement the Hamer algorithm in Ruby, incorporating adjustments to handle nil values appropriately. Subsequently, we validated the accuracy of the newly implemented algorithm using the same testing scenarios. While the results initially showed a skew towards lower values due to our treatment of nil values, we acknowledge the need for further refinement to handle these cases more effectively.&lt;br /&gt;
&lt;br /&gt;
In conclusion, this project highlights the importance of rigorous testing and implementation adjustments in ensuring the reliability of algorithms used in peer assessment systems. Moving forward, we recommend further refinements and validations to enhance the accuracy and robustness of the Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
==GitHub Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [https://github.com/expertiza/expertiza/pull/2778 here]&lt;br /&gt;
&lt;br /&gt;
Link to Github Project page: [https://github.com/users/Prachit99/projects/1 here]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [https://drive.google.com/file/d/1gZ5iDgqMW3COOT9Uw-_yhJlwJLZDxz-s/view?usp=sharing here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Results_hamer_peerlogic.jpg&amp;diff=154658</id>
		<title>File:Results hamer peerlogic.jpg</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Results_hamer_peerlogic.jpg&amp;diff=154658"/>
		<updated>2024-03-25T04:21:34Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154657</id>
		<title>CSC/ECE 517 Spring 2024 - E2412. Testing for hamer.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154657"/>
		<updated>2024-03-25T04:15:27Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* GitHub Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 Program 3: First OSS project E2412. Testing for hamer.rb&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
&lt;br /&gt;
The practice of using student feedback on assignments as a grading tool is gaining traction among university professors and courses. This approach not only saves instructors and teaching assistants considerable time but also fosters a deeper understanding of assignments among students as they evaluate their peers' work. However, there is a concern that some students may not take their reviewing responsibilities seriously, potentially skewing the grading process by assigning extreme scores such as 100 or 0 arbitrarily. To address this issue, the Hamer algorithm was developed to assess the credibility and accuracy of reviewers. It generates reputation weights for each reviewer, which instructors can use to gauge their reliability or incorporate into grading calculations. Our goal here is to test this Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values.&lt;br /&gt;
* Verify the correctness of the reputation web server's Hamer values by accessing the URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.&lt;br /&gt;
* Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values.&lt;br /&gt;
* Validate the accuracy of the newly implemented Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* reimplemented algorithm: /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
* test file: /spec/controllers/reputation_mock_web_server_hamer.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Hamer Algorithm ==&lt;br /&gt;
&lt;br /&gt;
The grading algorithm described in the paper is designed to provide a reward to reviewers who participate effectively by allocating a portion of the assignment mark to the review, with the review mark reflecting the quality of the grading. Here's an explanation of the algorithm:&lt;br /&gt;
&lt;br /&gt;
1. Review Allocation: Each reviewer is assigned a number of essays to grade. The paper suggests assigning at least five essays, with ten being ideal. Assuming each review takes 20 minutes, ten reviews can be completed in about three and a half hours.&lt;br /&gt;
&lt;br /&gt;
2. Grading Process:&lt;br /&gt;
* Once the reviewing is complete, grades are generated for each essay and weights are assigned to each reviewer.&lt;br /&gt;
* The essay grades are computed by averaging the individual grades from all the reviewers assigned to that essay.&lt;br /&gt;
* Initially, all reviewers are given equal weight in the averaging process.&lt;br /&gt;
* The algorithm assumes that some reviewers will perform better than others. It measures this by comparing the grades assigned by each reviewer to the averaged grades. The larger the difference between the assigned and averaged grades, the more out of step the reviewer is considered with the consensus view of the class.&lt;br /&gt;
* The algorithm adjusts the weighting of the reviewers based on this difference. Reviewers who are closer to the consensus view are given higher weights, while those who deviate significantly are given lower weights.&lt;br /&gt;
&lt;br /&gt;
3. Iterative Process:&lt;br /&gt;
* The calculation of grades and weights is an iterative process. Each time the grades are calculated, the weights need to be updated, and each change in the weights affects the grades.&lt;br /&gt;
* Convergence occurs quickly, typically requiring four to six iterations before a solution (a &amp;quot;fix-point&amp;quot;) is reached.&lt;br /&gt;
&lt;br /&gt;
4. Weight Adjustment:&lt;br /&gt;
* The weights assigned to reviewers are adjusted based on the difference between the assigned and averaged grades. Reviewers with larger discrepancies have their weights adjusted inversely proportional to this difference.&lt;br /&gt;
* To prevent excessively large weights, a logarithmic dampening function is applied, allowing weights to rise to twice the class average, after which further increases are granted only sparingly.&lt;br /&gt;
&lt;br /&gt;
5. Properties:&lt;br /&gt;
* The algorithm aims to identify and diminish the impact of &amp;quot;rogue&amp;quot; reviewers who may inject random or arbitrary grades into the peer assessment process.&lt;br /&gt;
* By adjusting reviewer weights based on their grading accuracy, the algorithm aims to improve the reliability of the grading process in the presence of such rogue reviewers.&lt;br /&gt;
&lt;br /&gt;
Overall, the algorithm seeks to balance the contributions of different reviewers based on the accuracy of their grading, ultimately aiming to produce reliable grades for each essay in a peer assessment scenario.&lt;br /&gt;
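&lt;br /&gt;
The iterative scheme above can be sketched in a few lines of Ruby. This is a simplified illustration rather than Expertiza code; the function name hamer_fixpoint, the fixed iteration count, and the 1e-9 guard against division by zero are all our own assumptions:&lt;br /&gt;

```ruby
# Simplified sketch of the iterative grade/weight fix-point described above.
# reviews[i][j] = grade reviewer i gave essay j (hypothetical data).
def hamer_fixpoint(reviews, iterations: 6)
  num_reviewers = reviews.length
  num_essays    = reviews.first.length
  weights       = Array.new(num_reviewers, 1.0)  # start with equal weights
  essay_grades  = nil

  iterations.times do
    # Essay grades: weighted average of the reviewers' marks
    essay_grades = (0...num_essays).map do |j|
      (0...num_reviewers).sum { |i| weights[i] * reviews[i][j] } / weights.sum
    end
    # Each reviewer's mean squared deviation from the consensus grades
    deviations = reviews.map do |marks|
      marks.each_with_index.sum { |g, j| (g - essay_grades[j])**2 } / num_essays
    end
    # New weights: inversely proportional to deviation, log-dampened above 2
    avg_dev = deviations.sum / num_reviewers
    weights = deviations.map do |d|
      w = avg_dev / (d + 1e-9)                   # guard against zero deviation
      w <= 2 ? w : 2 + Math.log(w - 1)
    end
  end
  [essay_grades, weights]
end

grades, weights = hamer_fixpoint([[10, 9, 8], [9, 9, 8], [1, 1, 1]])
# The "rogue" third reviewer ends up with a much lower weight than the others.
```

In Expertiza's pipeline this calculation is delegated to the reputation web service rather than run in-process; the sketch only illustrates the convergence behaviour the paper describes.&lt;br /&gt;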
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios ==&lt;br /&gt;
We assumed 9 reviewers, each reviewing 4 submissions, to cover the following test scenarios:&lt;br /&gt;
* 3 cases where reviewers give credible scores (passing1, passing2, passing3)&lt;br /&gt;
* a case where a reviewer gives the max score (10) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the min score (1) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the median score (5) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the same score to all submissions (should be flagged)&lt;br /&gt;
&lt;br /&gt;
=== Object Creation ===&lt;br /&gt;
Below is the Input object for tests that cover all the above scenarios:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
INPUTS_new = {&lt;br /&gt;
    &amp;quot;submission1&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 9&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission2&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission3&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 7,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission4&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 6,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Expected Hamer Values ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
EXPECTED = {&lt;br /&gt;
    &amp;quot;Hamer&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 2.65,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 2.41,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 1.03,&lt;br /&gt;
        &amp;quot;incomplete_review&amp;quot;: 2.31,&lt;br /&gt;
        &amp;quot;max_incomplete&amp;quot;: 2.57,&lt;br /&gt;
        &amp;quot;min_incomplete&amp;quot;: 2.48,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;:1.58,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 2.17,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 1.73,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 1.23,&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
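&lt;br /&gt;
The INPUTS object above is keyed by submission, while the reimplemented calculate_reputation_score expects one array of marks per reviewer, with nil for reviews that were never submitted. A small pivot helper, shown here as a hypothetical sketch (reviews_matrix is not part of the project code), illustrates the conversion:&lt;br /&gt;

```ruby
require 'json'

# Hypothetical helper: pivot a submission-keyed hash (like INPUTS above)
# into one row of marks per reviewer, with nil where a review is missing.
def reviews_matrix(inputs_json, reviewer_keys)
  submissions = JSON.parse(inputs_json)
  ordered = submissions.keys.sort
  reviewer_keys.map do |reviewer|
    ordered.map { |sub| submissions[sub][reviewer] } # nil if key absent
  end
end

# Tiny two-submission sample in the same shape as INPUTS
sample = {
  "submission1" => { "maxtoall" => 10, "incomplete_review" => 4 },
  "submission2" => { "maxtoall" => 10 }
}.to_json

p reviews_matrix(sample, %w[maxtoall incomplete_review])
# => [[10, 10], [4, nil]]
```

The nil in the second row is exactly what the algorithm's compact calls later skip over.&lt;br /&gt;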
&lt;br /&gt;
== Objective 2: Verify the correctness of the reputation web server's Hamer values ==&lt;br /&gt;
&lt;br /&gt;
We test the original reputation web server's algorithm with our scenarios and verify if they match the expected values. The peerlogic server can be accessed via API calls to URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.&lt;br /&gt;
It uses two algorithms: the Hamer-peer algorithm and the Lauw-peer algorithm. Our scope for this project is to test the Hamer values, as previous work has already established that the Hamer algorithm suits our use case better.&lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe &amp;quot;Expertiza&amp;quot; do&lt;br /&gt;
    it &amp;quot;should return the correct Hamer calculation&amp;quot; do&lt;br /&gt;
        uri = URI('http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms')&lt;br /&gt;
    &lt;br /&gt;
        response = Net::HTTP.post(uri, INPUTS, 'Content-Type' =&amp;gt; 'application/json')&lt;br /&gt;
    &lt;br /&gt;
        expect(JSON.parse(response.body)[&amp;quot;Hamer&amp;quot;]).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results ===&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
&lt;br /&gt;
We can see here that the values returned by this web service do not match the expected values. Hence, we conclude that the web service is not implemented correctly.&lt;br /&gt;
&lt;br /&gt;
== Objective 3: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values. ==&lt;br /&gt;
&lt;br /&gt;
As seen above, the values returned by the reputation server do not match the expected values. Hence, we concluded that the PeerLogic web service is implemented incorrectly. In this phase, we implemented the algorithm in Ruby as a function in a controller file: /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
&lt;br /&gt;
=== Changes made in implementation ===&lt;br /&gt;
* Coded the algorithm in Ruby in a controller.&lt;br /&gt;
* Included a way for the algorithm to handle nil values.&lt;br /&gt;
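&lt;br /&gt;
The nil handling listed above can be illustrated in isolation. This is a toy example with hypothetical data, not part of the controller; it only shows the idea of dropping missing (nil) reviews before averaging, which is the approach the controller code takes via Array#compact.&lt;br /&gt;

```ruby
# Toy illustration (hypothetical data, not controller code) of the nil
# handling described above: missing reviews are represented as nil and
# dropped via Array#compact before averaging, instead of raising an error.
marks = [90, nil, 70, nil]

present = marks.compact                      # drop the missing reviews
average = present.sum.to_f / present.length  # average only the real scores

puts present.inspect  # [90, 70]
puts average          # 80.0
```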
&lt;br /&gt;
=== Code Snippet ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'json'&lt;br /&gt;
require 'uri'&lt;br /&gt;
require 'net/http'&lt;br /&gt;
require 'openssl'&lt;br /&gt;
require 'base64'&lt;br /&gt;
&lt;br /&gt;
# Expertiza allows student work to be peer-reviewed, since peers can provide&lt;br /&gt;
# more feedback than the instructor can.&lt;br /&gt;
# However, if we want to assure that all students receive competent feedback,&lt;br /&gt;
# or even use peer-assigned grades,&lt;br /&gt;
# we need a way to judge which peer reviewers are most credible. The solution&lt;br /&gt;
# is the reputation system.&lt;br /&gt;
# Reputation systems have been deployed as web services, peer-review&lt;br /&gt;
# researchers will be able to use them to calculate scores on assignments,&lt;br /&gt;
# both past and present (past data can be used to tune the algorithms).&lt;br /&gt;
#&lt;br /&gt;
# This file is the controller to calculate the reputation scores.&lt;br /&gt;
# A 'reputation' measures how close a reviewer's scores are to other reviewers'&lt;br /&gt;
# scores.&lt;br /&gt;
# This controller implements the calculation of reputation scores.&lt;br /&gt;
class ReputationWebServiceController &amp;lt; ApplicationController&lt;br /&gt;
  include AuthorizationHelper&lt;br /&gt;
&lt;br /&gt;
  # Method: action_allowed&lt;br /&gt;
  # This method checks if the currently authenticated user has the authorization&lt;br /&gt;
  # to perform certain actions&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   true if the user has privileges to perform the action else returns false&lt;br /&gt;
  def action_allowed?&lt;br /&gt;
    current_user_has_ta_privileges?&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_max_question_score&lt;br /&gt;
  # This method receives a set of answers and gets the maximum question score&lt;br /&gt;
  # Params&lt;br /&gt;
  #   answers: set of answers&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   if no error returns max_question_score of first question else 1&lt;br /&gt;
  def get_max_question_score(answers)&lt;br /&gt;
    begin&lt;br /&gt;
      answers.first.question.questionnaire.max_question_score&lt;br /&gt;
    rescue StandardError&lt;br /&gt;
      1&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_valid_answers_for_response&lt;br /&gt;
  # This method receives response and filters the valid answers list of the&lt;br /&gt;
  # response ID&lt;br /&gt;
  # Params&lt;br /&gt;
  #   response&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   set of valid answers (returns nil if empty)&lt;br /&gt;
  def get_valid_answers_for_response(response)&lt;br /&gt;
    answers = Answer.where(response_id: response.id)&lt;br /&gt;
    valid_answer = answers.select { |answer| (answer.question.type == 'Criterion') &amp;amp;&amp;amp; !answer.answer.nil? }&lt;br /&gt;
    valid_answer.empty? ? nil : valid_answer&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: calculate_peer_review_grade&lt;br /&gt;
  # This method calculates a cumulative review grade with respect to the set of valid answers&lt;br /&gt;
  # Params&lt;br /&gt;
  #   valid_answer: valid answer to get weight of the answer's question&lt;br /&gt;
  #   max_question_score: used to calculate maximum score for peer review grade&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   peer_review_grade&lt;br /&gt;
  def calculate_peer_review_grade(valid_answer, max_question_score)&lt;br /&gt;
    weighted_score_sum = valid_answer.map { |answer| answer.answer * answer.question.weight }.inject(:+)&lt;br /&gt;
    question_weight_sum = valid_answer.sum { |answer| answer.question.weight }&lt;br /&gt;
    peer_review_grade = 100.0 * weighted_score_sum / (question_weight_sum * max_question_score)&lt;br /&gt;
    peer_review_grade.round(4)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_peer_reviews_for_responses&lt;br /&gt;
  # This method calculates the peer review grade for each valid response&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reviewer_id: used to create respective element in the peer_review_grades_list&lt;br /&gt;
  #   team_id: used to create respective element in the peer_review_grades_list&lt;br /&gt;
  #   valid_response: to get the valid answer for each valid response&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   peer_review_grades_list&lt;br /&gt;
  def get_peer_reviews_for_responses(reviewer_id, team_id, valid_response)&lt;br /&gt;
    peer_review_grades_list = []&lt;br /&gt;
    valid_response.each do |response|&lt;br /&gt;
      valid_answer = get_valid_answers_for_response(response)&lt;br /&gt;
      next if valid_answer.nil?&lt;br /&gt;
&lt;br /&gt;
      review_grade = calculate_peer_review_grade(valid_answer, get_max_question_score(valid_answer))&lt;br /&gt;
      peer_review_grades_list &amp;lt;&amp;lt; [reviewer_id, team_id, review_grade]&lt;br /&gt;
    end&lt;br /&gt;
    peer_review_grades_list&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_peer_reviews&lt;br /&gt;
  # This method retrieves all the reviews for the submissions&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: used to retrieve response map&lt;br /&gt;
  #   round_num: used to retrieve round_num for the valid response&lt;br /&gt;
  #   has_topic: to get the topic condition&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: which corresponds to the return of&lt;br /&gt;
  #     get_peer_reviews_for_responses method and appended to the raw_data_array&lt;br /&gt;
  def get_peer_reviews(assignment_id_list, round_num, has_topic)&lt;br /&gt;
    raw_data_array = []&lt;br /&gt;
    ReviewResponseMap.where('reviewed_object_id in (?) and calibrate_to = ?', assignment_id_list, false).each do |response_map|&lt;br /&gt;
      reviewer = response_map.reviewer.user&lt;br /&gt;
      team = AssignmentTeam.find(response_map.reviewee_id)&lt;br /&gt;
      topic_condition = ((has_topic &amp;amp;&amp;amp; (SignedUpTeam.where(team_id: team.id).first.is_waitlisted == false)) || !has_topic)&lt;br /&gt;
      last_valid_response = response_map.response.select { |r| r.round == round_num }.max&lt;br /&gt;
      valid_response = [last_valid_response] unless last_valid_response.nil?&lt;br /&gt;
      if (topic_condition == true) &amp;amp;&amp;amp; !valid_response.nil? &amp;amp;&amp;amp; !valid_response.empty?&lt;br /&gt;
        raw_data_array += get_peer_reviews_for_responses(reviewer.id, team.id, valid_response)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
    raw_data_array&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_ids_list&lt;br /&gt;
  # This method maps each object to the corresponding object's ID&lt;br /&gt;
  # Params&lt;br /&gt;
  #   tables: any table&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   id in the tables&lt;br /&gt;
  def get_ids_list(tables)&lt;br /&gt;
    tables.map(&amp;amp;:id)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_scores&lt;br /&gt;
  # This method gets the quiz score of each participant for respective reviewee&lt;br /&gt;
  # Params&lt;br /&gt;
  #   team_ids: list of team IDs&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: which is a list of participant, reviewee and the participant's quiz score&lt;br /&gt;
  def get_scores(team_ids)&lt;br /&gt;
    quiz_questionnaires = QuizQuestionnaire.where('instructor_id in (?)', team_ids)&lt;br /&gt;
    quiz_questionnaire_ids = get_ids_list(quiz_questionnaires)&lt;br /&gt;
    raw_data_array = []&lt;br /&gt;
    QuizResponseMap.where('reviewed_object_id in (?)', quiz_questionnaire_ids).each do |response_map|&lt;br /&gt;
      quiz_score = response_map.quiz_score&lt;br /&gt;
      participant = Participant.find(response_map.reviewer_id)&lt;br /&gt;
      raw_data_array &amp;lt;&amp;lt; [participant.user_id, response_map.reviewee_id, quiz_score]&lt;br /&gt;
    end&lt;br /&gt;
    raw_data_array&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_quiz_score&lt;br /&gt;
  # This method gets the quiz score of assignments&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment IDs&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: returned by get_scores method, which is a list of participant,&lt;br /&gt;
  #     reviewee and the participant's quiz score&lt;br /&gt;
  def get_quiz_score(assignment_id_list)&lt;br /&gt;
    teams = AssignmentTeam.where('parent_id in (?)', assignment_id_list)&lt;br /&gt;
    team_ids = get_ids_list(teams)&lt;br /&gt;
    get_scores(team_ids)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_body&lt;br /&gt;
  # This method generates json body for the peer reviews and quiz scores&lt;br /&gt;
  # Params&lt;br /&gt;
  #   results: list of grades with corresponding team/participant ID,&lt;br /&gt;
  #     reviewee ID and their score&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: returns the formatted body after sorting the hash&lt;br /&gt;
  def generate_json_body(results)&lt;br /&gt;
    request_body = {}&lt;br /&gt;
    results.each do |record|&lt;br /&gt;
      request_body['submission' + record[1].to_s] = {} unless request_body.key?('submission' + record[1].to_s)&lt;br /&gt;
      request_body['submission' + record[1].to_s]['stu' + record[0].to_s] = record[2]&lt;br /&gt;
    end&lt;br /&gt;
    # sort the inner hashes, then the outer hash, by key&lt;br /&gt;
    request_body.each { |k, v| request_body[k] = v.sort.to_h }&lt;br /&gt;
    request_body = request_body.sort.to_h&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_for_peer_reviews&lt;br /&gt;
  # This method retrieves all the peer reviews associated with&lt;br /&gt;
  # the assignment id list by calling the get_peer_reviews method.&lt;br /&gt;
  # It then formats the peer-review list in JSON.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment ids to get quiz scores for&lt;br /&gt;
  #   round_num: round number of the review&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: request body populated with the formatted peer review data.&lt;br /&gt;
  def generate_json_for_peer_reviews(assignment_id_list, round_num = 2)&lt;br /&gt;
    has_topic = !SignUpTopic.where(assignment_id: assignment_id_list[0]).empty?&lt;br /&gt;
&lt;br /&gt;
    peer_reviews_list = get_peer_reviews(assignment_id_list, round_num, has_topic)&lt;br /&gt;
    request_body = generate_json_body(peer_reviews_list)&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_for_quiz_scores&lt;br /&gt;
  # This method accepts a list of assignment ids as an argument.&lt;br /&gt;
  # It then calls the get_quiz_score method on the list to get&lt;br /&gt;
  # maps of teams and scores for the given assignments.&lt;br /&gt;
  # The map is then formatted into JSON.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment ids to get quiz scores for&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: request body populated with quiz scores&lt;br /&gt;
  def generate_json_for_quiz_scores(assignment_id_list)&lt;br /&gt;
    participant_reviewee_map = get_quiz_score(assignment_id_list)&lt;br /&gt;
    request_body = generate_json_body(participant_reviewee_map)&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: client&lt;br /&gt;
  # This method is called when the url reputation_web_service/client&lt;br /&gt;
  # is hit using GET method.&lt;br /&gt;
  # This renders the client.html.erb&lt;br /&gt;
  # It also populates the instance variables to be used in the views&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def client&lt;br /&gt;
    @max_assignment_id = Assignment.last.id&lt;br /&gt;
    @assignment = Assignment.find(flash[:assignment_id]) rescue nil&lt;br /&gt;
    @another_assignment = Assignment.find(flash[:another_assignment_id]) rescue nil&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: update_participants_reputation&lt;br /&gt;
  # This method accepts the response body in the JSON format.&lt;br /&gt;
  # It then parses the JSON and updates the reputation scores of the&lt;br /&gt;
  # participants in the list.&lt;br /&gt;
  # If the algorithm is not Hamer or Lauw, the update step is skipped.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reputation_response: The response from the reputation web service&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def update_participants_reputation(reputation_response)&lt;br /&gt;
    JSON.parse(reputation_response.body.to_s).each do |reputation_algorithm, user_reputation_list|&lt;br /&gt;
      next unless %w[Hamer Lauw].include?(reputation_algorithm)&lt;br /&gt;
&lt;br /&gt;
      user_reputation_list.each do |user_id, reputation|&lt;br /&gt;
        Participant.find_by(user_id: user_id).update(reputation_algorithm.to_sym =&amp;gt; reputation) unless /leniency/ =~ user_id.to_s&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: process_response_body&lt;br /&gt;
  # This method gets the control after receiving a response from the server.&lt;br /&gt;
  # It receives the response body as an argument&lt;br /&gt;
  # It updates the instance variables related to the response.&lt;br /&gt;
  # It then calls the update_participants_reputation to update the reputation&lt;br /&gt;
  # scores received in the response body.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reputation_response: The response from the reputation web service&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def process_response_body(reputation_response)&lt;br /&gt;
    flash[:response] = reputation_response&lt;br /&gt;
    flash[:response_body] = reputation_response.body&lt;br /&gt;
    update_participants_reputation(reputation_response)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_expert_grades&lt;br /&gt;
  # It prepends the request body with the expert grades pertaining&lt;br /&gt;
  # to the default wiki contribution case of 754.&lt;br /&gt;
  # It receives the request body as an argument and prepends it&lt;br /&gt;
  # Params&lt;br /&gt;
  #   body: The request body to add the expert grades to&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   body prepended with the expert grades&lt;br /&gt;
  def add_expert_grades(body)&lt;br /&gt;
    flash[:additional_info] = 'add expert grades'&lt;br /&gt;
    case params[:assignment_id]&lt;br /&gt;
    when '754' # expert grades of Wiki contribution (754)&lt;br /&gt;
      body.prepend('&amp;quot;expert_grades&amp;quot;: {&amp;quot;submission25030&amp;quot;:95,&amp;quot;submission25031&amp;quot;:92,&amp;quot;submission25033&amp;quot;:88,&amp;quot;submission25034&amp;quot;:98,&amp;quot;submission25035&amp;quot;:100,&amp;quot;submission25037&amp;quot;:95,&amp;quot;submission25038&amp;quot;:95,&amp;quot;submission25039&amp;quot;:93,&amp;quot;submission25040&amp;quot;:96,&amp;quot;submission25041&amp;quot;:90,&amp;quot;submission25042&amp;quot;:100,&amp;quot;submission25046&amp;quot;:95,&amp;quot;submission25049&amp;quot;:90,&amp;quot;submission25050&amp;quot;:88,&amp;quot;submission25053&amp;quot;:91,&amp;quot;submission25054&amp;quot;:96,&amp;quot;submission25055&amp;quot;:94,&amp;quot;submission25059&amp;quot;:96,&amp;quot;submission25071&amp;quot;:85,&amp;quot;submission25082&amp;quot;:100,&amp;quot;submission25086&amp;quot;:95,&amp;quot;submission25097&amp;quot;:90,&amp;quot;submission25098&amp;quot;:85,&amp;quot;submission25102&amp;quot;:97,&amp;quot;submission25103&amp;quot;:94,&amp;quot;submission25105&amp;quot;:98,&amp;quot;submission25114&amp;quot;:95,&amp;quot;submission25115&amp;quot;:94},')&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_quiz_scores&lt;br /&gt;
  # It gets the assignment id list and generates the json on quiz scores of&lt;br /&gt;
  # those assignments.&lt;br /&gt;
  # Finally, the processed quiz string is prepended to the request body,&lt;br /&gt;
  # received as an argument, and the body is returned to prepare_request_body.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   body: The request body to add the quiz scores to&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   body prepended with the quiz scores&lt;br /&gt;
  def add_quiz_scores(body)&lt;br /&gt;
    flash[:additional_info] = 'add quiz scores'&lt;br /&gt;
    assignment_id_list_quiz = get_assignment_id_list(params[:assignment_id].to_i, params[:another_assignment_id].to_i)&lt;br /&gt;
    quiz_str =  generate_json_for_quiz_scores(assignment_id_list_quiz).to_json&lt;br /&gt;
    quiz_str[0] = '' # remove first {&lt;br /&gt;
    quiz_str.prepend('&amp;quot;quiz_scores&amp;quot;:{') # add quiz_scores tag&lt;br /&gt;
    quiz_str += ','&lt;br /&gt;
    quiz_str = quiz_str.gsub('&amp;quot;N/A&amp;quot;', '20.0') # replace N/A values with 20&lt;br /&gt;
    body.prepend(quiz_str)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_hamer_reputation_values&lt;br /&gt;
  # This method sets flash[:additional_info].&lt;br /&gt;
  # This method is called by the prepare_request_body method&lt;br /&gt;
  # when params receive instruction through the corresponding view's checkbox.&lt;br /&gt;
  # THIS METHOD IS NOT IMPLEMENTED&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_hamer_reputation_values&lt;br /&gt;
    flash[:additional_info] = 'add initial hamer reputation values'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_lauw_reputation_values&lt;br /&gt;
  # This method sets flash[:additional_info].&lt;br /&gt;
  # This method is called by the prepare_request_body method&lt;br /&gt;
  # when params receive instruction through the corresponding view's checkbox.&lt;br /&gt;
  # THIS METHOD IS NOT IMPLEMENTED&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_lauw_reputation_values&lt;br /&gt;
    flash[:additional_info] = 'add initial lauw reputation values'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_assignment_id_list&lt;br /&gt;
  # This method on receipt of individual assignment IDs returns a list with all&lt;br /&gt;
  # the assignment IDs appended into a data structure&lt;br /&gt;
  # This function accepts 2 arguments, with the second argument being optional,&lt;br /&gt;
  # and returns the list assignment_id_list&lt;br /&gt;
  # If the second argument is 0, it is not appended to the list.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_one: first assignment id (required)&lt;br /&gt;
  #   assignment_id_two: second assignment id (optional)&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   assignment_id_list: list containing one or two assignment ids&lt;br /&gt;
  def get_assignment_id_list(assignment_id_one, assignment_id_two = 0)&lt;br /&gt;
    assignment_id_list = []&lt;br /&gt;
    assignment_id_list &amp;lt;&amp;lt; assignment_id_one&lt;br /&gt;
    assignment_id_list &amp;lt;&amp;lt; assignment_id_two unless assignment_id_two.zero?&lt;br /&gt;
    assignment_id_list&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_flash_messages&lt;br /&gt;
  # This method sets the flash messages to pass on to the next request i.e&lt;br /&gt;
  # the request redirected to the client&lt;br /&gt;
  # Params&lt;br /&gt;
  #   post_req: This contains the entire post_req that needs to be sent to the reputation&lt;br /&gt;
  #     webservice&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_flash_messages(post_req)&lt;br /&gt;
    flash[:assignment_id] = params[:assignment_id]&lt;br /&gt;
    flash[:round_num] = params[:round_num]&lt;br /&gt;
    flash[:algorithm] = params[:algorithm]&lt;br /&gt;
    flash[:another_assignment_id] = params[:another_assignment_id]&lt;br /&gt;
    flash[:request_body] = post_req.body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_additional_info_details&lt;br /&gt;
  # This method sets the additional info details based on the options&lt;br /&gt;
  # selected in the additional information section. We populate the request&lt;br /&gt;
  # based on the selections&lt;br /&gt;
  # Params&lt;br /&gt;
  #   post_req: This contains the entire post_req that needs to be sent to the reputation&lt;br /&gt;
  #     webservice&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_additional_info_details(post_req)&lt;br /&gt;
    if params[:checkbox][:expert_grade] == 'Add expert grades'&lt;br /&gt;
      add_expert_grades(post_req.body)&lt;br /&gt;
    elsif params[:checkbox][:hamer] == 'Add initial Hamer reputation values'&lt;br /&gt;
      add_hamer_reputation_values&lt;br /&gt;
    elsif params[:checkbox][:lauw] == 'Add initial Lauw reputation values'&lt;br /&gt;
      add_lauw_reputation_values&lt;br /&gt;
    elsif params[:checkbox][:quiz] == 'Add quiz scores'&lt;br /&gt;
      add_quiz_scores(post_req.body)&lt;br /&gt;
    else&lt;br /&gt;
      flash[:additional_info] = ''&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: prepare_request_body&lt;br /&gt;
  # This method is responsible for preparing the request body in a proper format&lt;br /&gt;
  # to send to the server. It populates the assignment scores and peer review&lt;br /&gt;
  # scores. It also populates the flash messages to send to the next request&lt;br /&gt;
  # It finally sends the prepared request body back to the send_post_request&lt;br /&gt;
  # method.&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def prepare_request_body&lt;br /&gt;
    reputation_web_service_path = URI.parse(WEBSERVICE_CONFIG['reputation_web_service_url']).path&lt;br /&gt;
    post_req = Net::HTTP::Post.new(reputation_web_service_path, { 'Content-Type' =&amp;gt; 'application/json', 'charset' =&amp;gt; 'utf-8' })&lt;br /&gt;
    curr_assignment_id = (params[:assignment_id].empty? ? '754' : params[:assignment_id])&lt;br /&gt;
    assignment_id_list_peers = get_assignment_id_list(curr_assignment_id, params[:another_assignment_id].to_i)&lt;br /&gt;
&lt;br /&gt;
    post_req.body = generate_json_for_peer_reviews(assignment_id_list_peers, params[:round_num].to_i).to_json&lt;br /&gt;
&lt;br /&gt;
    post_req.body[0] = '' # remove the first '{'&lt;br /&gt;
    add_additional_info_details post_req&lt;br /&gt;
    post_req.body.prepend('{')&lt;br /&gt;
    add_flash_messages post_req&lt;br /&gt;
    post_req&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: send_post_request&lt;br /&gt;
  # This method calls the prepare_request_body function to get a prepared&lt;br /&gt;
  # request body in proper format to send to the server.&lt;br /&gt;
  # It populates the assignment scores and peer review&lt;br /&gt;
  # scores. It also populates the flash messages to send to the next request&lt;br /&gt;
  # We redirect to the client url to display the results.&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def send_post_request&lt;br /&gt;
    post_req = prepare_request_body&lt;br /&gt;
    reputation_web_service_hostname = URI.parse(WEBSERVICE_CONFIG['reputation_web_service_url']).host&lt;br /&gt;
    reputation_response = Net::HTTP.new(reputation_web_service_hostname).start { |http| http.request(post_req) }&lt;br /&gt;
    if %w[400 500].include?(reputation_response.code)&lt;br /&gt;
      flash[:error] = 'Post Request Failed'&lt;br /&gt;
    else&lt;br /&gt;
      process_response_body(reputation_response)&lt;br /&gt;
    end&lt;br /&gt;
    redirect_to action: 'client'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
# Method: calculate_reputation_score&lt;br /&gt;
# This method calculates the reputation scores for each reviewer based on the provided review data.&lt;br /&gt;
# It first calculates the average weighted grades per reviewer and the delta R values.&lt;br /&gt;
# Next, it calculates the weight prime values based on the delta R values.&lt;br /&gt;
# Finally, it calculates the reputation weights for each reviewer using the weight prime values.&lt;br /&gt;
#&lt;br /&gt;
# Params&lt;br /&gt;
#   reviews: a two-dimensional array of review scores, one row per reviewer,&lt;br /&gt;
#     with nil marking a missing review&lt;br /&gt;
#&lt;br /&gt;
# Returns&lt;br /&gt;
#   An array of reputation scores, one score per reviewer, indicating their reputation in the system.&lt;br /&gt;
def calculate_reputation_score(reviews)&lt;br /&gt;
&lt;br /&gt;
  # Initialize arrays to store intermediate values&lt;br /&gt;
  grades = []&lt;br /&gt;
  delta_r = []&lt;br /&gt;
  weight_prime = []&lt;br /&gt;
  weight = []&lt;br /&gt;
&lt;br /&gt;
  # Calculate Average Weighted Grades per Reviewer&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    assignment_grade_average = reviewer_marks_without_nil.sum.to_f / reviewer_marks_without_nil.length&lt;br /&gt;
    grades &amp;lt;&amp;lt; assignment_grade_average&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate delta R&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    reviewer_delta_r = 0&lt;br /&gt;
    # Iterate over the original array so the index stays aligned with grades;&lt;br /&gt;
    # skip nil (missing) reviews&lt;br /&gt;
    reviewer_marks.each_with_index do |grade, student_index|&lt;br /&gt;
      next if grade.nil?&lt;br /&gt;
&lt;br /&gt;
      reviewer_delta_r += (grade - grades[student_index]) ** 2&lt;br /&gt;
    end&lt;br /&gt;
    delta_r &amp;lt;&amp;lt; reviewer_delta_r / reviewer_marks.compact.length&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate weight prime&lt;br /&gt;
  average_delta_r = delta_r.sum / delta_r.length.to_f&lt;br /&gt;
&lt;br /&gt;
  delta_r.each do |reviewer_delta_r|&lt;br /&gt;
    weight_prime &amp;lt;&amp;lt; average_delta_r / reviewer_delta_r&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate reputation weight&lt;br /&gt;
  weight_prime.each do |reviewer_weight_prime|&lt;br /&gt;
    if reviewer_weight_prime &amp;lt;= 2&lt;br /&gt;
      weight &amp;lt;&amp;lt; reviewer_weight_prime.round(2)&lt;br /&gt;
    else&lt;br /&gt;
      weight &amp;lt;&amp;lt; (2 + Math.log(reviewer_weight_prime - 1)).round(2)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Return the reputation weights&lt;br /&gt;
  weight&lt;br /&gt;
end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
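&lt;br /&gt;
For quick experimentation, the weighting steps in the controller above can be exercised outside Rails with a self-contained sketch. The helper name and the toy review matrix below are our own hypothetical additions, not Expertiza code; the sketch only mirrors the averaging, delta R, weight prime, and capping steps shown above.&lt;br /&gt;

```ruby
# Standalone sketch of the weighting steps in the controller above, runnable
# outside Rails. The toy review matrix is hypothetical: one row per reviewer,
# one column per submission, with nil marking a missing review.
def hamer_weights(reviews)
  # Average grade per row, ignoring nil (missing) reviews
  grades = reviews.map do |marks|
    present = marks.compact
    present.sum.to_f / present.length
  end

  # Delta R: mean squared deviation of each reviewer's marks from the averages
  delta_r = reviews.map do |marks|
    sum = 0.0
    marks.each_with_index do |grade, idx|
      next if grade.nil?
      sum += (grade - grades[idx])**2
    end
    sum / marks.compact.length
  end

  # Weight prime, capped logarithmically into the final reputation weight
  average_delta_r = delta_r.sum / delta_r.length.to_f
  delta_r.map do |d|
    wp = average_delta_r / d
    (wp - 2).positive? ? (2 + Math.log(wp - 1)).round(2) : wp.round(2)
  end
end

toy_reviews = [
  [90, 80, 70],
  [60, 50, 40],
  [95, 45, 100]
]
puts hamer_weights(toy_reviews).inspect
```

Each reviewer whose marks sit close to the averages ends up with a weight near or above 1, while an outlier reviewer is pushed below 1.&lt;br /&gt;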
&lt;br /&gt;
== Objective 4: Validate the accuracy of the newly implemented Hamer algorithm ==&lt;br /&gt;
&lt;br /&gt;
We test the newly implemented Hamer algorithm function with our scenarios and verify whether the results match the expected values.&lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe ReputationWebServiceController do&lt;br /&gt;
    it &amp;quot;should calculate correct Hamer calculation&amp;quot; do&lt;br /&gt;
      weights = ReputationWebServiceController.new.calculate_reputation_score(reviews)&lt;br /&gt;
      keys = [&amp;quot;maxtoall&amp;quot;, &amp;quot;mintoall&amp;quot;, &amp;quot;mediantoall&amp;quot;, &amp;quot;incomplete_review&amp;quot;, &amp;quot;sametoall&amp;quot;, &amp;quot;passing1&amp;quot;, &amp;quot;passing2&amp;quot;, &amp;quot;passing3&amp;quot;]&lt;br /&gt;
      rounded_weights = weights.map { |w| w.round(1) }&lt;br /&gt;
      result_hash = keys.zip(rounded_weights).to_h&lt;br /&gt;
      expect(result_hash).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results ===&lt;br /&gt;
&lt;br /&gt;
[[File:Results hamer new.jpeg]]&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
As we can see, the results are skewed towards lower values. This is because we included nil values in the calculation and treated them as zero. Because of this assumption, the scores are biased downwards. Substituting the median of the available scores (or a random value) for missing reviews would reduce this skew.&lt;br /&gt;
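&lt;br /&gt;
The median substitution suggested above could be sketched as follows. The helper is hypothetical and not part of the submitted code; it only illustrates replacing each missing (nil) review with the median of the reviewer's remaining scores instead of zero.&lt;br /&gt;

```ruby
# Hypothetical sketch of the median substitution suggested above:
# replace each missing (nil) review with the median of the remaining
# scores rather than zero, reducing the downward skew.
def impute_with_median(marks)
  present = marks.compact.sort
  mid = present.length / 2
  median = if present.length.odd?
             present[mid]
           else
             (present[mid - 1] + present[mid]) / 2.0
           end
  marks.map { |m| m.nil? ? median : m }
end

puts impute_with_median([90, nil, 70, 80]).inspect  # nil becomes the median, 80
```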
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this project, we aimed to test the accuracy of the Hamer algorithm used for assessing the credibility of reviewers in a peer assessment system. We began by developing test scenarios to validate the Hamer algorithm and ensure the accuracy of its output values. These scenarios covered a range of review patterns, including cases where reviewers provided extreme scores.&lt;br /&gt;
&lt;br /&gt;
Upon testing the original reputation web server's Hamer values, we found discrepancies between the expected values and the values returned by the web service. This led us to conclude that the web service implementation was incorrect.&lt;br /&gt;
&lt;br /&gt;
As a result, we proceeded to reimplement the Hamer algorithm in Ruby, incorporating adjustments to handle nil values appropriately. Subsequently, we validated the accuracy of the newly implemented algorithm using the same testing scenarios. While the results initially showed a skew towards lower values due to our treatment of nil values, we acknowledge the need for further refinement to handle these cases more effectively.&lt;br /&gt;
&lt;br /&gt;
In conclusion, this project highlights the importance of rigorous testing and implementation adjustments in ensuring the reliability of algorithms used in peer assessment systems. Moving forward, we recommend further refinements and validations to enhance the accuracy and robustness of the Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
==GitHub Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [https://github.com/expertiza/expertiza/pull/2778 here]&lt;br /&gt;
&lt;br /&gt;
Link to Github Project page: [https://github.com/users/Prachit99/projects/1 here]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [https://drive.google.com/file/d/1gZ5iDgqMW3COOT9Uw-_yhJlwJLZDxz-s/view?usp=sharing here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154656</id>
		<title>CSC/ECE 517 Spring 2024 - E2412. Testing for hamer.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154656"/>
		<updated>2024-03-25T04:15:15Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* GitHub Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 Program 3: First OSS project E2412. Testing for hamer.rb&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
&lt;br /&gt;
The practice of using student feedback on assignments as a grading tool is gaining traction among university professors and courses. This approach not only saves instructors and teaching assistants considerable time but also fosters a deeper understanding of assignments among students as they evaluate their peers' work. However, there is a concern that some students may not take their reviewing responsibilities seriously, potentially skewing the grading process by assigning extreme scores such as 100 or 0 arbitrarily. To address this issue, the Hamer algorithm was developed to assess the credibility and accuracy of reviewers. It generates reputation weights for each reviewer, which instructors can use to gauge their reliability or incorporate into grading calculations. Our goal here is to test this Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values.&lt;br /&gt;
* Verify the correctness of the reputation web server's Hamer values by accessing the URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.&lt;br /&gt;
* Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values.&lt;br /&gt;
* Validate the accuracy of the newly implemented Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* Reimplemented algorithm: /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
* Test file: /spec/controllers/reputation_mock_web_server_hamer.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Hamer Algorithm ==&lt;br /&gt;
&lt;br /&gt;
The grading algorithm described in the paper is designed to provide a reward to reviewers who participate effectively by allocating a portion of the assignment mark to the review, with the review mark reflecting the quality of the grading. Here's an explanation of the algorithm:&lt;br /&gt;
&lt;br /&gt;
1. Review Allocation: Each reviewer is assigned a number of essays to grade. The paper suggests assigning at least five essays, with ten being ideal. Assuming each review takes 20 minutes, ten reviews can be completed in about three and a half hours.&lt;br /&gt;
&lt;br /&gt;
2. Grading Process:&lt;br /&gt;
* Once the reviewing is complete, grades are generated for each essay and weights are assigned to each reviewer.&lt;br /&gt;
* The essay grades are computed by averaging the individual grades from all the reviewers assigned to that essay.&lt;br /&gt;
* Initially, all reviewers are given equal weight in the averaging process.&lt;br /&gt;
* The algorithm assumes that some reviewers will perform better than others. It measures this by comparing the grades assigned by each reviewer to the averaged grades. The larger the difference between the assigned and averaged grades, the more out of step the reviewer is considered with the consensus view of the class.&lt;br /&gt;
* The algorithm adjusts the weighting of the reviewers based on this difference. Reviewers who are closer to the consensus view are given higher weights, while those who deviate significantly are given lower weights.&lt;br /&gt;
&lt;br /&gt;
3. Iterative Process:&lt;br /&gt;
* The calculation of grades and weights is an iterative process. Each time the grades are calculated, the weights need to be updated, and each change in the weights affects the grades.&lt;br /&gt;
* Convergence occurs quickly, typically requiring four to six iterations before a solution (a &amp;quot;fix-point&amp;quot;) is reached.&lt;br /&gt;
&lt;br /&gt;
4. Weight Adjustment:&lt;br /&gt;
* The weights assigned to reviewers are adjusted based on the difference between the assigned and averaged grades. Reviewers with larger discrepancies have their weights adjusted inversely proportional to this difference.&lt;br /&gt;
* To prevent excessively large weights, a logarithmic dampening function is applied, allowing weights to rise to twice the class average before further increases are awarded sparingly.&lt;br /&gt;
&lt;br /&gt;
5. Properties:&lt;br /&gt;
* The algorithm aims to identify and diminish the impact of &amp;quot;rogue&amp;quot; reviewers who may inject random or arbitrary grades into the peer assessment process.&lt;br /&gt;
* By adjusting reviewer weights based on their grading accuracy, the algorithm aims to improve the reliability of the grading process in the presence of such rogue reviewers.&lt;br /&gt;
&lt;br /&gt;
Overall, the algorithm seeks to balance the contributions of different reviewers based on the accuracy of their grading, ultimately aiming to produce reliable grades for each essay in a peer assessment scenario.&lt;br /&gt;
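&lt;br /&gt;
The iterative procedure above can be sketched in code. The following Ruby sketch is our own illustration, not the Expertiza or Hamer reference implementation: it assumes the deviation measure is the mean squared difference from the consensus grade, that new weights are inversely proportional to deviation, and that weights above twice the class average are dampened logarithmically, as described in point 4.&lt;br /&gt;
&lt;br /&gt;
```ruby
# Minimal sketch of the iterative Hamer-style reweighting described above.
# Assumptions (ours, not taken from the Expertiza source): deviation is the
# mean squared difference from the consensus grade, a reviewer's new weight
# is inversely proportional to deviation, and weights above twice the class
# average are dampened logarithmically.
#
# scores: { reviewer => { submission => grade } }
def hamer_weights(scores, iterations: 6)
  reviewers = scores.keys
  weights = reviewers.to_h { |r| [r, 1.0] }
  iterations.times do
    # 1. Consensus grade per submission: weighted average of reviewer scores.
    submissions = scores.values.flat_map { |h| h.keys }.uniq
    grades = submissions.to_h do |s|
      raters = reviewers.select { |r| scores[r].key?(s) }
      total = raters.sum { |r| weights[r] }
      [s, raters.sum { |r| weights[r] * scores[r][s] } / total]
    end
    # 2. Deviation of each reviewer from the consensus grades.
    deviations = reviewers.to_h do |r|
      diffs = scores[r].map { |s, v| (v - grades[s])**2 }
      [r, diffs.sum / diffs.size.to_f]
    end
    avg_dev = deviations.values.sum / deviations.size
    # 3. New weight: inverse to deviation, log-dampened above twice average.
    weights = deviations.to_h do |r, d|
      w = d.zero? ? 2.0 : avg_dev / d
      [r, w > 2.0 ? 2.0 + Math.log(w - 1.0) : w]
    end
  end
  weights
end
```
In a small trial with two consistent reviewers and one rogue reviewer, the rogue reviewer's weight converges well below the others'.&lt;br /&gt;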
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios ==&lt;br /&gt;
We assumed 10 reviewers reviewing up to 4 submissions each to cover the following test scenarios:&lt;br /&gt;
* 3 cases where reviewers give credible scores (passing1, passing2, passing3)&lt;br /&gt;
* a case where a reviewer gives the maximum score (10) to every submission (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the minimum score (1) to every submission (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the median score (5) to every submission (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the same score (3) to every submission (should be flagged)&lt;br /&gt;
* 3 cases where a reviewer leaves some reviews incomplete (incomplete_review, max_incomplete, min_incomplete)&lt;br /&gt;
&lt;br /&gt;
=== Object Creation ===&lt;br /&gt;
Below is the Input object for tests that cover all the above scenarios:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
INPUTS_new = {&lt;br /&gt;
    &amp;quot;submission1&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 9&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission2&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission3&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 7,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission4&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 6,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Expected Hamer Values ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
EXPECTED = {&lt;br /&gt;
    &amp;quot;Hamer&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 2.65,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 2.41,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 1.03,&lt;br /&gt;
        &amp;quot;incomplete_review&amp;quot;: 2.31,&lt;br /&gt;
        &amp;quot;max_incomplete&amp;quot;: 2.57,&lt;br /&gt;
        &amp;quot;min_incomplete&amp;quot;: 2.48,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;:1.58,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 2.17,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 1.73,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 1.23,&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
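&lt;br /&gt;
Because the expected Hamer values above are rounded to two decimal places, any comparison against live service output needs to round before matching. A hypothetical helper (our own sketch, not part of the project's test suite) could look like this:&lt;br /&gt;
&lt;br /&gt;
```ruby
# Hypothetical comparison helper: round the returned Hamer values to two
# decimals before comparing with the expected table, since EXPECTED stores
# values rounded to two decimal places.
def hamer_values_match?(actual, expected)
  return false unless actual.keys.sort == expected.keys.sort
  actual.all? { |reviewer, value| value.round(2) == expected[reviewer] }
end
```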
&lt;br /&gt;
== Objective 2: Verify the correctness of the reputation web server's Hamer values ==&lt;br /&gt;
&lt;br /&gt;
We test the original reputation web server with our scenarios and verify whether the returned Hamer values match the expected values. The PeerLogic server can be accessed via API calls to the URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.&lt;br /&gt;
It supports two algorithms: the Hamer-peer algorithm and the Lauw-peer algorithm. Our scope for this project is limited to testing Hamer values, since previous work has established that the Hamer algorithm suits our use case better.&lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe &amp;quot;Expertiza&amp;quot; do&lt;br /&gt;
    it &amp;quot;should return the correct Hamer calculation&amp;quot; do&lt;br /&gt;
        uri = URI('http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms')&lt;br /&gt;
    &lt;br /&gt;
        response = Net::HTTP.post(uri, INPUTS_new, 'Content-Type' =&amp;gt; 'application/json')&lt;br /&gt;
    &lt;br /&gt;
        expect(JSON.parse(response.body)[&amp;quot;Hamer&amp;quot;]).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results ===&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
&lt;br /&gt;
We can see that the values returned by this web service do not match the expected values. Hence, we conclude that the web service is not implemented correctly.&lt;br /&gt;
&lt;br /&gt;
== Objective 3: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values ==&lt;br /&gt;
&lt;br /&gt;
As seen above, the values returned by the reputation server do not match the expected values; hence we concluded that the PeerLogic web service is implemented incorrectly. In this phase, we reimplemented the algorithm in Ruby as a function in the controller file /app/controllers/reputation_web_service_controller.rb.&lt;br /&gt;
&lt;br /&gt;
=== Changes made in implementation ===&lt;br /&gt;
* Coded the algorithm in Ruby in a controller.&lt;br /&gt;
* Included a way for the algorithm to handle nil values.&lt;br /&gt;
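&lt;br /&gt;
The nil-handling adjustment can be illustrated with a small sketch. This is our own example rather than the project's code: the idea is that an incomplete review (a nil score) is skipped instead of being counted as a zero, which would otherwise drag averages down.&lt;br /&gt;
&lt;br /&gt;
```ruby
# Hypothetical helper illustrating the nil-handling adjustment: nil scores
# (incomplete reviews) are dropped before averaging instead of being
# counted as zeros, which would otherwise skew the result downward.
def mean_ignoring_nils(scores)
  present = scores.compact          # drop nil entries
  return nil if present.empty?      # no valid scores at all
  present.sum.to_f / present.size
end

# mean_ignoring_nils([10, nil, 4]) # => 7.0
# mean_ignoring_nils([nil, nil])   # => nil
```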
&lt;br /&gt;
=== Code Snippet ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'json'&lt;br /&gt;
require 'uri'&lt;br /&gt;
require 'net/http'&lt;br /&gt;
require 'openssl'&lt;br /&gt;
require 'base64'&lt;br /&gt;
&lt;br /&gt;
# Expertiza allows student work to be peer-reviewed, since peers can provide&lt;br /&gt;
# more feedback than the instructor can.&lt;br /&gt;
# However, if we want to assure that all students receive competent feedback,&lt;br /&gt;
# or even use peer-assigned grades,&lt;br /&gt;
# we need a way to judge which peer reviewers are most credible. The solution&lt;br /&gt;
# is the reputation system.&lt;br /&gt;
# Since reputation systems have been deployed as web services, peer-review&lt;br /&gt;
# researchers will be able to use them to calculate scores on assignments,&lt;br /&gt;
# both past and present (past data can be used to tune the algorithms).&lt;br /&gt;
#&lt;br /&gt;
# This file is the controller to calculate the reputation scores.&lt;br /&gt;
# A 'reputation' measures how close a reviewer's scores are to other reviewers'&lt;br /&gt;
# scores.&lt;br /&gt;
# This controller implements the calculation of reputation scores.&lt;br /&gt;
class ReputationWebServiceController &amp;lt; ApplicationController&lt;br /&gt;
  include AuthorizationHelper&lt;br /&gt;
&lt;br /&gt;
  # Method: action_allowed&lt;br /&gt;
  # This method checks if the currently authenticated user has the authorization&lt;br /&gt;
  # to perform certain actions&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   true if the user has privileges to perform the action else returns false&lt;br /&gt;
  def action_allowed?&lt;br /&gt;
    current_user_has_ta_privileges?&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_max_question_score&lt;br /&gt;
  # This method receives a set of answers and gets the maximum question score&lt;br /&gt;
  # Params&lt;br /&gt;
  #   answers: set of answers&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   if no error returns max_question_score of first question else 1&lt;br /&gt;
  def get_max_question_score(answers)&lt;br /&gt;
    begin&lt;br /&gt;
      answers.first.question.questionnaire.max_question_score&lt;br /&gt;
    rescue StandardError&lt;br /&gt;
      1&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_valid_answers_for_response&lt;br /&gt;
  # This method receives response and filters the valid answers list of the&lt;br /&gt;
  # response ID&lt;br /&gt;
  # Params&lt;br /&gt;
  #   response&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   set of valid answers (returns nil if empty)&lt;br /&gt;
  def get_valid_answers_for_response(response)&lt;br /&gt;
    answers = Answer.where(response_id: response.id)&lt;br /&gt;
    valid_answer = answers.select { |answer| (answer.question.type == 'Criterion') &amp;amp;&amp;amp; !answer.answer.nil? }&lt;br /&gt;
    valid_answer.empty? ? nil : valid_answer&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: calculate_peer_review_grade&lt;br /&gt;
  # This method calculates a cumulative review grade with respect to the set of valid answers&lt;br /&gt;
  # Params&lt;br /&gt;
  #   valid_answer: valid answer to get weight of the answer's question&lt;br /&gt;
  #   max_question_score: used to calculate maximum score for peer review grade&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   peer_review_grade&lt;br /&gt;
  def calculate_peer_review_grade(valid_answer, max_question_score)&lt;br /&gt;
    weighted_score_sum = valid_answer.map { |answer| answer.answer * answer.question.weight }.inject(:+)&lt;br /&gt;
    question_weight_sum = valid_answer.sum { |answer| answer.question.weight }&lt;br /&gt;
    peer_review_grade = 100.0 * weighted_score_sum / (question_weight_sum * max_question_score)&lt;br /&gt;
    peer_review_grade.round(4)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_peer_reviews_for_responses&lt;br /&gt;
  # This method calculates the peer review grade for each valid response&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reviewer_id: used to create respective element in the peer_review_grades_list&lt;br /&gt;
  #   team_id: used to create respective element in the peer_review_grades_list&lt;br /&gt;
  #   valid_response: to get the valid answer for each valid response&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   peer_review_grades_list&lt;br /&gt;
  def get_peer_reviews_for_responses(reviewer_id, team_id, valid_response)&lt;br /&gt;
    peer_review_grades_list = []&lt;br /&gt;
    valid_response.each do |response|&lt;br /&gt;
      valid_answer = get_valid_answers_for_response(response)&lt;br /&gt;
      next if valid_answer.nil?&lt;br /&gt;
&lt;br /&gt;
      review_grade = calculate_peer_review_grade(valid_answer, get_max_question_score(valid_answer))&lt;br /&gt;
      peer_review_grades_list &amp;lt;&amp;lt; [reviewer_id, team_id, review_grade]&lt;br /&gt;
    end&lt;br /&gt;
    peer_review_grades_list&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_peer_reviews&lt;br /&gt;
  # This method retrieves all the reviews for the submissions&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: used to retrieve response map&lt;br /&gt;
  #   round_num: used to retrieve round_num for the valid response&lt;br /&gt;
  #   has_topic: to get the topic condition&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: which corresponds to the return of&lt;br /&gt;
  #     get_peer_reviews_for_responses method and appended to the raw_data_array&lt;br /&gt;
  def get_peer_reviews(assignment_id_list, round_num, has_topic)&lt;br /&gt;
    raw_data_array = []&lt;br /&gt;
    ReviewResponseMap.where('reviewed_object_id in (?) and calibrate_to = ?', assignment_id_list, false).each do |response_map|&lt;br /&gt;
      reviewer = response_map.reviewer.user&lt;br /&gt;
      team = AssignmentTeam.find(response_map.reviewee_id)&lt;br /&gt;
      topic_condition = ((has_topic &amp;amp;&amp;amp; (SignedUpTeam.where(team_id: team.id).first.is_waitlisted == false)) || !has_topic)&lt;br /&gt;
      last_valid_response = response_map.response.select { |r| r.round == round_num }.max&lt;br /&gt;
      valid_response = [last_valid_response] unless last_valid_response.nil?&lt;br /&gt;
      if (topic_condition == true) &amp;amp;&amp;amp; !valid_response.nil? &amp;amp;&amp;amp; !valid_response.empty?&lt;br /&gt;
        raw_data_array += get_peer_reviews_for_responses(reviewer.id, team.id, valid_response)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
    raw_data_array&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_ids_list&lt;br /&gt;
  # This method maps each object to the corresponding object's ID&lt;br /&gt;
  # Params&lt;br /&gt;
  #   tables: any table&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   id in the tables&lt;br /&gt;
  def get_ids_list(tables)&lt;br /&gt;
    tables.map(&amp;amp;:id)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_scores&lt;br /&gt;
  # This method gets the quiz score of each participant for respective reviewee&lt;br /&gt;
  # Params&lt;br /&gt;
  #   team_ids: list of team IDs&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: which is a list of participant, reviewee and the participant's quiz score&lt;br /&gt;
  def get_scores(team_ids)&lt;br /&gt;
    quiz_questionnnaires = QuizQuestionnaire.where('instructor_id in (?)', team_ids)&lt;br /&gt;
    quiz_questionnnaire_ids = get_ids_list(quiz_questionnnaires)&lt;br /&gt;
    raw_data_array = []&lt;br /&gt;
    QuizResponseMap.where('reviewed_object_id in (?)', quiz_questionnnaire_ids).each do |response_map|&lt;br /&gt;
      quiz_score = response_map.quiz_score&lt;br /&gt;
      participant = Participant.find(response_map.reviewer_id)&lt;br /&gt;
      raw_data_array &amp;lt;&amp;lt; [participant.user_id, response_map.reviewee_id, quiz_score]&lt;br /&gt;
    end&lt;br /&gt;
    raw_data_array&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_quiz_score&lt;br /&gt;
  # This method gets the quiz score of assignments&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment IDs&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: returned by get_scores method, which is a list of participant,&lt;br /&gt;
  #     reviewee and the participant's quiz score&lt;br /&gt;
  def get_quiz_score(assignment_id_list)&lt;br /&gt;
    teams = AssignmentTeam.where('parent_id in (?)', assignment_id_list)&lt;br /&gt;
    team_ids = get_ids_list(teams)&lt;br /&gt;
    get_scores(team_ids)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_body&lt;br /&gt;
  # This method generates json body for the peer reviews and quiz scores&lt;br /&gt;
  # Params&lt;br /&gt;
  #   results: list of grades with corresponding team/participant ID,&lt;br /&gt;
  #     reviewee ID and their score&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: returns the formatted body after sorting the hash&lt;br /&gt;
  def generate_json_body(results)&lt;br /&gt;
    request_body = {}&lt;br /&gt;
    results.each_with_index do |record, _index|&lt;br /&gt;
      request_body['submission' + record[1].to_s] = {} unless request_body.key?('submission' + record[1].to_s)&lt;br /&gt;
      request_body['submission' + record[1].to_s]['stu' + record[0].to_s] = record[2]&lt;br /&gt;
    end&lt;br /&gt;
    # sort the 2-dimension hash&lt;br /&gt;
    request_body.each { |k, v| request_body[k] = v.sort.to_h }&lt;br /&gt;
    request_body.sort.to_h&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_for_peer_reviews&lt;br /&gt;
  # This method retrieves all the peer reviews associated with&lt;br /&gt;
  # the assignment id list by calling the get_peer_reviews method.&lt;br /&gt;
  # It then formats the peer-review list in JSON.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment ids to get quiz scores for&lt;br /&gt;
  #   round_num: round number of the review&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: request body populated with the formatted peer review data.&lt;br /&gt;
  def generate_json_for_peer_reviews(assignment_id_list, round_num = 2)&lt;br /&gt;
    has_topic = !SignUpTopic.where(assignment_id: assignment_id_list[0]).empty?&lt;br /&gt;
&lt;br /&gt;
    peer_reviews_list = get_peer_reviews(assignment_id_list, round_num, has_topic)&lt;br /&gt;
    request_body = generate_json_body(peer_reviews_list)&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_for_quiz_scores&lt;br /&gt;
  # This method accepts a list of assignment ids as an argument.&lt;br /&gt;
  # It then calls the get_quiz_score method on the list to get&lt;br /&gt;
  # maps of teams and scores for the given assignments.&lt;br /&gt;
  # The map is then formatted into JSON.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment ids to get quiz scores for&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: request body populated with quiz scores&lt;br /&gt;
  def generate_json_for_quiz_scores(assignment_id_list)&lt;br /&gt;
    participant_reviewee_map = get_quiz_score(assignment_id_list)&lt;br /&gt;
    request_body = generate_json_body(participant_reviewee_map)&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: client&lt;br /&gt;
  # This method is called when the url reputation_web_service/client&lt;br /&gt;
  # is hit using GET method.&lt;br /&gt;
  # This renders the client.html.erb&lt;br /&gt;
  # It also populates the instance variables to be used in the views&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def client&lt;br /&gt;
    @max_assignment_id = Assignment.last.id&lt;br /&gt;
    @assignment = Assignment.find(flash[:assignment_id]) rescue nil&lt;br /&gt;
    @another_assignment = Assignment.find(flash[:another_assignment_id]) rescue nil&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: update_participants_reputation&lt;br /&gt;
  # This method accepts the response body in the JSON format.&lt;br /&gt;
  # It then parses the JSON and updates the reputation scores of the&lt;br /&gt;
  # participants in the list.&lt;br /&gt;
  # If the algorithm is not Hamer or Lauw, the update step is skipped.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reputation_response: The response from the reputation web service&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def update_participants_reputation(reputation_response)&lt;br /&gt;
    JSON.parse(reputation_response.body.to_s).each do |reputation_algorithm, user_reputation_list|&lt;br /&gt;
      next unless %w[Hamer Lauw].include?(reputation_algorithm)&lt;br /&gt;
&lt;br /&gt;
      user_reputation_list.each do |user_id, reputation|&lt;br /&gt;
        Participant.find_by(user_id: user_id).update(reputation_algorithm.to_sym =&amp;gt; reputation) unless /leniency/ =~ user_id.to_s&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: process_response_body&lt;br /&gt;
  # This method gets the control after receiving a response from the server.&lt;br /&gt;
  # It receives the response body as an argument&lt;br /&gt;
  # It updates the instance variables related to the response.&lt;br /&gt;
  # It then calls the update_participants_reputation to update the reputation&lt;br /&gt;
  # scores received in the response body.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reputation_response: The response from the reputation web service&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def process_response_body(reputation_response)&lt;br /&gt;
    flash[:response] = reputation_response&lt;br /&gt;
    flash[:response_body] = reputation_response.body&lt;br /&gt;
    update_participants_reputation(reputation_response)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_expert_grades&lt;br /&gt;
  # It prepends the request body with the expert grades pertaining&lt;br /&gt;
  # to the default wiki contribution case of 754.&lt;br /&gt;
  # It receives the request body as an argument and prepends it&lt;br /&gt;
  # Params&lt;br /&gt;
  #   body: The request body to add the expert grades to&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   body prepended with the expert grades&lt;br /&gt;
  def add_expert_grades(body)&lt;br /&gt;
    flash[:additional_info] = 'add expert grades'&lt;br /&gt;
    case params[:assignment_id]&lt;br /&gt;
    when '754' # expert grades of Wiki contribution (754)&lt;br /&gt;
      body.prepend('&amp;quot;expert_grades&amp;quot;: {&amp;quot;submission25030&amp;quot;:95,&amp;quot;submission25031&amp;quot;:92,&amp;quot;submission25033&amp;quot;:88,&amp;quot;submission25034&amp;quot;:98,&amp;quot;submission25035&amp;quot;:100,&amp;quot;submission25037&amp;quot;:95,&amp;quot;submission25038&amp;quot;:95,&amp;quot;submission25039&amp;quot;:93,&amp;quot;submission25040&amp;quot;:96,&amp;quot;submission25041&amp;quot;:90,&amp;quot;submission25042&amp;quot;:100,&amp;quot;submission25046&amp;quot;:95,&amp;quot;submission25049&amp;quot;:90,&amp;quot;submission25050&amp;quot;:88,&amp;quot;submission25053&amp;quot;:91,&amp;quot;submission25054&amp;quot;:96,&amp;quot;submission25055&amp;quot;:94,&amp;quot;submission25059&amp;quot;:96,&amp;quot;submission25071&amp;quot;:85,&amp;quot;submission25082&amp;quot;:100,&amp;quot;submission25086&amp;quot;:95,&amp;quot;submission25097&amp;quot;:90,&amp;quot;submission25098&amp;quot;:85,&amp;quot;submission25102&amp;quot;:97,&amp;quot;submission25103&amp;quot;:94,&amp;quot;submission25105&amp;quot;:98,&amp;quot;submission25114&amp;quot;:95,&amp;quot;submission25115&amp;quot;:94},')&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_quiz_scores&lt;br /&gt;
  # It gets the assignment id list and generates the json on quiz scores of&lt;br /&gt;
  # those assignments.&lt;br /&gt;
  # The processed quiz string is then prepended to the request body, which is&lt;br /&gt;
  # received as an argument, and the body is returned to prepare_request_body.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   body: The request body to add the quiz scores to&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   body prepended with the quiz scores&lt;br /&gt;
  def add_quiz_scores(body)&lt;br /&gt;
    flash[:additional_info] = 'add quiz scores'&lt;br /&gt;
    assignment_id_list_quiz = get_assignment_id_list(params[:assignment_id].to_i, params[:another_assignment_id].to_i)&lt;br /&gt;
    quiz_str =  generate_json_for_quiz_scores(assignment_id_list_quiz).to_json&lt;br /&gt;
    quiz_str[0] = '' # remove first {&lt;br /&gt;
    quiz_str.prepend('&amp;quot;quiz_scores&amp;quot;:{') # add quiz_scores tag&lt;br /&gt;
    quiz_str += ','&lt;br /&gt;
    quiz_str = quiz_str.gsub('&amp;quot;N/A&amp;quot;', '20.0') # replace N/A values with 20&lt;br /&gt;
    body.prepend(quiz_str)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_hamer_reputation_values&lt;br /&gt;
  # This method sets the flash message flash[:additional_info].&lt;br /&gt;
  # This method is called by the prepare_request_body method&lt;br /&gt;
  # when params receive instruction through the corresponding view's checkbox.&lt;br /&gt;
  # THIS METHOD IS NOT IMPLEMENTED&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_hamer_reputation_values&lt;br /&gt;
    flash[:additional_info] = 'add initial hamer reputation values'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_lauw_reputation_values&lt;br /&gt;
  # This method sets the flash message flash[:additional_info].&lt;br /&gt;
  # This method is called by the prepare_request_body method&lt;br /&gt;
  # when params receive instruction through the corresponding view's checkbox.&lt;br /&gt;
  # THIS METHOD IS NOT IMPLEMENTED&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_lauw_reputation_values&lt;br /&gt;
    flash[:additional_info] = 'add initial lauw reputation values'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_assignment_id_list&lt;br /&gt;
  # This method on receipt of individual assignment IDs returns a list with all&lt;br /&gt;
  # the assignment IDs appended into a data structure&lt;br /&gt;
  # This function accepts 2 arguments, with the second argument being optional,&lt;br /&gt;
  # and returns the list assignment_id_list&lt;br /&gt;
  # If the second argument is 0, it is not appended to the list.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_one: first assignment id (required)&lt;br /&gt;
  #   assignment_id_two: second assignment id (optional)&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   assignment_id_list: list containing one or two assignment ids&lt;br /&gt;
  def get_assignment_id_list(assignment_id_one, assignment_id_two = 0)&lt;br /&gt;
    assignment_id_list = []&lt;br /&gt;
    assignment_id_list &amp;lt;&amp;lt; assignment_id_one&lt;br /&gt;
    assignment_id_list &amp;lt;&amp;lt; assignment_id_two unless assignment_id_two.zero?&lt;br /&gt;
    assignment_id_list&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_flash_messages&lt;br /&gt;
  # This method sets the flash messages to pass on to the next request i.e&lt;br /&gt;
  # the request redirected to the client&lt;br /&gt;
  # Params&lt;br /&gt;
  #   post_req: This contains the entire post_req that needs to be sent to the reputation&lt;br /&gt;
  #     webservice&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_flash_messages(post_req)&lt;br /&gt;
    flash[:assignment_id] = params[:assignment_id]&lt;br /&gt;
    flash[:round_num] = params[:round_num]&lt;br /&gt;
    flash[:algorithm] = params[:algorithm]&lt;br /&gt;
    flash[:another_assignment_id] = params[:another_assignment_id]&lt;br /&gt;
    flash[:request_body] = post_req.body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_additional_info_details&lt;br /&gt;
  # This method sets the additional info details based on the options&lt;br /&gt;
  # selected in the additional information section. We populate the request&lt;br /&gt;
  # based on the selections&lt;br /&gt;
  # Params&lt;br /&gt;
  #   post_req: This contains the entire post_req that needs to be sent to the reputation&lt;br /&gt;
  #     webservice&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_additional_info_details(post_req)&lt;br /&gt;
    if params[:checkbox][:expert_grade] == 'Add expert grades'&lt;br /&gt;
      add_expert_grades(post_req.body)&lt;br /&gt;
    elsif params[:checkbox][:hamer] == 'Add initial Hamer reputation values'&lt;br /&gt;
      add_hamer_reputation_values&lt;br /&gt;
    elsif params[:checkbox][:lauw] == 'Add initial Lauw reputation values'&lt;br /&gt;
      add_lauw_reputation_values&lt;br /&gt;
    elsif params[:checkbox][:quiz] == 'Add quiz scores'&lt;br /&gt;
      add_quiz_scores(post_req.body)&lt;br /&gt;
    else&lt;br /&gt;
      flash[:additional_info] = ''&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: prepare_request_body&lt;br /&gt;
  # This method is responsible for preparing the request body in a proper format&lt;br /&gt;
  # to send to the server. It populates the assignment scores and peer review&lt;br /&gt;
  # scores. It also populates the flash messages to send to the next request&lt;br /&gt;
  # It finally returns the prepared request to the send_post_request&lt;br /&gt;
  # method.&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def prepare_request_body&lt;br /&gt;
    reputation_web_service_path = URI.parse(WEBSERVICE_CONFIG['reputation_web_service_url']).path&lt;br /&gt;
    post_req = Net::HTTP::Post.new(reputation_web_service_path, { 'Content-Type' =&amp;gt; 'application/json', 'charset' =&amp;gt; 'utf-8' })&lt;br /&gt;
    curr_assignment_id = (params[:assignment_id].empty? ? '754' : params[:assignment_id])&lt;br /&gt;
    assignment_id_list_peers = get_assignment_id_list(curr_assignment_id, params[:another_assignment_id].to_i)&lt;br /&gt;
&lt;br /&gt;
    post_req.body = generate_json_for_peer_reviews(assignment_id_list_peers, params[:round_num].to_i).to_json&lt;br /&gt;
&lt;br /&gt;
    post_req.body[0] = '' # remove the leading '{' so extra keys can be prepended&lt;br /&gt;
    add_additional_info_details post_req&lt;br /&gt;
    post_req.body.prepend('{')&lt;br /&gt;
    add_flash_messages post_req&lt;br /&gt;
    post_req&lt;br /&gt;
  end&lt;br /&gt;
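In isolation, the brace splicing performed by prepare_request_body looks like the sketch below; the 'expert_grades' key is an illustrative stand-in for whatever add_additional_info_details prepends.&lt;br /&gt;

```ruby
require 'json'

# Minimal sketch of the brace splicing used in prepare_request_body:
# the leading '{' is removed so extra top-level keys can be pushed onto
# the front of the JSON string, then the brace is restored.
body = { 'submission1' => { 'stu1' => 90 } }.to_json
body[0] = ''                                    # drop the leading '{'
body.prepend('"expert_grades": {"stu1": 95},')  # splice in an extra key
body.prepend('{')                               # restore the brace

parsed = JSON.parse(body)
# parsed now has both 'expert_grades' and 'submission1' at the top level.
```

The spliced string stays valid JSON because the extra key is followed by a comma before the original first key.&lt;br /&gt;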
&lt;br /&gt;
  # Method: send_post_request&lt;br /&gt;
  # This method calls prepare_request_body to get a request body in the&lt;br /&gt;
  # proper format, sends it to the reputation web service, and flashes an&lt;br /&gt;
  # error message if the service reports a client or server error.&lt;br /&gt;
  # We redirect to the client url to display the results.&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def send_post_request&lt;br /&gt;
    post_req = prepare_request_body&lt;br /&gt;
    reputation_web_service_hostname = URI.parse(WEBSERVICE_CONFIG['reputation_web_service_url']).host&lt;br /&gt;
    reputation_response = Net::HTTP.new(reputation_web_service_hostname).start { |http| http.request(post_req) }&lt;br /&gt;
    if reputation_response.code.to_i &amp;gt;= 400&lt;br /&gt;
      flash[:error] = 'Post Request Failed'&lt;br /&gt;
    else&lt;br /&gt;
      process_response_body(reputation_response)&lt;br /&gt;
    end&lt;br /&gt;
    redirect_to action: 'client'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: calculate_reputation_score&lt;br /&gt;
  # This method calculates the reputation scores for each reviewer based on&lt;br /&gt;
  # the provided review marks.&lt;br /&gt;
  # It first calculates the average grade each submission received.&lt;br /&gt;
  # Then, it calculates the delta R value for each reviewer, i.e. the mean&lt;br /&gt;
  # squared difference between the reviewer's marks and the average grades.&lt;br /&gt;
  # Next, it calculates the weight prime values based on the delta R values.&lt;br /&gt;
  # Finally, it applies logarithmic dampening to obtain the reputation&lt;br /&gt;
  # weights for each reviewer.&lt;br /&gt;
  #&lt;br /&gt;
  # Params&lt;br /&gt;
  #   - reviews: reviewer-major array of arrays; reviews[i][j] is the mark&lt;br /&gt;
  #     reviewer i gave submission j (nil for an incomplete review)&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   An array of reputation scores, one score per reviewer, indicating their reputation in the system.&lt;br /&gt;
  def calculate_reputation_score(reviews)&lt;br /&gt;
    # Initialize arrays to store intermediate values&lt;br /&gt;
    grades = []&lt;br /&gt;
    delta_r = []&lt;br /&gt;
    weight_prime = []&lt;br /&gt;
    weight = []&lt;br /&gt;
&lt;br /&gt;
    # Calculate the average grade for each submission (skipping nil marks)&lt;br /&gt;
    num_submissions = reviews.map(&amp;amp;:length).max&lt;br /&gt;
    (0...num_submissions).each do |submission_index|&lt;br /&gt;
      submission_marks = reviews.map { |marks| marks[submission_index] }.compact&lt;br /&gt;
      grades &amp;lt;&amp;lt; submission_marks.sum.to_f / submission_marks.length&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    # Calculate delta R: the mean squared difference between each reviewer's&lt;br /&gt;
    # marks and the average submission grades, skipping nil marks so that&lt;br /&gt;
    # indices stay aligned with the submissions&lt;br /&gt;
    reviews.each do |reviewer_marks|&lt;br /&gt;
      reviewer_delta_r = 0&lt;br /&gt;
      marks_count = 0&lt;br /&gt;
      reviewer_marks.each_with_index do |grade, submission_index|&lt;br /&gt;
        next if grade.nil?&lt;br /&gt;
&lt;br /&gt;
        marks_count += 1&lt;br /&gt;
        reviewer_delta_r += (grade - grades[submission_index]) ** 2&lt;br /&gt;
      end&lt;br /&gt;
      delta_r &amp;lt;&amp;lt; reviewer_delta_r / marks_count&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    # Calculate weight prime&lt;br /&gt;
    average_delta_r = delta_r.sum / delta_r.length.to_f&lt;br /&gt;
&lt;br /&gt;
    delta_r.each do |reviewer_delta_r|&lt;br /&gt;
      weight_prime &amp;lt;&amp;lt; average_delta_r / reviewer_delta_r&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    # Calculate reputation weight, dampening weights above 2 logarithmically&lt;br /&gt;
    weight_prime.each do |reviewer_weight_prime|&lt;br /&gt;
      if reviewer_weight_prime &amp;lt;= 2&lt;br /&gt;
        weight &amp;lt;&amp;lt; reviewer_weight_prime.round(2)&lt;br /&gt;
      else&lt;br /&gt;
        weight &amp;lt;&amp;lt; (2 + Math.log(reviewer_weight_prime - 1)).round(2)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    # Return the reputation weights&lt;br /&gt;
    weight&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
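As a quick sanity check, the same weight computation can be exercised standalone on a small reviewer-major matrix. The helper name and data below are illustrative, not part of Expertiza:&lt;br /&gt;

```ruby
# Standalone sketch of the weight computation above (illustrative names
# and data): rows are reviewers, columns are submissions.
def hamer_weights(reviews)
  num_submissions = reviews.map(&:length).max
  # Essay grades: the average of the marks each submission received.
  grades = (0...num_submissions).map do |j|
    marks = reviews.map { |row| row[j] }.compact
    marks.sum.to_f / marks.length
  end
  # Delta R: mean squared deviation of each reviewer from the essay grades.
  delta_r = reviews.map do |row|
    pairs = row.each_with_index.reject { |mark, _| mark.nil? }
    pairs.sum { |mark, j| (mark - grades[j])**2 } / pairs.length
  end
  average_delta_r = delta_r.sum / delta_r.length
  # Weight prime, with logarithmic dampening above 2.
  delta_r.map do |d|
    wp = average_delta_r / d
    (wp <= 2 ? wp : 2 + Math.log(wp - 1)).round(2)
  end
end

# Reviewer 2 tracks the consensus most closely and earns the top weight;
# the max-to-all reviewer (row 3) is flagged with a low weight.
hamer_weights([[8, 4, 6], [7, 5, 6], [10, 10, 10]])  # => [1.64, 2.22, 0.51]
```

The low third weight shows the flagging behaviour the test scenarios below look for.&lt;br /&gt;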
&lt;br /&gt;
== Objective 4: Validate the accuracy of the newly implemented Hamer algorithm ==&lt;br /&gt;
&lt;br /&gt;
We test the newly implemented Hamer algorithm function with our scenarios and verify if they match the expected values. &lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe ReputationWebServiceController do&lt;br /&gt;
    it &amp;quot;should calculate correct Hamer calculation&amp;quot; do&lt;br /&gt;
      weights = ReputationWebServiceController.new.calculate_reputation_score(reviews)&lt;br /&gt;
      keys = [&amp;quot;maxtoall&amp;quot;, &amp;quot;mintoall&amp;quot;, &amp;quot;mediantoall&amp;quot;, &amp;quot;incomplete_review&amp;quot;, &amp;quot;sametoall&amp;quot;, &amp;quot;passing1&amp;quot;, &amp;quot;passing2&amp;quot;, &amp;quot;passing3&amp;quot;]&lt;br /&gt;
      rounded_weights = weights.map { |w| w.round(1) }&lt;br /&gt;
      result_hash = keys.zip(rounded_weights).to_h&lt;br /&gt;
      expect(result_hash).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results ===&lt;br /&gt;
&lt;br /&gt;
[[File:Results hamer new.jpeg]]&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
As we can see, the results are skewed toward lower values. This is because nil (missing review) values were included and assumed to be zero, and that assumption pulls the scores down. Substituting the median, or a random value, for missing reviews should be considered to reduce this skew.&lt;br /&gt;
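A minimal sketch (not Expertiza code) of that refinement: filling each missing mark with that submission's median instead of zero keeps incomplete reviewers near the consensus. The helper name and data are illustrative.&lt;br /&gt;

```ruby
# Fill missing (nil) marks either with 0 or with the per-submission median.
def fill_missing(reviews, strategy = :median)
  num_submissions = reviews.map(&:length).max
  # Median mark each submission actually received (lower median for ties).
  medians = (0...num_submissions).map do |j|
    marks = reviews.map { |row| row[j] }.compact.sort
    marks[marks.length / 2]
  end
  reviews.map do |row|
    (0...num_submissions).map do |j|
      next row[j] unless row[j].nil?
      strategy == :median ? medians[j] : 0
    end
  end
end

reviews = [[8, nil, 6], [7, 5, 6], [9, 4, nil]]
median_fill = fill_missing(reviews, :median)  # => [[8, 5, 6], [7, 5, 6], [9, 4, 6]]
zero_fill   = fill_missing(reviews, :zero)    # => [[8, 0, 6], [7, 5, 6], [9, 4, 0]]
```

With the median fill, the incomplete reviewers' delta R no longer balloons from artificial zeros, so their weights are less skewed downward.&lt;br /&gt;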
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this project, we aimed to test the accuracy of the Hamer algorithm used for assessing the credibility of reviewers in a peer assessment system. We began by developing code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values. These scenarios covered various review scenarios, including cases where reviewers provided extreme scores.&lt;br /&gt;
&lt;br /&gt;
Upon testing the original reputation web server's Hamer values, we found discrepancies between the expected values and the values returned by the web service. This led us to conclude that the webservice implementation was incorrect.&lt;br /&gt;
&lt;br /&gt;
As a result, we proceeded to reimplement the Hamer algorithm in Ruby, incorporating adjustments to handle nil values appropriately. Subsequently, we validated the accuracy of the newly implemented algorithm using the same testing scenarios. While the results initially showed a skew towards lower values due to our treatment of nil values, we acknowledge the need for further refinement to handle these cases more effectively.&lt;br /&gt;
&lt;br /&gt;
In conclusion, this project highlights the importance of rigorous testing and implementation adjustments in ensuring the reliability of algorithms used in peer assessment systems. Moving forward, we recommend further refinements and validations to enhance the accuracy and robustness of the Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
==GitHub Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [https://github.com/expertiza/expertiza/pull/2778 here]&lt;br /&gt;
&lt;br /&gt;
Link to GitHub Project page: [https://github.com/users/Prachit99/projects/1 here]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [https://drive.google.com/file/d/1gZ5iDgqMW3COOT9Uw-_yhJlwJLZDxz-s/view?usp=sharing here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154655</id>
		<title>CSC/ECE 517 Spring 2024 - E2412. Testing for hamer.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154655"/>
		<updated>2024-03-25T04:14:57Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* GitHub Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 Program 3: First OSS project E2412. Testing for hamer.rb&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
&lt;br /&gt;
The practice of using student feedback on assignments as a grading tool is gaining traction among university professors and courses. This approach not only saves instructors and teaching assistants considerable time but also fosters a deeper understanding of assignments among students as they evaluate their peers' work. However, there is a concern that some students may not take their reviewing responsibilities seriously, potentially skewing the grading process by assigning extreme scores such as 100 or 0 arbitrarily. To address this issue, the Hamer algorithm was developed to assess the credibility and accuracy of reviewers. It generates reputation weights for each reviewer, which instructors can use to gauge their reliability or incorporate into grading calculations. Our goal here is to test this Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values.&lt;br /&gt;
* Verify the correctness of the reputation web server's Hamer values by accessing the URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.&lt;br /&gt;
* Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values.&lt;br /&gt;
* Validate the accuracy of the newly implemented Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* reimplemented algorithm: /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
* test file: /spec/controllers/reputation_mock_web_server_hamer.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Hamer Algorithm ==&lt;br /&gt;
&lt;br /&gt;
The grading algorithm described in the paper is designed to provide a reward to reviewers who participate effectively by allocating a portion of the assignment mark to the review, with the review mark reflecting the quality of the grading. Here's an explanation of the algorithm:&lt;br /&gt;
&lt;br /&gt;
1. Review Allocation: Each reviewer is assigned a number of essays to grade. The paper suggests assigning at least five essays, with ten being ideal. Assuming each review takes 20 minutes, ten reviews can be completed in about three and a half hours.&lt;br /&gt;
&lt;br /&gt;
2. Grading Process:&lt;br /&gt;
* Once the reviewing is complete, grades are generated for each essay and weights are assigned to each reviewer.&lt;br /&gt;
* The essay grades are computed by averaging the individual grades from all the reviewers assigned to that essay.&lt;br /&gt;
* Initially, all reviewers are given equal weight in the averaging process.&lt;br /&gt;
* The algorithm assumes that some reviewers will perform better than others. It measures this by comparing the grades assigned by each reviewer to the averaged grades. The larger the difference between the assigned and averaged grades, the more out of step the reviewer is considered with the consensus view of the class.&lt;br /&gt;
* The algorithm adjusts the weighting of the reviewers based on this difference. Reviewers who are closer to the consensus view are given higher weights, while those who deviate significantly are given lower weights.&lt;br /&gt;
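The weighted-averaging step described above can be sketched as follows; the helper name and marks are illustrative:&lt;br /&gt;

```ruby
# An essay's grade is the weighted mean of its reviewers' marks, so each
# reviewer's reputation weight scales their influence on the consensus.
def weighted_essay_grade(marks, weights)
  weighted_sum = marks.zip(weights).sum { |mark, weight| mark * weight }
  (weighted_sum.to_f / weights.sum).round(2)
end

# Equal weights reduce to a plain average ...
equal = weighted_essay_grade([8, 6, 10], [1, 1, 1])     # => 8.0
# ... while down-weighting the third (rogue) reviewer pulls the grade
# back toward the two credible marks.
damped = weighted_essay_grade([8, 6, 10], [1, 1, 0.2])  # => 7.27
```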
&lt;br /&gt;
3. Iterative Process:&lt;br /&gt;
* The calculation of grades and weights is an iterative process. Each time the grades are calculated, the weights need to be updated, and each change in the weights affects the grades.&lt;br /&gt;
* Convergence occurs quickly, typically requiring four to six iterations before a solution (a &amp;quot;fix-point&amp;quot;) is reached.&lt;br /&gt;
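A hedged sketch of this fix-point loop, with illustrative helper names and data; the formulas mirror the steps described above:&lt;br /&gt;

```ruby
# Essay grades from the current weights: weighted mean per submission.
def essay_grades(reviews, weights)
  (0...reviews.first.length).map do |j|
    total = reviews.each_index.sum { |i| weights[i] * reviews[i][j] }
    total / weights.sum
  end
end

# New weights from the current grades: inverse of each reviewer's mean
# squared deviation, relative to the class average, with log dampening.
def reviewer_weights(reviews, grades)
  deltas = reviews.map do |row|
    row.each_with_index.sum { |mark, j| (mark - grades[j])**2 } / row.length
  end
  avg = deltas.sum / deltas.length
  deltas.map { |d| wp = avg / d; wp <= 2 ? wp : 2 + Math.log(wp - 1) }
end

reviews = [[8, 4, 6], [7, 5, 6], [10, 10, 10]]  # rows are reviewers
weights = Array.new(reviews.length, 1.0)
# Recompute grades and weights in turn; a handful of rounds suffices.
6.times { weights = reviewer_weights(reviews, essay_grades(reviews, weights)) }
# The two consensus reviewers end with higher weights than the rogue one.
```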
&lt;br /&gt;
4. Weight Adjustment:&lt;br /&gt;
* The weights assigned to reviewers are adjusted based on the difference between the assigned and averaged grades. Reviewers with larger discrepancies have their weights adjusted inversely proportional to this difference.&lt;br /&gt;
* To prevent excessively large weights, a logarithmic dampening function is applied, allowing weights to rise to twice the class average before further increases are awarded sparingly.&lt;br /&gt;
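The dampening rule can be seen numerically in this small sketch (the lambda is illustrative):&lt;br /&gt;

```ruby
# A weight prime at or below 2 passes through unchanged; above that,
# further growth is only logarithmic.
dampen = ->(weight_prime) do
  weight_prime <= 2 ? weight_prime : 2 + Math.log(weight_prime - 1)
end

dampen.call(1.5).round(2)   # => 1.5  (below the cap, unchanged)
dampen.call(4.0).round(2)   # => 3.1  (2 + ln 3)
dampen.call(10.0).round(2)  # => 4.2  (2 + ln 9)
```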
&lt;br /&gt;
5. Properties:&lt;br /&gt;
* The algorithm aims to identify and diminish the impact of &amp;quot;rogue&amp;quot; reviewers who may inject random or arbitrary grades into the peer assessment process.&lt;br /&gt;
* By adjusting reviewer weights based on their grading accuracy, the algorithm aims to improve the reliability of the grading process in the presence of such rogue reviewers.&lt;br /&gt;
&lt;br /&gt;
Overall, the algorithm seeks to balance the contributions of different reviewers based on the accuracy of their grading, ultimately aiming to produce reliable grades for each essay in a peer assessment scenario.&lt;br /&gt;
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios ==&lt;br /&gt;
We assumed 10 reviewers reviewing up to 4 submissions each to cover the following test scenarios:&lt;br /&gt;
* 3 cases where reviewers are giving credible scores (passing1, passing2, passing3)&lt;br /&gt;
* case where a reviewer gives the max score (10) to all submissions (should be flagged)&lt;br /&gt;
* case where a reviewer gives the min score (1) to all submissions (should be flagged)&lt;br /&gt;
* case where a reviewer gives the median score (5) to all submissions (should be flagged)&lt;br /&gt;
* case where a reviewer gives the same score to all submissions (should be flagged)&lt;br /&gt;
* cases where a reviewer leaves some reviews incomplete (incomplete_review, max_incomplete, min_incomplete)&lt;br /&gt;
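One way to express these scenarios as a reviewer-major matrix is sketched below; the marks are taken from the input object in the next section, with nil modelling an incomplete review:&lt;br /&gt;

```ruby
# Each row is one scenario reviewer's marks for submissions 1..4.
scenarios = {
  'maxtoall'          => [10, 10, 10, 10],
  'mintoall'          => [1, 1, 1, 1],
  'mediantoall'       => [5, 5, 5, 5],
  'sametoall'         => [3, 3, 3, 3],
  'incomplete_review' => [4, 2, nil, nil],  # reviews 3 and 4 missing
  'passing1'          => [10, 3, 7, 6],
  'passing2'          => [10, 2, 4, 4],
  'passing3'          => [9, 4, 5, 5]
}
reviews = scenarios.values
```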
&lt;br /&gt;
=== Object Creation ===&lt;br /&gt;
Below is the Input object for tests that cover all the above scenarios:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
INPUTS_new = {&lt;br /&gt;
    &amp;quot;submission1&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 9&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission2&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission3&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 7,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission4&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 6,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Expected Hamer Values ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
EXPECTED = {&lt;br /&gt;
    &amp;quot;Hamer&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 2.65,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 2.41,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 1.03,&lt;br /&gt;
        &amp;quot;incomplete_review&amp;quot;: 2.31,&lt;br /&gt;
        &amp;quot;max_incomplete&amp;quot;: 2.57,&lt;br /&gt;
        &amp;quot;min_incomplete&amp;quot;: 2.48,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;:1.58,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 2.17,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 1.73,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 1.23,&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Verify the correctness of the reputation web server's Hamer values ==&lt;br /&gt;
&lt;br /&gt;
We test the original reputation web server's algorithm with our scenarios and verify whether the results match the expected values. The PeerLogic server can be accessed via API calls to the URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.&lt;br /&gt;
It uses two algorithms: the Hamer-peer algorithm and the Lauw-peer algorithm. Our scope for this project is to test Hamer values, as previous work has established that the Hamer algorithm suits our use case better.&lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe &amp;quot;Expertiza&amp;quot; do&lt;br /&gt;
    it &amp;quot;should return the correct Hamer calculation&amp;quot; do&lt;br /&gt;
        uri = URI('http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms')&lt;br /&gt;
    &lt;br /&gt;
        response = Net::HTTP.post(uri, INPUTS, 'Content-Type' =&amp;gt; 'application/json')&lt;br /&gt;
    &lt;br /&gt;
        expect(JSON.parse(response.body)[&amp;quot;Hamer&amp;quot;]).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results ===&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
&lt;br /&gt;
We can see here that the values returned by this webservice don't match the expected values. Hence, we conclude that the webservice is not implemented correctly.&lt;br /&gt;
&lt;br /&gt;
== Objective 3: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values. ==&lt;br /&gt;
&lt;br /&gt;
As seen above, the values returned by the reputation server do not match the expected values. Hence, we concluded that the PeerLogic web service is implemented incorrectly. In this phase, we implemented the algorithm in Ruby as a function in a controller file: /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
&lt;br /&gt;
=== Changes made in implementation ===&lt;br /&gt;
* Coded the algorithm in Ruby in a controller.&lt;br /&gt;
* Included a way for the algorithm to handle nil values.&lt;br /&gt;
&lt;br /&gt;
=== Code Snippet ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'json'&lt;br /&gt;
require 'uri'&lt;br /&gt;
require 'net/http'&lt;br /&gt;
require 'openssl'&lt;br /&gt;
require 'base64'&lt;br /&gt;
&lt;br /&gt;
# Expertiza allows student work to be peer-reviewed, since peers can provide&lt;br /&gt;
# more feedback than the instructor can.&lt;br /&gt;
# However, if we want to assure that all students receive competent feedback,&lt;br /&gt;
# or even use peer-assigned grades,&lt;br /&gt;
# we need a way to judge which peer reviewers are most credible. The solution&lt;br /&gt;
# is the reputation system.&lt;br /&gt;
# Since reputation systems have been deployed as web services, peer-review&lt;br /&gt;
# researchers can use them to calculate scores on assignments,&lt;br /&gt;
# both past and present (past data can be used to tune the algorithms).&lt;br /&gt;
#&lt;br /&gt;
# This file is the controller to calculate the reputation scores.&lt;br /&gt;
# A 'reputation' measures how close a reviewer's scores are to other reviewers'&lt;br /&gt;
# scores.&lt;br /&gt;
# This controller implements the calculation of reputation scores.&lt;br /&gt;
class ReputationWebServiceController &amp;lt; ApplicationController&lt;br /&gt;
  include AuthorizationHelper&lt;br /&gt;
&lt;br /&gt;
  # Method: action_allowed&lt;br /&gt;
  # This method checks if the currently authenticated user has the authorization&lt;br /&gt;
  # to perform certain actions&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   true if the user has privileges to perform the action else returns false&lt;br /&gt;
  def action_allowed?&lt;br /&gt;
    current_user_has_ta_privileges?&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_max_question_score&lt;br /&gt;
  # This method receives a set of answers and gets the maximum question score&lt;br /&gt;
  # Params&lt;br /&gt;
  #   answers: set of answers&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   the max_question_score of the first question's questionnaire, or 1 on error&lt;br /&gt;
  def get_max_question_score(answers)&lt;br /&gt;
    begin&lt;br /&gt;
      answers.first.question.questionnaire.max_question_score&lt;br /&gt;
    rescue StandardError&lt;br /&gt;
      1&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_valid_answers_for_response&lt;br /&gt;
  # This method receives a response and filters its answers down to the&lt;br /&gt;
  # valid Criterion answers&lt;br /&gt;
  # Params&lt;br /&gt;
  #   response&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   set of valid answers (returns nil if empty)&lt;br /&gt;
  def get_valid_answers_for_response(response)&lt;br /&gt;
    answers = Answer.where(response_id: response.id)&lt;br /&gt;
    valid_answer = answers.select { |answer| (answer.question.type == 'Criterion') &amp;amp;&amp;amp; !answer.answer.nil? }&lt;br /&gt;
    valid_answer.empty? ? nil : valid_answer&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: calculate_peer_review_grade&lt;br /&gt;
  # This method calculates a cumulative review grade with respect to the set of valid answers&lt;br /&gt;
  # Params&lt;br /&gt;
  #   valid_answer: valid answer to get weight of the answer's question&lt;br /&gt;
  #   max_question_score: used to calculate maximum score for peer review grade&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   peer_review_grade&lt;br /&gt;
  def calculate_peer_review_grade(valid_answer, max_question_score)&lt;br /&gt;
    weighted_score_sum = valid_answer.map { |answer| answer.answer * answer.question.weight }.inject(:+)&lt;br /&gt;
    question_weight_sum = valid_answer.sum { |answer| answer.question.weight }&lt;br /&gt;
    peer_review_grade = 100.0 * weighted_score_sum / (question_weight_sum * max_question_score)&lt;br /&gt;
    peer_review_grade.round(4)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_peer_reviews_for_responses&lt;br /&gt;
  # This method calculates the peer review grade for each valid response&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reviewer_id: used to create respective element in the peer_review_grades_list&lt;br /&gt;
  #   team_id: used to create respective element in the peer_review_grades_list&lt;br /&gt;
  #   valid_response: to get the valid answer for each valid response&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   peer_review_grades_list&lt;br /&gt;
  def get_peer_reviews_for_responses(reviewer_id, team_id, valid_response)&lt;br /&gt;
    peer_review_grades_list = []&lt;br /&gt;
    valid_response.each do |response|&lt;br /&gt;
      valid_answer = get_valid_answers_for_response(response)&lt;br /&gt;
      next if valid_answer.nil?&lt;br /&gt;
&lt;br /&gt;
      review_grade = calculate_peer_review_grade(valid_answer, get_max_question_score(valid_answer))&lt;br /&gt;
      peer_review_grades_list &amp;lt;&amp;lt; [reviewer_id, team_id, review_grade]&lt;br /&gt;
    end&lt;br /&gt;
    peer_review_grades_list&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_peer_reviews&lt;br /&gt;
  # This method retrieves all the reviews for the submissions&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: used to retrieve response map&lt;br /&gt;
  #   round_num: used to retrieve round_num for the valid response&lt;br /&gt;
  #   has_topic: to get the topic condition&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: which corresponds to the return of&lt;br /&gt;
  #     get_peer_reviews_for_responses method and appended to the raw_data_array&lt;br /&gt;
  def get_peer_reviews(assignment_id_list, round_num, has_topic)&lt;br /&gt;
    raw_data_array = []&lt;br /&gt;
    ReviewResponseMap.where('reviewed_object_id in (?) and calibrate_to = ?', assignment_id_list, false).each do |response_map|&lt;br /&gt;
      reviewer = response_map.reviewer.user&lt;br /&gt;
      team = AssignmentTeam.find(response_map.reviewee_id)&lt;br /&gt;
      topic_condition = ((has_topic &amp;amp;&amp;amp; (SignedUpTeam.where(team_id: team.id).first.is_waitlisted == false)) || !has_topic)&lt;br /&gt;
      last_valid_response = response_map.response.select { |r| r.round == round_num }.max&lt;br /&gt;
      valid_response = [last_valid_response] unless last_valid_response.nil?&lt;br /&gt;
      if (topic_condition == true) &amp;amp;&amp;amp; !valid_response.nil? &amp;amp;&amp;amp; !valid_response.empty?&lt;br /&gt;
        raw_data_array += get_peer_reviews_for_responses(reviewer.id, team.id, valid_response)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
    raw_data_array&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_ids_list&lt;br /&gt;
  # This method maps each object to the corresponding object's ID&lt;br /&gt;
  # Params&lt;br /&gt;
  #   tables: any table&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   id in the tables&lt;br /&gt;
  def get_ids_list(tables)&lt;br /&gt;
    tables.map(&amp;amp;:id)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_scores&lt;br /&gt;
  # This method gets the quiz score of each participant for respective reviewee&lt;br /&gt;
  # Params&lt;br /&gt;
  #   team_ids: list of team IDs&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: which is a list of participant, reviewee and the participant's quiz score&lt;br /&gt;
  def get_scores(team_ids)&lt;br /&gt;
    quiz_questionnaires = QuizQuestionnaire.where('instructor_id in (?)', team_ids)&lt;br /&gt;
    quiz_questionnaire_ids = get_ids_list(quiz_questionnaires)&lt;br /&gt;
    raw_data_array = []&lt;br /&gt;
    QuizResponseMap.where('reviewed_object_id in (?)', quiz_questionnaire_ids).each do |response_map|&lt;br /&gt;
      quiz_score = response_map.quiz_score&lt;br /&gt;
      participant = Participant.find(response_map.reviewer_id)&lt;br /&gt;
      raw_data_array &amp;lt;&amp;lt; [participant.user_id, response_map.reviewee_id, quiz_score]&lt;br /&gt;
    end&lt;br /&gt;
    raw_data_array&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_quiz_score&lt;br /&gt;
  # This method gets the quiz score of assignments&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment IDs&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: returned by get_scores method, which is a list of participant,&lt;br /&gt;
  #     reviewee and the participant's quiz score&lt;br /&gt;
  def get_quiz_score(assignment_id_list)&lt;br /&gt;
    teams = AssignmentTeam.where('parent_id in (?)', assignment_id_list)&lt;br /&gt;
    team_ids = get_ids_list(teams)&lt;br /&gt;
    get_scores(team_ids)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_body&lt;br /&gt;
  # This method generates json body for the peer reviews and quiz scores&lt;br /&gt;
  # Params&lt;br /&gt;
  #   results: list of grades with corresponding team/participant ID,&lt;br /&gt;
  #     reviewee ID and their score&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: returns the formatted body after sorting the hash&lt;br /&gt;
  def generate_json_body(results)&lt;br /&gt;
    request_body = {}&lt;br /&gt;
    results.each_with_index do |record, _index|&lt;br /&gt;
      request_body['submission' + record[1].to_s] = {} unless request_body.key?('submission' + record[1].to_s)&lt;br /&gt;
      request_body['submission' + record[1].to_s]['stu' + record[0].to_s] = record[2]&lt;br /&gt;
    end&lt;br /&gt;
    # sort the 2-dimension hash&lt;br /&gt;
    request_body.each { |k, v| request_body[k] = v.sort.to_h }&lt;br /&gt;
    request_body = request_body.sort.to_h&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_for_peer_reviews&lt;br /&gt;
  # This method retrieves all the peer reviews associated with&lt;br /&gt;
  # the assignment id list by calling the get_peer_reviews method.&lt;br /&gt;
  # It then formats the peer-review list in JSON.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment ids to get quiz scores for&lt;br /&gt;
  #   round_num: round number of the review&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: request body populated with the formatted peer review data.&lt;br /&gt;
  def generate_json_for_peer_reviews(assignment_id_list, round_num = 2)&lt;br /&gt;
    has_topic = !SignUpTopic.where(assignment_id: assignment_id_list[0]).empty?&lt;br /&gt;
&lt;br /&gt;
    peer_reviews_list = get_peer_reviews(assignment_id_list, round_num, has_topic)&lt;br /&gt;
    request_body = generate_json_body(peer_reviews_list)&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_for_quiz_scores&lt;br /&gt;
  # This method accepts a list of assignment ids as an argument.&lt;br /&gt;
  # It then calls the get_quiz_score method on the list to get&lt;br /&gt;
  # maps of teams and scores for the given assignments.&lt;br /&gt;
  # The map is then formatted into JSON.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment ids to get quiz scores for&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: request body populated with quiz scores&lt;br /&gt;
  def generate_json_for_quiz_scores(assignment_id_list)&lt;br /&gt;
    participant_reviewee_map = get_quiz_score(assignment_id_list)&lt;br /&gt;
    request_body = generate_json_body(participant_reviewee_map)&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: client&lt;br /&gt;
  # This method is called when the URL reputation_web_service/client&lt;br /&gt;
  # is hit with a GET request.&lt;br /&gt;
  # This renders the client.html.erb&lt;br /&gt;
  # It also populates the instance variables to be used in the views&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def client&lt;br /&gt;
    @max_assignment_id = Assignment.last.id&lt;br /&gt;
    @assignment = Assignment.find(flash[:assignment_id]) rescue nil&lt;br /&gt;
    @another_assignment = Assignment.find(flash[:another_assignment_id]) rescue nil&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: update_participants_reputation&lt;br /&gt;
  # This method accepts the response body in the JSON format.&lt;br /&gt;
  # It then parses the JSON and updates the reputation scores of the&lt;br /&gt;
  # participants in the list.&lt;br /&gt;
  # If the algorithm is neither Hamer nor Lauw, the update step is skipped.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reputation_response: The response from the reputation web service&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def update_participants_reputation(reputation_response)&lt;br /&gt;
    JSON.parse(reputation_response.body.to_s).each do |reputation_algorithm, user_reputation_list|&lt;br /&gt;
      next unless %w[Hamer Lauw].include?(reputation_algorithm)&lt;br /&gt;
&lt;br /&gt;
      user_reputation_list.each do |user_id, reputation|&lt;br /&gt;
        Participant.find_by(user_id: user_id).update(reputation_algorithm.to_sym =&amp;gt; reputation) unless /leniency/ =~ user_id.to_s&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: process_response_body&lt;br /&gt;
  # This method takes over after a response is received from the server.&lt;br /&gt;
  # It receives the response as an argument.&lt;br /&gt;
  # It updates the instance variables related to the response.&lt;br /&gt;
  # It then calls the update_participants_reputation to update the reputation&lt;br /&gt;
  # scores received in the response body.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reputation_response: The response from the reputation web service&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def process_response_body(reputation_response)&lt;br /&gt;
    flash[:response] = reputation_response&lt;br /&gt;
    flash[:response_body] = reputation_response.body&lt;br /&gt;
    update_participants_reputation(reputation_response)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_expert_grades&lt;br /&gt;
  # It prepends the request body with the expert grades pertaining&lt;br /&gt;
  # to the default wiki contribution case of 754.&lt;br /&gt;
  # It receives the request body as an argument and prepends the grades to it.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   body: The request body to add the expert grades to&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   body prepended with the expert grades&lt;br /&gt;
  def add_expert_grades(body)&lt;br /&gt;
    flash[:additional_info] = 'add expert grades'&lt;br /&gt;
    case params[:assignment_id]&lt;br /&gt;
    when '754' # expert grades of Wiki contribution (754)&lt;br /&gt;
      body.prepend('&amp;quot;expert_grades&amp;quot;: {&amp;quot;submission25030&amp;quot;:95,&amp;quot;submission25031&amp;quot;:92,&amp;quot;submission25033&amp;quot;:88,&amp;quot;submission25034&amp;quot;:98,&amp;quot;submission25035&amp;quot;:100,&amp;quot;submission25037&amp;quot;:95,&amp;quot;submission25038&amp;quot;:95,&amp;quot;submission25039&amp;quot;:93,&amp;quot;submission25040&amp;quot;:96,&amp;quot;submission25041&amp;quot;:90,&amp;quot;submission25042&amp;quot;:100,&amp;quot;submission25046&amp;quot;:95,&amp;quot;submission25049&amp;quot;:90,&amp;quot;submission25050&amp;quot;:88,&amp;quot;submission25053&amp;quot;:91,&amp;quot;submission25054&amp;quot;:96,&amp;quot;submission25055&amp;quot;:94,&amp;quot;submission25059&amp;quot;:96,&amp;quot;submission25071&amp;quot;:85,&amp;quot;submission25082&amp;quot;:100,&amp;quot;submission25086&amp;quot;:95,&amp;quot;submission25097&amp;quot;:90,&amp;quot;submission25098&amp;quot;:85,&amp;quot;submission25102&amp;quot;:97,&amp;quot;submission25103&amp;quot;:94,&amp;quot;submission25105&amp;quot;:98,&amp;quot;submission25114&amp;quot;:95,&amp;quot;submission25115&amp;quot;:94},')&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_quiz_scores&lt;br /&gt;
  # It gets the assignment id list and generates the json on quiz scores of&lt;br /&gt;
  # those assignments.&lt;br /&gt;
  # Finally, the processed quiz string is prepended to the request body&lt;br /&gt;
  # (received as an argument), which is returned to prepare_request_body.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   body: The request body to add the quiz scores to&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   body prepended with the quiz scores&lt;br /&gt;
  def add_quiz_scores(body)&lt;br /&gt;
    flash[:additional_info] = 'add quiz scores'&lt;br /&gt;
    assignment_id_list_quiz = get_assignment_id_list(params[:assignment_id].to_i, params[:another_assignment_id].to_i)&lt;br /&gt;
    quiz_str =  generate_json_for_quiz_scores(assignment_id_list_quiz).to_json&lt;br /&gt;
    quiz_str[0] = '' # remove first {&lt;br /&gt;
    quiz_str.prepend('&amp;quot;quiz_scores&amp;quot;:{') # add quiz_scores tag&lt;br /&gt;
    quiz_str += ','&lt;br /&gt;
    quiz_str = quiz_str.gsub('&amp;quot;N/A&amp;quot;', '20.0') # replace N/A values with 20&lt;br /&gt;
    body.prepend(quiz_str)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_hamer_reputation_values&lt;br /&gt;
  # This method sets the flash message additional_info.&lt;br /&gt;
  # This method is called by the prepare_request_body method&lt;br /&gt;
  # when params receive instruction through the corresponding view's checkbox.&lt;br /&gt;
  # THIS METHOD IS NOT IMPLEMENTED&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_hamer_reputation_values&lt;br /&gt;
    flash[:additional_info] = 'add initial hamer reputation values'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_lauw_reputation_values&lt;br /&gt;
  # This method sets the flash message additional_info.&lt;br /&gt;
  # This method is called by the prepare_request_body method&lt;br /&gt;
  # when params receive instruction through the corresponding view's checkbox.&lt;br /&gt;
  # THIS METHOD IS NOT IMPLEMENTED&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_lauw_reputation_values&lt;br /&gt;
    flash[:additional_info] = 'add initial lauw reputation values'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_assignment_id_list&lt;br /&gt;
  # This method receives individual assignment IDs and returns a list&lt;br /&gt;
  # containing them.&lt;br /&gt;
  # This function accepts 2 arguments, with the second argument being optional,&lt;br /&gt;
  # and returns the list assignment_id_list&lt;br /&gt;
  # If the second argument is 0, it is not appended to the list.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_one: first assignment id (required)&lt;br /&gt;
  #   assignment_id_two: second assignment id (optional)&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   assignment_id_list: list containing one or two assignment IDs&lt;br /&gt;
  def get_assignment_id_list(assignment_id_one, assignment_id_two = 0)&lt;br /&gt;
    assignment_id_list = []&lt;br /&gt;
    assignment_id_list &amp;lt;&amp;lt; assignment_id_one&lt;br /&gt;
    assignment_id_list &amp;lt;&amp;lt; assignment_id_two unless assignment_id_two.zero?&lt;br /&gt;
    assignment_id_list&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_flash_messages&lt;br /&gt;
  # This method sets the flash messages to pass on to the next request, i.e.,&lt;br /&gt;
  # the request redirected to the client&lt;br /&gt;
  # Params&lt;br /&gt;
  #   post_req: This contains the entire post_req that needs to be sent to the reputation&lt;br /&gt;
  #     webservice&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_flash_messages(post_req)&lt;br /&gt;
    flash[:assignment_id] = params[:assignment_id]&lt;br /&gt;
    flash[:round_num] = params[:round_num]&lt;br /&gt;
    flash[:algorithm] = params[:algorithm]&lt;br /&gt;
    flash[:another_assignment_id] = params[:another_assignment_id]&lt;br /&gt;
    flash[:request_body] = post_req.body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_additional_info_details&lt;br /&gt;
  # This method sets the additional info details based on the options&lt;br /&gt;
  # selected in the additional information section. We populate the request&lt;br /&gt;
  # based on the selections&lt;br /&gt;
  # Params&lt;br /&gt;
  #   post_req: This contains the entire post_req that needs to be sent to the reputation&lt;br /&gt;
  #     webservice&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_additional_info_details(post_req)&lt;br /&gt;
    if params[:checkbox][:expert_grade] == 'Add expert grades'&lt;br /&gt;
      add_expert_grades(post_req.body)&lt;br /&gt;
    elsif params[:checkbox][:hamer] == 'Add initial Hamer reputation values'&lt;br /&gt;
      add_hamer_reputation_values&lt;br /&gt;
    elsif params[:checkbox][:lauw] == 'Add initial Lauw reputation values'&lt;br /&gt;
      add_lauw_reputation_values&lt;br /&gt;
    elsif params[:checkbox][:quiz] == 'Add quiz scores'&lt;br /&gt;
      add_quiz_scores(post_req.body)&lt;br /&gt;
    else&lt;br /&gt;
      flash[:additional_info] = ''&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: prepare_request_body&lt;br /&gt;
  # This method is responsible for preparing the request body in a proper format&lt;br /&gt;
  # to send to the server. It populates the assignment scores and peer review&lt;br /&gt;
  # scores. It also populates the flash messages to send to the next request&lt;br /&gt;
  # It finally sends the prepared request body back to the send_post_request&lt;br /&gt;
  # method.&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def prepare_request_body&lt;br /&gt;
    reputation_web_service_path = URI.parse(WEBSERVICE_CONFIG['reputation_web_service_url']).path&lt;br /&gt;
    post_req = Net::HTTP::Post.new(reputation_web_service_path, { 'Content-Type' =&amp;gt; 'application/json', 'charset' =&amp;gt; 'utf-8' })&lt;br /&gt;
    curr_assignment_id = (params[:assignment_id].empty? ? '754' : params[:assignment_id])&lt;br /&gt;
    assignment_id_list_peers = get_assignment_id_list(curr_assignment_id, params[:another_assignment_id].to_i)&lt;br /&gt;
&lt;br /&gt;
    post_req.body = generate_json_for_peer_reviews(assignment_id_list_peers, params[:round_num].to_i).to_json&lt;br /&gt;
&lt;br /&gt;
    post_req.body[0] = '' # remove the first '{'&lt;br /&gt;
    add_additional_info_details post_req&lt;br /&gt;
    post_req.body.prepend('{')&lt;br /&gt;
    add_flash_messages post_req&lt;br /&gt;
    post_req&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: send_post_request&lt;br /&gt;
  # This method calls the prepare_request_body function to get a prepared&lt;br /&gt;
  # request body in proper format to send to the server.&lt;br /&gt;
  # It populates the assignment scores and peer review&lt;br /&gt;
  # scores. It also populates the flash messages to send to the next request&lt;br /&gt;
  # We redirect to the client url to display the results.&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def send_post_request&lt;br /&gt;
    post_req = prepare_request_body&lt;br /&gt;
    reputation_web_service_hostname = URI.parse(WEBSERVICE_CONFIG['reputation_web_service_url']).host&lt;br /&gt;
    reputation_response = Net::HTTP.new(reputation_web_service_hostname).start { |http| http.request(post_req) }&lt;br /&gt;
    if %w[400 500].include?(reputation_response.code)&lt;br /&gt;
      flash[:error] = 'Post Request Failed'&lt;br /&gt;
    else&lt;br /&gt;
      process_response_body(reputation_response)&lt;br /&gt;
    end&lt;br /&gt;
    redirect_to action: 'client'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
# Method: calculate_reputation_score&lt;br /&gt;
# This method calculates the reputation scores for each reviewer based on the provided input data.&lt;br /&gt;
# It first calculates the average weighted grades per reviewer and the delta R values.&lt;br /&gt;
# Next, it calculates the weight prime values based on the delta R values.&lt;br /&gt;
# Finally, it calculates the reputation weights for each reviewer using the weight prime values.&lt;br /&gt;
#&lt;br /&gt;
# Params&lt;br /&gt;
#   - reviews: an array of per-reviewer score arrays (nil entries mark submissions the reviewer skipped)&lt;br /&gt;
#&lt;br /&gt;
# Returns&lt;br /&gt;
#   An array of reputation scores, one score per reviewer, indicating their reputation in the system.&lt;br /&gt;
def calculate_reputation_score(reviews)&lt;br /&gt;
&lt;br /&gt;
  # Initialize arrays to store intermediate values&lt;br /&gt;
  grades = []&lt;br /&gt;
  delta_r = []&lt;br /&gt;
  weight_prime = []&lt;br /&gt;
  weight = []&lt;br /&gt;
&lt;br /&gt;
  # Calculate Average Weighted Grades per Reviewer&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    assignment_grade_average = reviewer_marks_without_nil.sum.to_f / reviewer_marks_without_nil.length&lt;br /&gt;
    grades &amp;lt;&amp;lt; assignment_grade_average&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate delta R&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    reviewer_delta_r = 0&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    reviewer_marks_without_nil.each_with_index do |grade, student_index|&lt;br /&gt;
      reviewer_delta_r += (grade - grades[student_index]) ** 2&lt;br /&gt;
    end&lt;br /&gt;
    delta_r &amp;lt;&amp;lt; reviewer_delta_r / reviewer_marks_without_nil.length&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate weight prime&lt;br /&gt;
  average_delta_r = delta_r.sum / delta_r.length.to_f&lt;br /&gt;
&lt;br /&gt;
  delta_r.each do |reviewer_delta_r|&lt;br /&gt;
    weight_prime &amp;lt;&amp;lt; average_delta_r / reviewer_delta_r&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate reputation weight&lt;br /&gt;
  weight_prime.each do |reviewer_weight_prime|&lt;br /&gt;
    if reviewer_weight_prime &amp;lt;= 2&lt;br /&gt;
      weight &amp;lt;&amp;lt; reviewer_weight_prime.round(2)&lt;br /&gt;
    else&lt;br /&gt;
      weight &amp;lt;&amp;lt; (2 + Math.log(reviewer_weight_prime - 1)).round(2)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Return the reputation weights&lt;br /&gt;
  weight&lt;br /&gt;
end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Objective 4: Validate the accuracy of the newly implemented Hamer algorithm ==&lt;br /&gt;
&lt;br /&gt;
We test the newly implemented Hamer algorithm function with our scenarios and verify if they match the expected values. &lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe ReputationWebServiceController do&lt;br /&gt;
    it &amp;quot;should calculate correct Hamer calculation&amp;quot; do&lt;br /&gt;
      weights = ReputationWebServiceController.new.calculate_reputation_score(reviews)&lt;br /&gt;
      keys = [&amp;quot;maxtoall&amp;quot;, &amp;quot;mintoall&amp;quot;, &amp;quot;mediantoall&amp;quot;, &amp;quot;incomplete_review&amp;quot;, &amp;quot;sametoall&amp;quot;, &amp;quot;passing1&amp;quot;, &amp;quot;passing2&amp;quot;, &amp;quot;passing3&amp;quot;]&lt;br /&gt;
      rounded_weights = weights.map { |w| w.round(1) }&lt;br /&gt;
      result_hash = keys.zip(rounded_weights).to_h&lt;br /&gt;
      expect(result_hash).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results===&lt;br /&gt;
&lt;br /&gt;
[[File:Results hamer new.jpeg]]&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
As we can see, the results are skewed towards lower values. This is because we included nil values and assumed them to be zero. Substituting the median (or a random value) for missing reviews would reduce this skew.&lt;br /&gt;
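A small Ruby sketch with hypothetical scores (not our test data) makes the skew concrete:&lt;br /&gt;

```ruby
# One reviewer's scores with a missing (nil) review.
scores = [8, 9, nil, 7]

# Treating nil as zero drags the average down:
as_zero = scores.map { |s| s || 0 }.sum.to_f / scores.length

# Dropping nils, or substituting the median, keeps the average representative:
present   = scores.compact
dropped   = present.sum.to_f / present.length
median    = present.sort[present.length / 2]
as_median = scores.map { |s| s || median }.sum.to_f / scores.length
```

Here `as_zero` comes out at 6.0 while both alternatives yield 8.0, illustrating the downward pull of the nil-as-zero assumption.&lt;br /&gt;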
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this project, we aimed to test the accuracy of the Hamer algorithm used for assessing the credibility of reviewers in a peer assessment system. We began by developing code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values. These scenarios covered various review scenarios, including cases where reviewers provided extreme scores.&lt;br /&gt;
&lt;br /&gt;
Upon testing the original reputation web server's Hamer values, we found discrepancies between the expected values and the values returned by the web service. This led us to conclude that the webservice implementation was incorrect.&lt;br /&gt;
&lt;br /&gt;
As a result, we proceeded to reimplement the Hamer algorithm in Ruby, incorporating adjustments to handle nil values appropriately. Subsequently, we validated the accuracy of the newly implemented algorithm using the same testing scenarios. While the results initially showed a skew towards lower values due to our treatment of nil values, we acknowledge the need for further refinement to handle these cases more effectively.&lt;br /&gt;
&lt;br /&gt;
In conclusion, this project highlights the importance of rigorous testing and implementation adjustments in ensuring the reliability of algorithms used in peer assessment systems. Moving forward, we recommend further refinements and validations to enhance the accuracy and robustness of the Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
==GitHub Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [https://github.com/expertiza/expertiza/pull/2778 here]&lt;br /&gt;
&lt;br /&gt;
Link to Github Project page: [https://github.com/users/Prachit99/projects/1]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [????]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154654</id>
		<title>CSC/ECE 517 Spring 2024 - E2412. Testing for hamer.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154654"/>
		<updated>2024-03-25T04:10:24Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Conclusion ????? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 Program 3: First OSS project E2412. Testing for hamer.rb&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
&lt;br /&gt;
The practice of using student feedback on assignments as a grading tool is gaining traction among university professors and courses. This approach not only saves instructors and teaching assistants considerable time but also fosters a deeper understanding of assignments among students as they evaluate their peers' work. However, there is a concern that some students may not take their reviewing responsibilities seriously, potentially skewing the grading process by assigning extreme scores such as 100 or 0 arbitrarily. To address this issue, the Hamer algorithm was developed to assess the credibility and accuracy of reviewers. It generates reputation weights for each reviewer, which instructors can use to gauge their reliability or incorporate into grading calculations. Our goal here is to test this Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values.&lt;br /&gt;
* Verify the correctness of the reputation web server's Hamer values by accessing the URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.&lt;br /&gt;
* Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values.&lt;br /&gt;
* Validate the accuracy of the newly implemented Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* reimplemented algorithm: /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
* test file: /spec/controllers/reputation_mock_web_server_hamer.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Hamer Algorithm ==&lt;br /&gt;
&lt;br /&gt;
The grading algorithm described in the paper is designed to provide a reward to reviewers who participate effectively by allocating a portion of the assignment mark to the review, with the review mark reflecting the quality of the grading. Here's an explanation of the algorithm:&lt;br /&gt;
&lt;br /&gt;
1. Review Allocation: Each reviewer is assigned a number of essays to grade. The paper suggests assigning at least five essays, with ten being ideal. Assuming each review takes 20 minutes, ten reviews can be completed in about three and a half hours.&lt;br /&gt;
&lt;br /&gt;
2. Grading Process:&lt;br /&gt;
* Once the reviewing is complete, grades are generated for each essay and weights are assigned to each reviewer.&lt;br /&gt;
* The essay grades are computed by averaging the individual grades from all the reviewers assigned to that essay.&lt;br /&gt;
* Initially, all reviewers are given equal weight in the averaging process.&lt;br /&gt;
* The algorithm assumes that some reviewers will perform better than others. It measures this by comparing the grades assigned by each reviewer to the averaged grades. The larger the difference between the assigned and averaged grades, the more out of step the reviewer is considered with the consensus view of the class.&lt;br /&gt;
* The algorithm adjusts the weighting of the reviewers based on this difference. Reviewers who are closer to the consensus view are given higher weights, while those who deviate significantly are given lower weights.&lt;br /&gt;
&lt;br /&gt;
3. Iterative Process:&lt;br /&gt;
* The calculation of grades and weights is an iterative process. Each time the grades are calculated, the weights need to be updated, and each change in the weights affects the grades.&lt;br /&gt;
* Convergence occurs quickly, typically requiring four to six iterations before a solution (a &amp;quot;fix-point&amp;quot;) is reached.&lt;br /&gt;
&lt;br /&gt;
4. Weight Adjustment:&lt;br /&gt;
* The weights assigned to reviewers are adjusted based on the difference between the assigned and averaged grades. Reviewers with larger discrepancies have their weights adjusted inversely proportional to this difference.&lt;br /&gt;
* To prevent excessively large weights, a logarithmic dampening function is applied, allowing weights to rise to twice the class average before further increases are awarded sparingly.&lt;br /&gt;
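The dampening rule above can be sketched as a small Ruby helper (the function name is ours; the threshold of 2 and the natural logarithm follow the reimplementation later on this page):&lt;br /&gt;

```ruby
# Logarithmic dampening of a reviewer's raw weight: weights up to twice
# the class average pass through unchanged; larger weights grow only
# logarithmically, preventing any single reviewer from dominating.
def dampen(weight_prime)
  if weight_prime <= 2
    weight_prime.round(2)
  else
    (2 + Math.log(weight_prime - 1)).round(2)
  end
end
```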
&lt;br /&gt;
5. Properties:&lt;br /&gt;
* The algorithm aims to identify and diminish the impact of &amp;quot;rogue&amp;quot; reviewers who may inject random or arbitrary grades into the peer assessment process.&lt;br /&gt;
* By adjusting reviewer weights based on their grading accuracy, the algorithm aims to improve the reliability of the grading process in the presence of such rogue reviewers.&lt;br /&gt;
&lt;br /&gt;
Overall, the algorithm seeks to balance the contributions of different reviewers based on the accuracy of their grading, ultimately aiming to produce reliable grades for each essay in a peer assessment scenario.&lt;br /&gt;
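As a rough standalone sketch of this iterative process (our own simplification, not the production code; `hamer_fixpoint` is a hypothetical name, and rows of `marks` are assumed to be reviewers, columns essays):&lt;br /&gt;

```ruby
# Alternates between computing weighted essay grades and resetting each
# reviewer's weight inversely to their squared deviation from those grades.
def hamer_fixpoint(marks, iterations: 6)
  weights = Array.new(marks.length, 1.0) # start with equal weights
  grades = nil
  iterations.times do
    n_essays = marks.first.length
    # Consensus grade per essay: weighted average over reviewers
    grades = (0...n_essays).map do |s|
      num = marks.each_with_index.sum { |row, r| weights[r] * row[s] }
      num / weights.sum
    end
    # Mean squared deviation of each reviewer from the consensus
    deviations = marks.map do |row|
      row.each_with_index.sum { |g, s| (g - grades[s])**2 } / row.length
    end
    avg_dev = deviations.sum / deviations.length
    # New weight: inversely proportional to the deviation (guard against /0)
    weights = deviations.map { |d| d.zero? ? 2.0 : avg_dev / d }
  end
  [grades, weights]
end
```

Running this on two broadly agreeing reviewers and one "max to all" reviewer drives the rogue reviewer's weight well below the others within a few iterations.&lt;br /&gt;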
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
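The four steps shown above can be summarized as follows (our reconstruction of the standard Hamer formulation; the symbols are ours and the figures' exact notation may differ):&lt;br /&gt;

```latex
% Step 1: consensus grade of submission s (weighted average over reviewers r)
\bar{g}_s = \frac{\sum_r w_r \, g_{r,s}}{\sum_r w_r}
% Step 2: mean squared deviation (delta R) of reviewer r over the n submissions reviewed
\Delta_r = \frac{1}{n} \sum_{s=1}^{n} \left( g_{r,s} - \bar{g}_s \right)^2
% Step 3: raw weight, inversely proportional to the deviation
w'_r = \frac{\bar{\Delta}}{\Delta_r}
% Step 4: logarithmic dampening of large weights
w_r = \begin{cases} w'_r, & w'_r \le 2 \\ 2 + \ln\!\left( w'_r - 1 \right), & \text{otherwise} \end{cases}
```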
&lt;br /&gt;
== Objective 1: Develop code testing scenarios ==&lt;br /&gt;
We assumed 9 reviewers to review 4 submissions each to cover the following test scenarios:&lt;br /&gt;
* 3 cases where reviewers are giving credible scores (passing1, passing2, passing3)&lt;br /&gt;
* case where reviewer is giving max scores (10) to all submissions (should be flagged)&lt;br /&gt;
* case where reviewer is giving min scores (1) to all submissions (should be flagged)&lt;br /&gt;
* case where reviewer is giving median scores (5) to all submissions (should be flagged)&lt;br /&gt;
* case where reviewer is giving same scores to all submissions (should be flagged)&lt;br /&gt;
&lt;br /&gt;
=== Object Creation ===&lt;br /&gt;
Below is the Input object for tests that cover all the above scenarios:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
INPUTS = {&lt;br /&gt;
    &amp;quot;submission1&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 9&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission2&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission3&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 7,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission4&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 6,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
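Since the reimplemented calculate_reputation_score expects one score array per reviewer rather than this submission-keyed hash, a conversion step is needed. A sketch with a trimmed-down hash (the variable names are ours):&lt;br /&gt;

```ruby
# Trimmed-down version of the input hash above (submission => reviewer => score).
inputs = {
  'submission1' => { 'maxtoall' => 10, 'passing1' => 10, 'passing2' => 10 },
  'submission2' => { 'maxtoall' => 10, 'passing1' => 3 } # passing2 skipped this one
}

# Collect every reviewer seen across submissions, then build one row per
# reviewer, with nil where that reviewer skipped a submission.
reviewers = inputs.values.flat_map(&:keys).uniq
reviews = reviewers.map do |reviewer|
  inputs.values.map { |scores| scores[reviewer] }
end
```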
&lt;br /&gt;
=== Expected Hamer Values ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
EXPECTED = {&lt;br /&gt;
    &amp;quot;Hamer&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 2.65,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 2.41,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 1.03,&lt;br /&gt;
        &amp;quot;incomplete_review&amp;quot;: 2.31,&lt;br /&gt;
        &amp;quot;max_incomplete&amp;quot;: 2.57,&lt;br /&gt;
        &amp;quot;min_incomplete&amp;quot;: 2.48,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;:1.58,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 2.17,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 1.73,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 1.23,&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Verify the correctness of the reputation web server's Hamer values ==&lt;br /&gt;
&lt;br /&gt;
We test the original reputation web server's algorithm with our scenarios and verify if they match the expected values. The peerlogic server can be accessed via API calls to URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.&lt;br /&gt;
It uses two algorithms: the Hamer-peer algorithm and the Lauw-peer algorithm. Our scope for this project is to test Hamer values, as previous work has already established that the Hamer algorithm suits our use case better.&lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe &amp;quot;Expertiza&amp;quot; do&lt;br /&gt;
    it &amp;quot;should return the correct Hamer calculation&amp;quot; do&lt;br /&gt;
        uri = URI('http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms')&lt;br /&gt;
    &lt;br /&gt;
        response = Net::HTTP.post(uri, INPUTS, 'Content-Type' =&amp;gt; 'application/json')&lt;br /&gt;
    &lt;br /&gt;
        expect(JSON.parse(response.body)[&amp;quot;Hamer&amp;quot;]).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results ===&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
&lt;br /&gt;
We can see that the values returned by this web service do not match the expected values. Hence, we conclude that the web service is not implemented correctly.&lt;br /&gt;
&lt;br /&gt;
== Objective 3: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values. ==&lt;br /&gt;
&lt;br /&gt;
As seen above, the values returned by the reputation server do not match the expected values, so we concluded that the PeerLogic web service is implemented incorrectly. In this phase, we implemented the algorithm in Ruby as a function in the controller file /app/controllers/reputation_web_service_controller.rb.&lt;br /&gt;
&lt;br /&gt;
=== Changes made in implementation ===&lt;br /&gt;
* Coded the algorithm in Ruby in a controller.&lt;br /&gt;
* Included handling for nil values in the algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Code Snippet ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'json'&lt;br /&gt;
require 'uri'&lt;br /&gt;
require 'net/http'&lt;br /&gt;
require 'openssl'&lt;br /&gt;
require 'base64'&lt;br /&gt;
&lt;br /&gt;
# Expertiza allows student work to be peer-reviewed, since peers can provide&lt;br /&gt;
# more feedback than the instructor can.&lt;br /&gt;
# However, if we want to assure that all students receive competent feedback,&lt;br /&gt;
# or even use peer-assigned grades,&lt;br /&gt;
# we need a way to judge which peer reviewers are most credible. The solution&lt;br /&gt;
# is the reputation system.&lt;br /&gt;
# Since reputation systems have been deployed as web services, peer-review&lt;br /&gt;
# researchers can use them to calculate scores on assignments,&lt;br /&gt;
# both past and present (past data can be used to tune the algorithms).&lt;br /&gt;
#&lt;br /&gt;
# This file is the controller to calculate the reputation scores.&lt;br /&gt;
# A 'reputation' measures how close a reviewer's scores are to other reviewers'&lt;br /&gt;
# scores.&lt;br /&gt;
# This controller implements the calculation of reputation scores.&lt;br /&gt;
class ReputationWebServiceController &amp;lt; ApplicationController&lt;br /&gt;
  include AuthorizationHelper&lt;br /&gt;
&lt;br /&gt;
  # Method: action_allowed&lt;br /&gt;
  # This method checks if the currently authenticated user has the authorization&lt;br /&gt;
  # to perform certain actions&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   true if the user has privileges to perform the action else returns false&lt;br /&gt;
  def action_allowed?&lt;br /&gt;
    current_user_has_ta_privileges?&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_max_question_score&lt;br /&gt;
  # This method receives a set of answers and gets the maximum question score&lt;br /&gt;
  # Params&lt;br /&gt;
  #   answers: set of answers&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   if no error returns max_question_score of first question else 1&lt;br /&gt;
  def get_max_question_score(answers)&lt;br /&gt;
    begin&lt;br /&gt;
      answers.first.question.questionnaire.max_question_score&lt;br /&gt;
    rescue StandardError&lt;br /&gt;
      1&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_valid_answers_for_response&lt;br /&gt;
  # This method receives response and filters the valid answers list of the&lt;br /&gt;
  # response ID&lt;br /&gt;
  # Params&lt;br /&gt;
  #   response&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   set of valid answers (returns nil if empty)&lt;br /&gt;
  def get_valid_answers_for_response(response)&lt;br /&gt;
    answers = Answer.where(response_id: response.id)&lt;br /&gt;
    valid_answer = answers.select { |answer| (answer.question.type == 'Criterion') &amp;amp;&amp;amp; !answer.answer.nil? }&lt;br /&gt;
    valid_answer.empty? ? nil : valid_answer&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: calculate_peer_review_grade&lt;br /&gt;
  # This method calculates a cumulative review grade with respect to the set of valid answers&lt;br /&gt;
  # Params&lt;br /&gt;
  #   valid_answer: valid answer to get weight of the answer's question&lt;br /&gt;
  #   max_question_score: used to calculate maximum score for peer review grade&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   peer_review_grade&lt;br /&gt;
  def calculate_peer_review_grade(valid_answer, max_question_score)&lt;br /&gt;
    weighted_score_sum = valid_answer.map { |answer| answer.answer * answer.question.weight }.inject(:+)&lt;br /&gt;
    question_weight_sum = valid_answer.sum { |answer| answer.question.weight }&lt;br /&gt;
    peer_review_grade = 100.0 * weighted_score_sum / (question_weight_sum * max_question_score)&lt;br /&gt;
    peer_review_grade.round(4)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_peer_reviews_for_responses&lt;br /&gt;
  # This method calculates the peer review grade for each valid response&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reviewer_id: used to create respective element in the peer_review_grades_list&lt;br /&gt;
  #   team_id: used to create respective element in the peer_review_grades_list&lt;br /&gt;
  #   valid_response: to get the valid answer for each valid response&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   peer_review_grades_list&lt;br /&gt;
  def get_peer_reviews_for_responses(reviewer_id, team_id, valid_response)&lt;br /&gt;
    peer_review_grades_list = []&lt;br /&gt;
    valid_response.each do |response|&lt;br /&gt;
      valid_answer = get_valid_answers_for_response(response)&lt;br /&gt;
      next if valid_answer.nil?&lt;br /&gt;
&lt;br /&gt;
      review_grade = calculate_peer_review_grade(valid_answer, get_max_question_score(valid_answer))&lt;br /&gt;
      peer_review_grades_list &amp;lt;&amp;lt; [reviewer_id, team_id, review_grade]&lt;br /&gt;
    end&lt;br /&gt;
    peer_review_grades_list&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_peer_reviews&lt;br /&gt;
  # This method retrieves all the reviews for the submissions&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: used to retrieve response map&lt;br /&gt;
  #   round_num: used to retrieve round_num for the valid response&lt;br /&gt;
  #   has_topic: to get the topic condition&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: which corresponds to the return of&lt;br /&gt;
  #     get_peer_reviews_for_responses method and appended to the raw_data_array&lt;br /&gt;
  def get_peer_reviews(assignment_id_list, round_num, has_topic)&lt;br /&gt;
    raw_data_array = []&lt;br /&gt;
    ReviewResponseMap.where('reviewed_object_id in (?) and calibrate_to = ?', assignment_id_list, false).each do |response_map|&lt;br /&gt;
      reviewer = response_map.reviewer.user&lt;br /&gt;
      team = AssignmentTeam.find(response_map.reviewee_id)&lt;br /&gt;
      topic_condition = ((has_topic &amp;amp;&amp;amp; (SignedUpTeam.where(team_id: team.id).first.is_waitlisted == false)) || !has_topic)&lt;br /&gt;
      last_valid_response = response_map.response.select { |r| r.round == round_num }.max&lt;br /&gt;
      valid_response = [last_valid_response] unless last_valid_response.nil?&lt;br /&gt;
      if (topic_condition == true) &amp;amp;&amp;amp; !valid_response.nil? &amp;amp;&amp;amp; !valid_response.empty?&lt;br /&gt;
        raw_data_array += get_peer_reviews_for_responses(reviewer.id, team.id, valid_response)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
    raw_data_array&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_ids_list&lt;br /&gt;
  # This method maps each object to the corresponding object's ID&lt;br /&gt;
  # Params&lt;br /&gt;
  #   tables: any table&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   id in the tables&lt;br /&gt;
  def get_ids_list(tables)&lt;br /&gt;
    tables.map(&amp;amp;:id)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_scores&lt;br /&gt;
  # This method gets the quiz score of each participant for respective reviewee&lt;br /&gt;
  # Params&lt;br /&gt;
  #   team_ids: list of team IDs&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: which is a list of participant, reviewee and the participant's quiz score&lt;br /&gt;
  def get_scores(team_ids)&lt;br /&gt;
    quiz_questionnaires = QuizQuestionnaire.where('instructor_id in (?)', team_ids)&lt;br /&gt;
    quiz_questionnaire_ids = get_ids_list(quiz_questionnaires)&lt;br /&gt;
    raw_data_array = []&lt;br /&gt;
    QuizResponseMap.where('reviewed_object_id in (?)', quiz_questionnaire_ids).each do |response_map|&lt;br /&gt;
      quiz_score = response_map.quiz_score&lt;br /&gt;
      participant = Participant.find(response_map.reviewer_id)&lt;br /&gt;
      raw_data_array &amp;lt;&amp;lt; [participant.user_id, response_map.reviewee_id, quiz_score]&lt;br /&gt;
    end&lt;br /&gt;
    raw_data_array&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_quiz_score&lt;br /&gt;
  # This method gets the quiz score of assignments&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment IDs&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: returned by get_scores method, which is a list of participant,&lt;br /&gt;
  #     reviewee and the participant's quiz score&lt;br /&gt;
  def get_quiz_score(assignment_id_list)&lt;br /&gt;
    teams = AssignmentTeam.where('parent_id in (?)', assignment_id_list)&lt;br /&gt;
    team_ids = get_ids_list(teams)&lt;br /&gt;
    get_scores(team_ids)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_body&lt;br /&gt;
  # This method generates json body for the peer reviews and quiz scores&lt;br /&gt;
  # Params&lt;br /&gt;
  #   results: list of grades with corresponding team/participant ID,&lt;br /&gt;
  #     reviewee ID and their score&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: returns the formatted body after sorting the hash&lt;br /&gt;
  def generate_json_body(results)&lt;br /&gt;
    request_body = {}&lt;br /&gt;
    results.each_with_index do |record, _index|&lt;br /&gt;
      request_body['submission' + record[1].to_s] = {} unless request_body.key?('submission' + record[1].to_s)&lt;br /&gt;
      request_body['submission' + record[1].to_s]['stu' + record[0].to_s] = record[2]&lt;br /&gt;
    end&lt;br /&gt;
    # sort the 2-dimensional hash and return it&lt;br /&gt;
    request_body.each { |k, v| request_body[k] = v.sort.to_h }&lt;br /&gt;
    request_body.sort.to_h&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_for_peer_reviews&lt;br /&gt;
  # This method retrieves all the peer reviews associated with&lt;br /&gt;
  # the assignment id list by calling the get_peer_reviews method.&lt;br /&gt;
  # It then formats the peer-review list in JSON.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment ids to get quiz scores for&lt;br /&gt;
  #   round_num: round number of the review&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: request body populated with the formatted peer review data.&lt;br /&gt;
  def generate_json_for_peer_reviews(assignment_id_list, round_num = 2)&lt;br /&gt;
    has_topic = !SignUpTopic.where(assignment_id: assignment_id_list[0]).empty?&lt;br /&gt;
&lt;br /&gt;
    peer_reviews_list = get_peer_reviews(assignment_id_list, round_num, has_topic)&lt;br /&gt;
    request_body = generate_json_body(peer_reviews_list)&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_for_quiz_scores&lt;br /&gt;
  # This method accepts a list of assignment ids as an argument.&lt;br /&gt;
  # It then calls the get_quiz_score method on the list to get&lt;br /&gt;
  # maps of teams and scores for the given assignments.&lt;br /&gt;
  # The map is then formatted into JSON.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment ids to get quiz scores for&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: request body populated with quiz scores&lt;br /&gt;
  def generate_json_for_quiz_scores(assignment_id_list)&lt;br /&gt;
    participant_reviewee_map = get_quiz_score(assignment_id_list)&lt;br /&gt;
    request_body = generate_json_body(participant_reviewee_map)&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: client&lt;br /&gt;
  # This method is called when the url reputation_web_service/client&lt;br /&gt;
  # is hit using GET method.&lt;br /&gt;
  # This renders the client.html.erb&lt;br /&gt;
  # It also populates the instance variables to be used in the views&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def client&lt;br /&gt;
    @max_assignment_id = Assignment.last.id&lt;br /&gt;
    @assignment = Assignment.find(flash[:assignment_id]) rescue nil&lt;br /&gt;
    @another_assignment = Assignment.find(flash[:another_assignment_id]) rescue nil&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: update_participants_reputation&lt;br /&gt;
  # This method accepts the response body in the JSON format.&lt;br /&gt;
  # It then parses the JSON and updates the reputation scores of the&lt;br /&gt;
  # participants in the list.&lt;br /&gt;
  # If the algorithm is not Hamer or Lauw, the update step is skipped.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reputation_response: The response from the reputation web service&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def update_participants_reputation(reputation_response)&lt;br /&gt;
    JSON.parse(reputation_response.body.to_s).each do |reputation_algorithm, user_reputation_list|&lt;br /&gt;
      next unless %w[Hamer Lauw].include?(reputation_algorithm)&lt;br /&gt;
&lt;br /&gt;
      user_reputation_list.each do |user_id, reputation|&lt;br /&gt;
        Participant.find_by(user_id: user_id).update(reputation_algorithm.to_sym =&amp;gt; reputation) unless /leniency/ =~ user_id.to_s&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: process_response_body&lt;br /&gt;
  # This method takes control after a response is received from the server.&lt;br /&gt;
  # It receives the response as an argument and stores it in the flash&lt;br /&gt;
  # entries related to the response.&lt;br /&gt;
  # It then calls update_participants_reputation to update the reputation&lt;br /&gt;
  # scores received in the response body.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reputation_response: The response from the reputation web service&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def process_response_body(reputation_response)&lt;br /&gt;
    flash[:response] = reputation_response&lt;br /&gt;
    flash[:response_body] = reputation_response.body&lt;br /&gt;
    update_participants_reputation(reputation_response)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_expert_grades&lt;br /&gt;
  # It prepends the request body with the expert grades pertaining&lt;br /&gt;
  # to the default wiki contribution case of 754.&lt;br /&gt;
  # It receives the request body as an argument and prepends it&lt;br /&gt;
  # Params&lt;br /&gt;
  #   body: The request body to add the expert grades to&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   body prepended with the expert grades&lt;br /&gt;
  def add_expert_grades(body)&lt;br /&gt;
    flash[:additional_info] = 'add expert grades'&lt;br /&gt;
    case params[:assignment_id]&lt;br /&gt;
    when '754' # expert grades of Wiki contribution (754)&lt;br /&gt;
      body.prepend('&amp;quot;expert_grades&amp;quot;: {&amp;quot;submission25030&amp;quot;:95,&amp;quot;submission25031&amp;quot;:92,&amp;quot;submission25033&amp;quot;:88,&amp;quot;submission25034&amp;quot;:98,&amp;quot;submission25035&amp;quot;:100,&amp;quot;submission25037&amp;quot;:95,&amp;quot;submission25038&amp;quot;:95,&amp;quot;submission25039&amp;quot;:93,&amp;quot;submission25040&amp;quot;:96,&amp;quot;submission25041&amp;quot;:90,&amp;quot;submission25042&amp;quot;:100,&amp;quot;submission25046&amp;quot;:95,&amp;quot;submission25049&amp;quot;:90,&amp;quot;submission25050&amp;quot;:88,&amp;quot;submission25053&amp;quot;:91,&amp;quot;submission25054&amp;quot;:96,&amp;quot;submission25055&amp;quot;:94,&amp;quot;submission25059&amp;quot;:96,&amp;quot;submission25071&amp;quot;:85,&amp;quot;submission25082&amp;quot;:100,&amp;quot;submission25086&amp;quot;:95,&amp;quot;submission25097&amp;quot;:90,&amp;quot;submission25098&amp;quot;:85,&amp;quot;submission25102&amp;quot;:97,&amp;quot;submission25103&amp;quot;:94,&amp;quot;submission25105&amp;quot;:98,&amp;quot;submission25114&amp;quot;:95,&amp;quot;submission25115&amp;quot;:94},')&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_quiz_scores&lt;br /&gt;
  # It gets the assignment id list and generates the json on quiz scores of&lt;br /&gt;
  # those assignments.&lt;br /&gt;
  # The resulting quiz-score string is then prepended to the request body,&lt;br /&gt;
  # received as an argument, and the body is returned to prepare_request_body.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   body: The request body to add the quiz scores to&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   body prepended with the quiz scores&lt;br /&gt;
  def add_quiz_scores(body)&lt;br /&gt;
    flash[:additional_info] = 'add quiz scores'&lt;br /&gt;
    assignment_id_list_quiz = get_assignment_id_list(params[:assignment_id].to_i, params[:another_assignment_id].to_i)&lt;br /&gt;
    quiz_str =  generate_json_for_quiz_scores(assignment_id_list_quiz).to_json&lt;br /&gt;
    quiz_str[0] = '' # remove first {&lt;br /&gt;
    quiz_str.prepend('&amp;quot;quiz_scores&amp;quot;:{') # add quiz_scores tag&lt;br /&gt;
    quiz_str += ','&lt;br /&gt;
    quiz_str = quiz_str.gsub('&amp;quot;N/A&amp;quot;', '20.0') # replace N/A values with 20&lt;br /&gt;
    body.prepend(quiz_str)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_hamer_reputation_values&lt;br /&gt;
  # This method sets flash[:additional_info].&lt;br /&gt;
  # It is called by the prepare_request_body method when params receive&lt;br /&gt;
  # instruction through the corresponding view's checkbox.&lt;br /&gt;
  # THIS METHOD IS NOT FULLY IMPLEMENTED&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_hamer_reputation_values&lt;br /&gt;
    flash[:additional_info] = 'add initial hamer reputation values'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_lauw_reputation_values&lt;br /&gt;
  # This method sets flash[:additional_info].&lt;br /&gt;
  # It is called by the prepare_request_body method when params receive&lt;br /&gt;
  # instruction through the corresponding view's checkbox.&lt;br /&gt;
  # THIS METHOD IS NOT FULLY IMPLEMENTED&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_lauw_reputation_values&lt;br /&gt;
    flash[:additional_info] = 'add initial lauw reputation values'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_assignment_id_list&lt;br /&gt;
  # This method on receipt of individual assignment IDs returns a list with all&lt;br /&gt;
  # the assignment IDs appended into a data structure&lt;br /&gt;
  # This function accepts 2 arguments, with the second argument being optional,&lt;br /&gt;
  # and returns the list assignment_id_list&lt;br /&gt;
  # If the second argument is 0, it is not appended to the list.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_one: first assignment id (required)&lt;br /&gt;
  #   assignment_id_two: second assignment id (optional)&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   assignment_id_list: list containing two assignment ids&lt;br /&gt;
  def get_assignment_id_list(assignment_id_one, assignment_id_two = 0)&lt;br /&gt;
    assignment_id_list = []&lt;br /&gt;
    assignment_id_list &amp;lt;&amp;lt; assignment_id_one&lt;br /&gt;
    assignment_id_list &amp;lt;&amp;lt; assignment_id_two unless assignment_id_two.zero?&lt;br /&gt;
    assignment_id_list&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_flash_messages&lt;br /&gt;
  # This method sets the flash messages to pass on to the next request, i.e.,&lt;br /&gt;
  # the request redirected to the client&lt;br /&gt;
  # Params&lt;br /&gt;
  #   post_req: This contains the entire post_req that needs to be sent to the reputation&lt;br /&gt;
  #     webservice&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_flash_messages(post_req)&lt;br /&gt;
    flash[:assignment_id] = params[:assignment_id]&lt;br /&gt;
    flash[:round_num] = params[:round_num]&lt;br /&gt;
    flash[:algorithm] = params[:algorithm]&lt;br /&gt;
    flash[:another_assignment_id] = params[:another_assignment_id]&lt;br /&gt;
    flash[:request_body] = post_req.body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_additional_info_details&lt;br /&gt;
  # This method sets the additional info details based on the options&lt;br /&gt;
  # selected in the additional information section. We populate the request&lt;br /&gt;
  # based on the selections&lt;br /&gt;
  # Params&lt;br /&gt;
  #   post_req: This contains the entire post_req that needs to be sent to the reputation&lt;br /&gt;
  #     webservice&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_additional_info_details(post_req)&lt;br /&gt;
    if params[:checkbox][:expert_grade] == 'Add expert grades'&lt;br /&gt;
      add_expert_grades(post_req.body)&lt;br /&gt;
    elsif params[:checkbox][:hamer] == 'Add initial Hamer reputation values'&lt;br /&gt;
      add_hamer_reputation_values&lt;br /&gt;
    elsif params[:checkbox][:lauw] == 'Add initial Lauw reputation values'&lt;br /&gt;
      add_lauw_reputation_values&lt;br /&gt;
    elsif params[:checkbox][:quiz] == 'Add quiz scores'&lt;br /&gt;
      add_quiz_scores(post_req.body)&lt;br /&gt;
    else&lt;br /&gt;
      flash[:additional_info] = ''&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: prepare_request_body&lt;br /&gt;
  # This method is responsible for preparing the request body in a proper format&lt;br /&gt;
  # to send to the server. It populates the assignment scores and peer review&lt;br /&gt;
  # scores. It also populates the flash messages to send to the next request&lt;br /&gt;
  # It finally sends the prepared request body back to the send_post_request&lt;br /&gt;
  # method.&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def prepare_request_body&lt;br /&gt;
    reputation_web_service_path = URI.parse(WEBSERVICE_CONFIG['reputation_web_service_url']).path&lt;br /&gt;
    post_req = Net::HTTP::Post.new(reputation_web_service_path, { 'Content-Type' =&amp;gt; 'application/json', 'charset' =&amp;gt; 'utf-8' })&lt;br /&gt;
    curr_assignment_id = (params[:assignment_id].empty? ? '754' : params[:assignment_id])&lt;br /&gt;
    assignment_id_list_peers = get_assignment_id_list(curr_assignment_id, params[:another_assignment_id].to_i)&lt;br /&gt;
&lt;br /&gt;
    post_req.body = generate_json_for_peer_reviews(assignment_id_list_peers, params[:round_num].to_i).to_json&lt;br /&gt;
&lt;br /&gt;
    post_req.body[0] = '' # remove the first '{'&lt;br /&gt;
    add_additional_info_details post_req&lt;br /&gt;
    post_req.body.prepend('{')&lt;br /&gt;
    add_flash_messages post_req&lt;br /&gt;
    post_req&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: send_post_request&lt;br /&gt;
  # This method calls the prepare_request_body function to get a prepared&lt;br /&gt;
  # request body in proper format to send to the server.&lt;br /&gt;
  # It populates the assignment scores and peer review&lt;br /&gt;
  # scores. It also populates the flash messages to send to the next request&lt;br /&gt;
  # We redirect to the client url to display the results.&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def send_post_request&lt;br /&gt;
    post_req = prepare_request_body&lt;br /&gt;
    reputation_web_service_hostname = URI.parse(WEBSERVICE_CONFIG['reputation_web_service_url']).host&lt;br /&gt;
    reputation_response = Net::HTTP.new(reputation_web_service_hostname).start { |http| http.request(post_req) }&lt;br /&gt;
    if %w[400 500].include?(reputation_response.code)&lt;br /&gt;
      flash[:error] = 'Post Request Failed'&lt;br /&gt;
    else&lt;br /&gt;
      process_response_body(reputation_response)&lt;br /&gt;
    end&lt;br /&gt;
    redirect_to action: 'client'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
# Method: calculate_reputation_score&lt;br /&gt;
# This method calculates the reputation scores for each reviewer based on the provided marks.&lt;br /&gt;
# It first calculates each reviewer's average of the non-nil marks and the delta R values.&lt;br /&gt;
# Next, it calculates the weight prime values based on the delta R values.&lt;br /&gt;
# Finally, it calculates the reputation weights for each reviewer using the weight prime values.&lt;br /&gt;
#&lt;br /&gt;
# Params&lt;br /&gt;
#   reviews: a two-dimensional array of marks, one row per reviewer (nil entries are skipped)&lt;br /&gt;
#&lt;br /&gt;
# Returns&lt;br /&gt;
#   An array of reputation weights, one per reviewer, indicating their reputation in the system.&lt;br /&gt;
def calculate_reputation_score(reviews)&lt;br /&gt;
&lt;br /&gt;
  # Initialize arrays to store intermediate values&lt;br /&gt;
  grades = []&lt;br /&gt;
  delta_r = []&lt;br /&gt;
  weight_prime = []&lt;br /&gt;
  weight = []&lt;br /&gt;
&lt;br /&gt;
  # Calculate Average Weighted Grades per Reviewer&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    assignment_grade_average = reviewer_marks_without_nil.sum.to_f / reviewer_marks_without_nil.length&lt;br /&gt;
    grades &amp;lt;&amp;lt; assignment_grade_average&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate delta R&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    reviewer_delta_r = 0&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    reviewer_marks_without_nil.each_with_index do |grade, student_index|&lt;br /&gt;
      reviewer_delta_r += (grade - grades[student_index]) ** 2&lt;br /&gt;
    end&lt;br /&gt;
    delta_r &amp;lt;&amp;lt; reviewer_delta_r / reviewer_marks_without_nil.length&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate weight prime&lt;br /&gt;
  average_delta_r = delta_r.sum / delta_r.length.to_f&lt;br /&gt;
&lt;br /&gt;
  delta_r.each do |reviewer_delta_r|&lt;br /&gt;
    weight_prime &amp;lt;&amp;lt; average_delta_r / reviewer_delta_r&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate reputation weight&lt;br /&gt;
  weight_prime.each do |reviewer_weight_prime|&lt;br /&gt;
    if reviewer_weight_prime &amp;lt;= 2&lt;br /&gt;
      weight &amp;lt;&amp;lt; reviewer_weight_prime.round(2)&lt;br /&gt;
    else&lt;br /&gt;
      weight &amp;lt;&amp;lt; (2 + Math.log(reviewer_weight_prime - 1)).round(2)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Return the reputation weights&lt;br /&gt;
  weight&lt;br /&gt;
end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
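The weight computation above can be exercised outside Rails. Below is a standalone Ruby sketch that mirrors the steps of calculate_reputation_score (per-reviewer averages, delta R, weight prime, and the logarithmic cap) on a small hypothetical marks matrix; it is an illustration of the listed logic, not code that ships with Expertiza.&lt;br /&gt;
&lt;br /&gt;
```ruby
# Standalone sketch mirroring the steps of calculate_reputation_score above.
# The 3x3 marks matrix below is hypothetical example data.
def hamer_weights(reviews)
  # Average of each reviewer's non-nil marks
  averages = reviews.map do |marks|
    present = marks.compact
    present.sum.to_f / present.length
  end

  # Delta R: mean squared deviation of a reviewer's marks from the averages
  delta_r = reviews.map do |marks|
    present = marks.compact
    present.each_with_index.sum { |grade, i| (grade - averages[i])**2 } / present.length
  end

  # Weight prime: average delta R divided by the reviewer's own delta R
  avg_delta_r = delta_r.sum / delta_r.length.to_f
  weight_prime = delta_r.map { |d| avg_delta_r / d }

  # Cap large weights logarithmically, as in the controller
  weight_prime.map { |w| w <= 2 ? w.round(2) : (2 + Math.log(w - 1)).round(2) }
end

reviews = [
  [90, 80, 70],  # consistent reviewer
  [95, 85, 75],  # consistent reviewer
  [10, 100, 50]  # erratic reviewer
]
puts hamer_weights(reviews).inspect
```
&lt;br /&gt;
With this input, the erratic third reviewer receives the smallest weight, which is the behavior the reputation system is meant to produce.&lt;br /&gt;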
&lt;br /&gt;
== Objective 4: Validate the accuracy of the newly implemented Hamer algorithm ==&lt;br /&gt;
&lt;br /&gt;
We test the newly implemented Hamer algorithm with our scenarios and verify that its outputs match the expected values.&lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe ReputationWebServiceController do&lt;br /&gt;
    it &amp;quot;should calculate correct Hamer calculation&amp;quot; do&lt;br /&gt;
      weights = ReputationWebServiceController.new.calculate_reputation_score(reviews)&lt;br /&gt;
      keys = [&amp;quot;maxtoall&amp;quot;, &amp;quot;mintoall&amp;quot;, &amp;quot;mediantoall&amp;quot;, &amp;quot;incomplete_review&amp;quot;, &amp;quot;max_incomplete&amp;quot;, &amp;quot;min_incomplete&amp;quot;, &amp;quot;sametoall&amp;quot;, &amp;quot;passing1&amp;quot;, &amp;quot;passing2&amp;quot;, &amp;quot;passing3&amp;quot;]&lt;br /&gt;
      rounded_weights = weights.map { |w| w.round(1) }&lt;br /&gt;
      result_hash = keys.zip(rounded_weights).to_h&lt;br /&gt;
      expect(result_hash).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results===&lt;br /&gt;
&lt;br /&gt;
[[File:Results hamer new.jpeg]]&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
As we can see, the results are skewed toward lower values. This is because of how our implementation handles nil values, effectively treating missing reviews as contributing nothing, which pulls the computed scores down. Imputing the median or a random value instead would reduce the skew.&lt;br /&gt;
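As a sketch of this refinement (a proposal of ours, not existing Expertiza code), nil marks could be imputed with the reviewer's median mark before the weights are computed:&lt;br /&gt;
&lt;br /&gt;
```ruby
# Sketch of the suggested refinement: replace nil marks with the median of
# the reviewer's non-nil marks before computing reputation weights.
# This is a proposal, not code that exists in Expertiza.
def impute_with_median(marks)
  present = marks.compact.sort
  return marks if present.empty?

  mid = present.length / 2
  median = if present.length.odd?
             present[mid]
           else
             (present[mid - 1] + present[mid]) / 2.0
           end
  marks.map { |m| m.nil? ? median : m }
end

puts impute_with_median([80, nil, 90, 70]).inspect  # => [80, 80, 90, 70]
```
&lt;br /&gt;
Because the imputed value sits in the middle of the reviewer's own distribution, it neither inflates nor deflates their average, so incomplete reviewers are no longer pushed toward low reputation weights by construction.&lt;br /&gt;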
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this project, we aimed to test the accuracy of the Hamer algorithm used for assessing the credibility of reviewers in a peer assessment system. We began by developing code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values. These scenarios covered various review scenarios, including cases where reviewers provided extreme scores.&lt;br /&gt;
&lt;br /&gt;
Upon testing the original reputation web server's Hamer values, we found discrepancies between the expected values and those returned by the web service. This led us to conclude that the web service implementation was incorrect.&lt;br /&gt;
&lt;br /&gt;
As a result, we proceeded to reimplement the Hamer algorithm in Ruby, incorporating adjustments to handle nil values appropriately. Subsequently, we validated the accuracy of the newly implemented algorithm using the same testing scenarios. While the results initially showed a skew towards lower values due to our treatment of nil values, we acknowledge the need for further refinement to handle these cases more effectively.&lt;br /&gt;
&lt;br /&gt;
In conclusion, this project highlights the importance of rigorous testing and implementation adjustments in ensuring the reliability of algorithms used in peer assessment systems. Moving forward, we recommend further refinements and validations to enhance the accuracy and robustness of the Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
==GitHub Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [???/]&lt;br /&gt;
&lt;br /&gt;
Link to Github Project page: [https://github.com/users/Prachit99/projects/1]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [????]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154653</id>
		<title>CSC/ECE 517 Spring 2024 - E2412. Testing for hamer.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154653"/>
		<updated>2024-03-25T04:08:18Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Conclusion????? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 Program 3: First OSS project E2412. Testing for hamer.rb&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
&lt;br /&gt;
The practice of using student feedback on assignments as a grading tool is gaining traction among university professors and courses. This approach not only saves instructors and teaching assistants considerable time but also fosters a deeper understanding of assignments among students as they evaluate their peers' work. However, there is a concern that some students may not take their reviewing responsibilities seriously, potentially skewing the grading process by assigning extreme scores such as 100 or 0 arbitrarily. To address this issue, the Hamer algorithm was developed to assess the credibility and accuracy of reviewers. It generates reputation weights for each reviewer, which instructors can use to gauge their reliability or incorporate into grading calculations. Our goal here is to test this Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values.&lt;br /&gt;
* Verify the correctness of the reputation web server's Hamer values by accessing the URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.&lt;br /&gt;
* Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values.&lt;br /&gt;
* Validate the accuracy of the newly implemented Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* reimplemented algorithm: /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
* test file: /spec/controllers/reputation_mock_web_server_hamer.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Hamer Algorithm ==&lt;br /&gt;
&lt;br /&gt;
The grading algorithm described in the paper is designed to provide a reward to reviewers who participate effectively by allocating a portion of the assignment mark to the review, with the review mark reflecting the quality of the grading. Here's an explanation of the algorithm:&lt;br /&gt;
&lt;br /&gt;
1. Review Allocation: Each reviewer is assigned a number of essays to grade. The paper suggests assigning at least five essays, with ten being ideal. Assuming each review takes 20 minutes, ten reviews can be completed in about three and a half hours.&lt;br /&gt;
&lt;br /&gt;
2. Grading Process:&lt;br /&gt;
* Once the reviewing is complete, grades are generated for each essay and weights are assigned to each reviewer.&lt;br /&gt;
* The essay grades are computed by averaging the individual grades from all the reviewers assigned to that essay.&lt;br /&gt;
* Initially, all reviewers are given equal weight in the averaging process.&lt;br /&gt;
* The algorithm assumes that some reviewers will perform better than others. It measures this by comparing the grades assigned by each reviewer to the averaged grades: the larger the difference between the assigned and averaged grades, the more out of step with the consensus view of the class the reviewer is considered to be.&lt;br /&gt;
* The algorithm adjusts the weighting of the reviewers based on this difference. Reviewers who are closer to the consensus view are given higher weights, while those who deviate significantly are given lower weights.&lt;br /&gt;
&lt;br /&gt;
3. Iterative Process:&lt;br /&gt;
* The calculation of grades and weights is an iterative process. Each time the grades are calculated, the weights need to be updated, and each change in the weights affects the grades.&lt;br /&gt;
* Convergence occurs quickly, typically requiring four to six iterations before a solution (a &amp;quot;fix-point&amp;quot;) is reached.&lt;br /&gt;
&lt;br /&gt;
4. Weight Adjustment:&lt;br /&gt;
* The weights assigned to reviewers are adjusted based on the difference between the assigned and averaged grades. Reviewers with larger discrepancies have their weights adjusted inversely proportional to this difference.&lt;br /&gt;
* To prevent excessively large weights, a logarithmic dampening function is applied, allowing weights to rise to twice the class average before further increases are awarded sparingly.&lt;br /&gt;
&lt;br /&gt;
5. Properties:&lt;br /&gt;
* The algorithm aims to identify and diminish the impact of &amp;quot;rogue&amp;quot; reviewers who may inject random or arbitrary grades into the peer assessment process.&lt;br /&gt;
* By adjusting reviewer weights based on their grading accuracy, the algorithm aims to improve the reliability of the grading process in the presence of such rogue reviewers.&lt;br /&gt;
&lt;br /&gt;
Overall, the algorithm seeks to balance the contributions of different reviewers based on the accuracy of their grading, ultimately aiming to produce reliable grades for each essay in a peer assessment scenario.&lt;br /&gt;
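&lt;br /&gt;
The iterative scheme above can be sketched in Ruby. This is our illustrative reading of the description, not code from the Hamer paper or from Expertiza; the inverse-deviation weight, the epsilon constant, and the exact dampening form are assumptions made for the sketch.&lt;br /&gt;
&lt;br /&gt;
```ruby
# Illustrative sketch of the iterative grade/weight scheme described above.
# reviews maps reviewer => { submission => score }.
def iterate_grades(reviews, iterations: 6)
  reviewers = reviews.keys
  weights = reviewers.to_h { |r| [r, 1.0] }
  submissions = reviews.values.flat_map { |h| h.keys }.uniq
  grades = {}
  iterations.times do
    # Grade each submission as the weighted mean of its reviews.
    grades = submissions.to_h do |s|
      scored = reviewers.select { |r| reviews[r].key?(s) }
      total = scored.sum { |r| weights[r] }
      [s, scored.sum { |r| weights[r] * reviews[r][s] } / total]
    end
    # Re-weight each reviewer inversely to their mean squared deviation.
    raw = reviewers.to_h do |r|
      devs = reviews[r].map { |s, score| (score - grades[s])**2 }
      [r, 1.0 / (devs.sum / devs.size + 0.01)] # epsilon avoids division by zero
    end
    avg = raw.values.sum / raw.size
    # Normalize, then dampen weights above twice the class average.
    weights = raw.to_h do |r, w|
      n = w / avg
      [r, n > 2.0 ? 2.0 + Math.log(n - 1.0) : n]
    end
  end
  [grades, weights]
end
```
&lt;br /&gt;
Run on a toy input with two agreeing reviewers and one rogue who grades arbitrarily, the rogue's weight drops well below the others within a few iterations, matching the behavior the paper describes.&lt;br /&gt;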
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios ==&lt;br /&gt;
We assumed ten reviewers, each reviewing up to four submissions, to cover the following test scenarios:&lt;br /&gt;
* 3 cases where reviewers give credible scores (passing1, passing2, passing3)&lt;br /&gt;
* a case where a reviewer gives the maximum score (10) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the minimum score (1) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the median score (5) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the same score (3) to all submissions (should be flagged)&lt;br /&gt;
* 3 cases where reviewers leave some reviews incomplete (incomplete_review, max_incomplete, min_incomplete)&lt;br /&gt;
&lt;br /&gt;
=== Object Creation ===&lt;br /&gt;
Below is the Input object for tests that cover all the above scenarios:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
INPUTS_new = {&lt;br /&gt;
    &amp;quot;submission1&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 9&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission2&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission3&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 7,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission4&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 6,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
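&lt;br /&gt;
As a quick sanity check on this input, the constant-score reviewers that the algorithm should flag can be identified mechanically. The helper below is ours, written for illustration only; it is not part of the test suite:&lt;br /&gt;
&lt;br /&gt;
```ruby
require 'json'

# Collect each reviewer's scores across submissions and report the reviewers
# whose scores never vary (the maxtoall/mintoall/mediantoall/sametoall cases).
def constant_reviewers(inputs_json)
  scores = Hash.new { |h, k| h[k] = [] }
  JSON.parse(inputs_json).each_value do |submission_scores|
    submission_scores.each { |reviewer, score| scores[reviewer] << score }
  end
  scores.select { |_, s| s.size > 1 && s.uniq.size == 1 }.keys
end
```
&lt;br /&gt;
Applied to INPUTS_new above, this returns exactly the four constant-score reviewers, while the passing and incomplete reviewers (whose scores vary) are excluded.&lt;br /&gt;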
&lt;br /&gt;
=== Expected Hamer Values ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
EXPECTED = {&lt;br /&gt;
    &amp;quot;Hamer&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 2.65,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 2.41,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 1.03,&lt;br /&gt;
        &amp;quot;incomplete_review&amp;quot;: 2.31,&lt;br /&gt;
        &amp;quot;max_incomplete&amp;quot;: 2.57,&lt;br /&gt;
        &amp;quot;min_incomplete&amp;quot;: 2.48,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;:1.58,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 2.17,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 1.73,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 1.23,&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Verify the correctness of the reputation web server's Hamer values ==&lt;br /&gt;
&lt;br /&gt;
We test the original reputation web server's algorithm with our scenarios and verify whether the results match the expected values. The PeerLogic server can be accessed via API calls to the URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.&lt;br /&gt;
It offers two algorithms: the Hamer-peer algorithm and the Lauw-peer algorithm. Our scope for this project is to test the Hamer values, since previous work has established that the Hamer algorithm suits our use case better.&lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'json'&lt;br /&gt;
require 'net/http'&lt;br /&gt;
&lt;br /&gt;
describe &amp;quot;Expertiza&amp;quot; do&lt;br /&gt;
    it &amp;quot;should return the correct Hamer calculation&amp;quot; do&lt;br /&gt;
        uri = URI('http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms')&lt;br /&gt;
    &lt;br /&gt;
        # INPUTS_new is the input hash defined above, serialized to JSON&lt;br /&gt;
        response = Net::HTTP.post(uri, INPUTS_new, 'Content-Type' =&amp;gt; 'application/json')&lt;br /&gt;
    &lt;br /&gt;
        expect(JSON.parse(response.body)[&amp;quot;Hamer&amp;quot;]).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
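&lt;br /&gt;
Exact equality matching on floating-point reputation values, as in the snippet above, is brittle when the service rounds differently than the expected hash. A tolerance-based comparison is safer; the helper below is our illustration, not code from the project:&lt;br /&gt;
&lt;br /&gt;
```ruby
# Compare two reviewer => Hamer-value hashes, allowing a small absolute
# tolerance to absorb rounding differences between client and service.
def hamer_values_match?(expected, actual, tolerance: 0.005)
  expected.keys.sort == actual.keys.sort &&
    expected.all? { |reviewer, value| (value - actual[reviewer]).abs <= tolerance }
end
```
&lt;br /&gt;
In the spec, this could back an expectation such as expect(hamer_values_match?(expected_hash, actual_hash)).to be true, rather than requiring the two hashes to be bit-for-bit equal.&lt;br /&gt;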
&lt;br /&gt;
=== Results ===&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
&lt;br /&gt;
The values returned by this web service do not match the expected values. Hence, we conclude that the web service is not implemented correctly.&lt;br /&gt;
&lt;br /&gt;
== Objective 3: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values. ==&lt;br /&gt;
&lt;br /&gt;
As seen above, the values returned by the reputation server do not match the expected values, so we concluded that the PeerLogic web service is implemented incorrectly. In this phase, we implemented the algorithm in Ruby as a function in a controller file: /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
&lt;br /&gt;
=== Changes made in implementation ===&lt;br /&gt;
* Coded the algorithm in Ruby in a controller.&lt;br /&gt;
* Included handling for nil values in the algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Code Snippet ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'json'&lt;br /&gt;
require 'uri'&lt;br /&gt;
require 'net/http'&lt;br /&gt;
require 'openssl'&lt;br /&gt;
require 'base64'&lt;br /&gt;
&lt;br /&gt;
# Expertiza allows student work to be peer-reviewed, since peers can provide&lt;br /&gt;
# more feedback than the instructor can.&lt;br /&gt;
# However, if we want to assure that all students receive competent feedback,&lt;br /&gt;
# or even use peer-assigned grades,&lt;br /&gt;
# we need a way to judge which peer reviewers are most credible. The solution&lt;br /&gt;
# is the reputation system.&lt;br /&gt;
# Reputation systems have been deployed as web services, peer-review&lt;br /&gt;
# researchers will be able to use them to calculate scores on assignments,&lt;br /&gt;
# both past and present (past data can be used to tune the algorithms).&lt;br /&gt;
#&lt;br /&gt;
# This file is the controller to calculate the reputation scores.&lt;br /&gt;
# A 'reputation' measures how close a reviewer's scores are to other reviewers'&lt;br /&gt;
# scores.&lt;br /&gt;
# This controller implements the calculation of reputation scores.&lt;br /&gt;
class ReputationWebServiceController &amp;lt; ApplicationController&lt;br /&gt;
  include AuthorizationHelper&lt;br /&gt;
&lt;br /&gt;
  # Method: action_allowed&lt;br /&gt;
  # This method checks if the currently authenticated user has the authorization&lt;br /&gt;
  # to perform certain actions&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   true if the user has privileges to perform the action else returns false&lt;br /&gt;
  def action_allowed?&lt;br /&gt;
    current_user_has_ta_privileges?&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_max_question_score&lt;br /&gt;
  # This method receives a set of answers and gets the maximum question score&lt;br /&gt;
  # Params&lt;br /&gt;
  #   answers: set of answers&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   if no error returns max_question_score of first question else 1&lt;br /&gt;
  def get_max_question_score(answers)&lt;br /&gt;
    begin&lt;br /&gt;
      answers.first.question.questionnaire.max_question_score&lt;br /&gt;
    rescue StandardError&lt;br /&gt;
      1&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_valid_answers_for_response&lt;br /&gt;
  # This method receives response and filters the valid answers list of the&lt;br /&gt;
  # response ID&lt;br /&gt;
  # Params&lt;br /&gt;
  #   response&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   set of valid answers (returns nil if empty)&lt;br /&gt;
  def get_valid_answers_for_response(response)&lt;br /&gt;
    answers = Answer.where(response_id: response.id)&lt;br /&gt;
    valid_answer = answers.select { |answer| (answer.question.type == 'Criterion') &amp;amp;&amp;amp; !answer.answer.nil? }&lt;br /&gt;
    valid_answer.empty? ? nil : valid_answer&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: calculate_peer_review_grade&lt;br /&gt;
  # This method calculates a cumulative review grade with respect to the set of valid answers&lt;br /&gt;
  # Params&lt;br /&gt;
  #   valid_answer: valid answer to get weight of the answer's question&lt;br /&gt;
  #   max_question_score: used to calculate maximum score for peer review grade&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   peer_review_grade&lt;br /&gt;
  def calculate_peer_review_grade(valid_answer, max_question_score)&lt;br /&gt;
    weighted_score_sum = valid_answer.map { |answer| answer.answer * answer.question.weight }.inject(:+)&lt;br /&gt;
    question_weight_sum = valid_answer.sum { |answer| answer.question.weight }&lt;br /&gt;
    peer_review_grade = 100.0 * weighted_score_sum / (question_weight_sum * max_question_score)&lt;br /&gt;
    peer_review_grade.round(4)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_peer_reviews_for_responses&lt;br /&gt;
  # This method calculates the peer review grade for each valid response&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reviewer_id: used to create respective element in the peer_review_grades_list&lt;br /&gt;
  #   team_id: used to create respective element in the peer_review_grades_list&lt;br /&gt;
  #   valid_response: to get the valid answer for each valid response&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   peer_review_grades_list&lt;br /&gt;
  def get_peer_reviews_for_responses(reviewer_id, team_id, valid_response)&lt;br /&gt;
    peer_review_grades_list = []&lt;br /&gt;
    valid_response.each do |response|&lt;br /&gt;
      valid_answer = get_valid_answers_for_response(response)&lt;br /&gt;
      next if valid_answer.nil?&lt;br /&gt;
&lt;br /&gt;
      review_grade = calculate_peer_review_grade(valid_answer, get_max_question_score(valid_answer))&lt;br /&gt;
      peer_review_grades_list &amp;lt;&amp;lt; [reviewer_id, team_id, review_grade]&lt;br /&gt;
    end&lt;br /&gt;
    peer_review_grades_list&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_peer_reviews&lt;br /&gt;
  # This method retrieves all the reviews for the submissions&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: used to retrieve response map&lt;br /&gt;
  #   round_num: used to retrieve round_num for the valid response&lt;br /&gt;
  #   has_topic: to get the topic condition&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: which corresponds to the return of&lt;br /&gt;
  #     get_peer_reviews_for_responses method and appended to the raw_data_array&lt;br /&gt;
  def get_peer_reviews(assignment_id_list, round_num, has_topic)&lt;br /&gt;
    raw_data_array = []&lt;br /&gt;
    ReviewResponseMap.where('reviewed_object_id in (?) and calibrate_to = ?', assignment_id_list, false).each do |response_map|&lt;br /&gt;
      reviewer = response_map.reviewer.user&lt;br /&gt;
      team = AssignmentTeam.find(response_map.reviewee_id)&lt;br /&gt;
      topic_condition = ((has_topic &amp;amp;&amp;amp; (SignedUpTeam.where(team_id: team.id).first.is_waitlisted == false)) || !has_topic)&lt;br /&gt;
      last_valid_response = response_map.response.select { |r| r.round == round_num }.max&lt;br /&gt;
      valid_response = [last_valid_response] unless last_valid_response.nil?&lt;br /&gt;
      if (topic_condition == true) &amp;amp;&amp;amp; !valid_response.nil? &amp;amp;&amp;amp; !valid_response.empty?&lt;br /&gt;
        raw_data_array += get_peer_reviews_for_responses(reviewer.id, team.id, valid_response)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
    raw_data_array&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_ids_list&lt;br /&gt;
  # This method maps each object to the corresponding object's ID&lt;br /&gt;
  # Params&lt;br /&gt;
  #   tables: any table&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   id in the tables&lt;br /&gt;
  def get_ids_list(tables)&lt;br /&gt;
    tables.map(&amp;amp;:id)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_scores&lt;br /&gt;
  # This method gets the quiz score of each participant for respective reviewee&lt;br /&gt;
  # Params&lt;br /&gt;
  #   team_ids: list of team IDs&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: which is a list of participant, reviewee and the participant's quiz score&lt;br /&gt;
  def get_scores(team_ids)&lt;br /&gt;
    quiz_questionnaires = QuizQuestionnaire.where('instructor_id in (?)', team_ids)&lt;br /&gt;
    quiz_questionnaire_ids = get_ids_list(quiz_questionnaires)&lt;br /&gt;
    raw_data_array = []&lt;br /&gt;
    QuizResponseMap.where('reviewed_object_id in (?)', quiz_questionnaire_ids).each do |response_map|&lt;br /&gt;
      quiz_score = response_map.quiz_score&lt;br /&gt;
      participant = Participant.find(response_map.reviewer_id)&lt;br /&gt;
      raw_data_array &amp;lt;&amp;lt; [participant.user_id, response_map.reviewee_id, quiz_score]&lt;br /&gt;
    end&lt;br /&gt;
    raw_data_array&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_quiz_score&lt;br /&gt;
  # This method gets the quiz score of assignments&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment IDs&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: returned by get_scores method, which is a list of participant,&lt;br /&gt;
  #     reviewee and the participant's quiz score&lt;br /&gt;
  def get_quiz_score(assignment_id_list)&lt;br /&gt;
    teams = AssignmentTeam.where('parent_id in (?)', assignment_id_list)&lt;br /&gt;
    team_ids = get_ids_list(teams)&lt;br /&gt;
    get_scores(team_ids)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_body&lt;br /&gt;
  # This method generates json body for the peer reviews and quiz scores&lt;br /&gt;
  # Params&lt;br /&gt;
  #   results: list of grades with corresponding team/participant ID,&lt;br /&gt;
  #     reviewee ID and their score&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: returns the formatted body after sorting the hash&lt;br /&gt;
  def generate_json_body(results)&lt;br /&gt;
    request_body = {}&lt;br /&gt;
    results.each_with_index do |record, _index|&lt;br /&gt;
      request_body['submission' + record[1].to_s] = {} unless request_body.key?('submission' + record[1].to_s)&lt;br /&gt;
      request_body['submission' + record[1].to_s]['stu' + record[0].to_s] = record[2]&lt;br /&gt;
    end&lt;br /&gt;
    # sort the two-dimensional hash: inner hashes first, then the outer hash&lt;br /&gt;
    request_body.each { |k, v| request_body[k] = v.sort.to_h }&lt;br /&gt;
    request_body.sort.to_h&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_for_peer_reviews&lt;br /&gt;
  # This method retrieves all the peer reviews associated with&lt;br /&gt;
  # the assignment id list by calling the get_peer_reviews method.&lt;br /&gt;
  # It then formats the peer-review list in JSON.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment ids to get quiz scores for&lt;br /&gt;
  #   round_num: round number of the review&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: request body populated with the formatted peer review data.&lt;br /&gt;
  def generate_json_for_peer_reviews(assignment_id_list, round_num = 2)&lt;br /&gt;
    has_topic = !SignUpTopic.where(assignment_id: assignment_id_list[0]).empty?&lt;br /&gt;
&lt;br /&gt;
    peer_reviews_list = get_peer_reviews(assignment_id_list, round_num, has_topic)&lt;br /&gt;
    request_body = generate_json_body(peer_reviews_list)&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_for_quiz_scores&lt;br /&gt;
  # This method accepts a list of assignment ids as an argument.&lt;br /&gt;
  # It then calls the get_quiz_score method on the list to get&lt;br /&gt;
  # maps of teams and scores for the given assignments.&lt;br /&gt;
  # The map is then formatted into JSON.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment ids to get quiz scores for&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: request body populated with quiz scores&lt;br /&gt;
  def generate_json_for_quiz_scores(assignment_id_list)&lt;br /&gt;
    participant_reviewee_map = get_quiz_score(assignment_id_list)&lt;br /&gt;
    request_body = generate_json_body(participant_reviewee_map)&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: client&lt;br /&gt;
  # This method is called when the url reputation_web_service/client&lt;br /&gt;
  # is hit using GET method.&lt;br /&gt;
  # This renders the client.html.erb&lt;br /&gt;
  # It also populates the instance variables to be used in the views&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def client&lt;br /&gt;
    @max_assignment_id = Assignment.last.id&lt;br /&gt;
    @assignment = Assignment.find(flash[:assignment_id]) rescue nil&lt;br /&gt;
    @another_assignment = Assignment.find(flash[:another_assignment_id]) rescue nil&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: update_participants_reputation&lt;br /&gt;
  # This method accepts the response body in the JSON format.&lt;br /&gt;
  # It then parses the JSON and updates the reputation scores of the&lt;br /&gt;
  # participants in the list.&lt;br /&gt;
  # If the algorithm is not Hamer or Lauw, the update step is skipped.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reputation_response: The response from the reputation web service&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def update_participants_reputation(reputation_response)&lt;br /&gt;
    JSON.parse(reputation_response.body.to_s).each do |reputation_algorithm, user_reputation_list|&lt;br /&gt;
      next unless %w[Hamer Lauw].include?(reputation_algorithm)&lt;br /&gt;
&lt;br /&gt;
      user_reputation_list.each do |user_id, reputation|&lt;br /&gt;
        Participant.find_by(user_id: user_id).update(reputation_algorithm.to_sym =&amp;gt; reputation) unless /leniency/ =~ user_id.to_s&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: process_response_body&lt;br /&gt;
  # This method gets the control after receiving a response from the server.&lt;br /&gt;
  # It receives the response body as an argument&lt;br /&gt;
  # It updates the instance variables related to the response.&lt;br /&gt;
  # It then calls the update_participants_reputation to update the reputation&lt;br /&gt;
  # scores received in the response body.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reputation_response: The response from the reputation web service&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def process_response_body(reputation_response)&lt;br /&gt;
    flash[:response] = reputation_response&lt;br /&gt;
    flash[:response_body] = reputation_response.body&lt;br /&gt;
    update_participants_reputation(reputation_response)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_expert_grades&lt;br /&gt;
  # It prepends the expert grades for the default Wiki-contribution&lt;br /&gt;
  # assignment (754) to the request body.&lt;br /&gt;
  # It receives the request body as an argument and prepends to it&lt;br /&gt;
  # Params&lt;br /&gt;
  #   body: The request body to add the expert grades to&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   body prepended with the expert grades&lt;br /&gt;
  def add_expert_grades(body)&lt;br /&gt;
    flash[:additional_info] = 'add expert grades'&lt;br /&gt;
    case params[:assignment_id]&lt;br /&gt;
    when '754' # expert grades of Wiki contribution (754)&lt;br /&gt;
      body.prepend('&amp;quot;expert_grades&amp;quot;: {&amp;quot;submission25030&amp;quot;:95,&amp;quot;submission25031&amp;quot;:92,&amp;quot;submission25033&amp;quot;:88,&amp;quot;submission25034&amp;quot;:98,&amp;quot;submission25035&amp;quot;:100,&amp;quot;submission25037&amp;quot;:95,&amp;quot;submission25038&amp;quot;:95,&amp;quot;submission25039&amp;quot;:93,&amp;quot;submission25040&amp;quot;:96,&amp;quot;submission25041&amp;quot;:90,&amp;quot;submission25042&amp;quot;:100,&amp;quot;submission25046&amp;quot;:95,&amp;quot;submission25049&amp;quot;:90,&amp;quot;submission25050&amp;quot;:88,&amp;quot;submission25053&amp;quot;:91,&amp;quot;submission25054&amp;quot;:96,&amp;quot;submission25055&amp;quot;:94,&amp;quot;submission25059&amp;quot;:96,&amp;quot;submission25071&amp;quot;:85,&amp;quot;submission25082&amp;quot;:100,&amp;quot;submission25086&amp;quot;:95,&amp;quot;submission25097&amp;quot;:90,&amp;quot;submission25098&amp;quot;:85,&amp;quot;submission25102&amp;quot;:97,&amp;quot;submission25103&amp;quot;:94,&amp;quot;submission25105&amp;quot;:98,&amp;quot;submission25114&amp;quot;:95,&amp;quot;submission25115&amp;quot;:94},')&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_quiz_scores&lt;br /&gt;
  # It gets the assignment id list and generates the JSON of quiz scores for&lt;br /&gt;
  # those assignments.&lt;br /&gt;
  # The formatted quiz-score string is then prepended to the request body,&lt;br /&gt;
  # received as an argument, and the body is returned to prepare_request_body.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   body: The request body to add the quiz scores to&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   body prepended with the quiz scores&lt;br /&gt;
  def add_quiz_scores(body)&lt;br /&gt;
    flash[:additional_info] = 'add quiz scores'&lt;br /&gt;
    assignment_id_list_quiz = get_assignment_id_list(params[:assignment_id].to_i, params[:another_assignment_id].to_i)&lt;br /&gt;
    quiz_str =  generate_json_for_quiz_scores(assignment_id_list_quiz).to_json&lt;br /&gt;
    quiz_str[0] = '' # remove first {&lt;br /&gt;
    quiz_str.prepend('&amp;quot;quiz_scores&amp;quot;:{') # add quiz_scores tag&lt;br /&gt;
    quiz_str += ','&lt;br /&gt;
    quiz_str = quiz_str.gsub('&amp;quot;N/A&amp;quot;', '20.0') # replace N/A values with 20&lt;br /&gt;
    body.prepend(quiz_str)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_hamer_reputation_values&lt;br /&gt;
  # This method sets flash[:additional_info].&lt;br /&gt;
  # This method is called by the add_additional_info_details method&lt;br /&gt;
  # when params receive instruction through the corresponding view's checkbox.&lt;br /&gt;
  # THIS METHOD IS NOT IMPLEMENTED&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_hamer_reputation_values&lt;br /&gt;
    flash[:additional_info] = 'add initial hamer reputation values'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_lauw_reputation_values&lt;br /&gt;
  # This method sets flash[:additional_info].&lt;br /&gt;
  # This method is called by the add_additional_info_details method&lt;br /&gt;
  # when params receive instruction through the corresponding view's checkbox.&lt;br /&gt;
  # THIS METHOD IS NOT IMPLEMENTED&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_lauw_reputation_values&lt;br /&gt;
    flash[:additional_info] = 'add initial lauw reputation values'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_assignment_id_list&lt;br /&gt;
  # This method on receipt of individual assignment IDs returns a list with all&lt;br /&gt;
  # the assignment IDs appended into a data structure&lt;br /&gt;
  # This function accepts 2 arguments, with the second argument being optional,&lt;br /&gt;
  # and returns the list assignment_id_list&lt;br /&gt;
  # If the second argument is 0, it is not appended to the list.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_one: first assignment id (required)&lt;br /&gt;
  #   assignment_id_two: second assignment id (optional)&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   assignment_id_list: list containing two assignment ids&lt;br /&gt;
  def get_assignment_id_list(assignment_id_one, assignment_id_two = 0)&lt;br /&gt;
    assignment_id_list = []&lt;br /&gt;
    assignment_id_list &amp;lt;&amp;lt; assignment_id_one&lt;br /&gt;
    assignment_id_list &amp;lt;&amp;lt; assignment_id_two unless assignment_id_two.zero?&lt;br /&gt;
    assignment_id_list&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_flash_messages&lt;br /&gt;
  # This method sets the flash messages to pass on to the next request i.e&lt;br /&gt;
  # the request redirected to the client&lt;br /&gt;
  # Params&lt;br /&gt;
  #   post_req: This contains the entire post_req that needs to be sent to the reputation&lt;br /&gt;
  #     webservice&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_flash_messages(post_req)&lt;br /&gt;
    flash[:assignment_id] = params[:assignment_id]&lt;br /&gt;
    flash[:round_num] = params[:round_num]&lt;br /&gt;
    flash[:algorithm] = params[:algorithm]&lt;br /&gt;
    flash[:another_assignment_id] = params[:another_assignment_id]&lt;br /&gt;
    flash[:request_body] = post_req.body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_additional_info_details&lt;br /&gt;
  # This method sets the additional info details based on the options&lt;br /&gt;
  # selected in the additional information section. We populate the request&lt;br /&gt;
  # based on the selections&lt;br /&gt;
  # Params&lt;br /&gt;
  #   post_req: This contains the entire post_req that needs to be sent to the reputation&lt;br /&gt;
  #     webservice&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_additional_info_details(post_req)&lt;br /&gt;
    if params[:checkbox][:expert_grade] == 'Add expert grades'&lt;br /&gt;
      add_expert_grades(post_req.body)&lt;br /&gt;
    elsif params[:checkbox][:hamer] == 'Add initial Hamer reputation values'&lt;br /&gt;
      add_hamer_reputation_values&lt;br /&gt;
    elsif params[:checkbox][:lauw] == 'Add initial Lauw reputation values'&lt;br /&gt;
      add_lauw_reputation_values&lt;br /&gt;
    elsif params[:checkbox][:quiz] == 'Add quiz scores'&lt;br /&gt;
      add_quiz_scores(post_req.body)&lt;br /&gt;
    else&lt;br /&gt;
      flash[:additional_info] = ''&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: prepare_request_body&lt;br /&gt;
  # This method is responsible for preparing the request body in a proper format&lt;br /&gt;
  # to send to the server. It populates the assignment scores and peer review&lt;br /&gt;
  # scores. It also populates the flash messages to send to the next request&lt;br /&gt;
  # It finally sends the prepared request body back to the send_post_request&lt;br /&gt;
  # method.&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def prepare_request_body&lt;br /&gt;
    reputation_web_service_path = URI.parse(WEBSERVICE_CONFIG['reputation_web_service_url']).path&lt;br /&gt;
    post_req = Net::HTTP::Post.new(reputation_web_service_path, { 'Content-Type' =&amp;gt; 'application/json', 'charset' =&amp;gt; 'utf-8' })&lt;br /&gt;
    curr_assignment_id = (params[:assignment_id].empty? ? '754' : params[:assignment_id])&lt;br /&gt;
    assignment_id_list_peers = get_assignment_id_list(curr_assignment_id, params[:another_assignment_id].to_i)&lt;br /&gt;
&lt;br /&gt;
    post_req.body = generate_json_for_peer_reviews(assignment_id_list_peers, params[:round_num].to_i).to_json&lt;br /&gt;
&lt;br /&gt;
    post_req.body[0] = '' # remove the first '{'&lt;br /&gt;
    add_additional_info_details post_req&lt;br /&gt;
    post_req.body.prepend('{')&lt;br /&gt;
    add_flash_messages post_req&lt;br /&gt;
    post_req&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: send_post_request&lt;br /&gt;
  # This method calls the prepare_request_body function to get a prepared&lt;br /&gt;
  # request body in proper format to send to the server.&lt;br /&gt;
  # It populates the assignment scores and peer review&lt;br /&gt;
  # scores. It also populates the flash messages to send to the next request&lt;br /&gt;
  # We redirect to the client url to display the results.&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def send_post_request&lt;br /&gt;
    post_req = prepare_request_body&lt;br /&gt;
    reputation_web_service_hostname = URI.parse(WEBSERVICE_CONFIG['reputation_web_service_url']).host&lt;br /&gt;
    reputation_response = Net::HTTP.new(reputation_web_service_hostname).start { |http| http.request(post_req) }&lt;br /&gt;
    if %w[400 500].include?(reputation_response.code)&lt;br /&gt;
      flash[:error] = 'Post Request Failed'&lt;br /&gt;
    else&lt;br /&gt;
      process_response_body(reputation_response)&lt;br /&gt;
    end&lt;br /&gt;
    redirect_to action: 'client'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: calculate_reputation_score&lt;br /&gt;
  # This method calculates the reputation scores for each reviewer based on the provided input data.&lt;br /&gt;
  # It first calculates the average weighted grades per reviewer and the delta R values.&lt;br /&gt;
  # Next, it calculates the weight prime values based on the delta R values.&lt;br /&gt;
  # Finally, it calculates the reputation weights for each reviewer using the weight prime values.&lt;br /&gt;
  #&lt;br /&gt;
  # Params&lt;br /&gt;
  #   - reviews: an array of per-reviewer mark arrays (parsed from the input JSON)&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   An array of reputation scores, one score per reviewer, indicating their reputation in the system.&lt;br /&gt;
  def calculate_reputation_score(reviews)&lt;br /&gt;
    # Initialize arrays to store intermediate values&lt;br /&gt;
    grades = []&lt;br /&gt;
    delta_r = []&lt;br /&gt;
    weight_prime = []&lt;br /&gt;
    weight = []&lt;br /&gt;
&lt;br /&gt;
    # Calculate Average Weighted Grades per Reviewer&lt;br /&gt;
    reviews.each do |reviewer_marks|&lt;br /&gt;
      # Skip nil values when calculating the sum&lt;br /&gt;
      reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
      assignment_grade_average = reviewer_marks_without_nil.sum.to_f / reviewer_marks_without_nil.length&lt;br /&gt;
      grades &amp;lt;&amp;lt; assignment_grade_average&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    # Calculate delta R&lt;br /&gt;
    reviews.each do |reviewer_marks|&lt;br /&gt;
      reviewer_delta_r = 0&lt;br /&gt;
      # Skip nil values when calculating the sum&lt;br /&gt;
      reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
      reviewer_marks_without_nil.each_with_index do |grade, student_index|&lt;br /&gt;
        reviewer_delta_r += (grade - grades[student_index]) ** 2&lt;br /&gt;
      end&lt;br /&gt;
      delta_r &amp;lt;&amp;lt; reviewer_delta_r / reviewer_marks_without_nil.length&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    # Calculate weight prime&lt;br /&gt;
    average_delta_r = delta_r.sum / delta_r.length.to_f&lt;br /&gt;
&lt;br /&gt;
    delta_r.each do |reviewer_delta_r|&lt;br /&gt;
      weight_prime &amp;lt;&amp;lt; average_delta_r / reviewer_delta_r&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    # Calculate reputation weight&lt;br /&gt;
    weight_prime.each do |reviewer_weight_prime|&lt;br /&gt;
      if reviewer_weight_prime &amp;lt;= 2&lt;br /&gt;
        weight &amp;lt;&amp;lt; reviewer_weight_prime.round(2)&lt;br /&gt;
      else&lt;br /&gt;
        weight &amp;lt;&amp;lt; (2 + Math.log(reviewer_weight_prime - 1)).round(2)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    # Return the reputation weights&lt;br /&gt;
    weight&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
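The reimplemented calculate_reputation_score above is plain Ruby apart from its controller wrapper, so its core math can be exercised outside Rails. Below is a condensed standalone sketch of the same logic (the name reputation_weights and the sample marks are our own, invented for illustration): the extreme first and third mark rows deviate more from the per-reviewer averages and so earn lower weights than the middle row.

```ruby
# Condensed standalone sketch of the calculate_reputation_score logic.
# reviews is an array of per-reviewer mark arrays; nil marks are skipped.
def reputation_weights(reviews)
  # Average mark given by each reviewer, ignoring missing marks
  grades = reviews.map do |marks|
    present = marks.compact
    present.sum.to_f / present.length
  end

  # Mean squared deviation of each reviewer's marks from those averages
  delta_r = reviews.map do |marks|
    present = marks.compact
    sum_sq = present.each_with_index.sum { |g, j| (g - grades[j])**2 }
    sum_sq / present.length
  end

  # Weight prime: inverse deviation relative to the class average,
  # log-dampened once it exceeds 2
  average_delta_r = delta_r.sum / delta_r.length.to_f
  del_weights = delta_r.map do |d|
    wp = average_delta_r / d
    (wp <= 2 ? wp : 2 + Math.log(wp - 1)).round(2)
  end
  del_weights
end

weights = reputation_weights([[10, 1, 5], [9, 2, 5], [1, 10, 5]])
puts weights.inspect # prints [0.87, 1.43, 0.87]
```

Note that a reviewer whose marks exactly equal the averages would make the division by delta R blow up; the production code does not guard against this either, so the sample data avoids that case.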
&lt;br /&gt;
== Objective 4: Validate the accuracy of the newly implemented Hamer algorithm ==&lt;br /&gt;
&lt;br /&gt;
We test the newly implemented Hamer algorithm function with our scenarios and verify whether the computed weights match the expected values. &lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe ReputationWebServiceController do&lt;br /&gt;
    it &amp;quot;should calculate the correct Hamer reputation weights&amp;quot; do&lt;br /&gt;
      weights = ReputationWebServiceController.new.calculate_reputation_score(reviews)&lt;br /&gt;
      keys = [&amp;quot;maxtoall&amp;quot;, &amp;quot;mintoall&amp;quot;, &amp;quot;mediantoall&amp;quot;, &amp;quot;incomplete_review&amp;quot;, &amp;quot;sametoall&amp;quot;, &amp;quot;passing1&amp;quot;, &amp;quot;passing2&amp;quot;, &amp;quot;passing3&amp;quot;]&lt;br /&gt;
      rounded_weights = weights.map { |w| w.round(1) }&lt;br /&gt;
      result_hash = keys.zip(rounded_weights).to_h&lt;br /&gt;
      expect(result_hash).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results===&lt;br /&gt;
&lt;br /&gt;
[[File:Results hamer new.jpeg]]&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
As we can see, the results are skewed towards lower values. This is because we included nil values and treated them as zero; that assumption pulls the scores downward. Imputing the median (or a random value within the score range) for missing scores instead would produce less skew.&lt;br /&gt;
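The skew can be demonstrated with a toy example (the numbers here are invented for illustration): substituting zero for a missing mark drags a reviewer's average down, while substituting the median of the observed marks keeps it close to the observed mean.

```ruby
# Toy illustration of how the nil-handling strategy shifts a reviewer's average.
scores = [8, 9, nil, 7] # one missing review mark

observed = scores.compact
median = observed.sort[observed.length / 2]

zero_filled = scores.map { |s| s || 0 }        # nil treated as zero
median_filled = scores.map { |s| s || median } # nil treated as the median

avg = ->(marks) { marks.sum.to_f / marks.length }
puts avg.call(zero_filled)   # prints 6.0 (skewed low)
puts avg.call(median_filled) # prints 8.0 (matches the observed mean)
```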
&lt;br /&gt;
==GitHub Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [???/]&lt;br /&gt;
&lt;br /&gt;
Link to Github Project page: [https://github.com/users/Prachit99/projects/1 here]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [????]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154652</id>
		<title>CSC/ECE 517 Spring 2024 - E2412. Testing for hamer.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154652"/>
		<updated>2024-03-25T04:06:24Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Results????? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 Program 3: First OSS project E2412. Testing for hamer.rb&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
&lt;br /&gt;
The practice of using student feedback on assignments as a grading tool is gaining traction among university professors and courses. This approach not only saves instructors and teaching assistants considerable time but also fosters a deeper understanding of assignments among students as they evaluate their peers' work. However, there is a concern that some students may not take their reviewing responsibilities seriously, potentially skewing the grading process by assigning extreme scores such as 100 or 0 arbitrarily. To address this issue, the Hamer algorithm was developed to assess the credibility and accuracy of reviewers. It generates reputation weights for each reviewer, which instructors can use to gauge their reliability or incorporate into grading calculations. Our goal here is to test this Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values.&lt;br /&gt;
* Verify the correctness of the reputation web server's Hamer values by accessing the URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.&lt;br /&gt;
* Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values.&lt;br /&gt;
* Validate the accuracy of the newly implemented Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* reimplemented algorithm: /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
* test file: /spec/controllers/reputation_mock_web_server_hamer.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Hamer Algorithm ==&lt;br /&gt;
&lt;br /&gt;
The grading algorithm described in the paper is designed to provide a reward to reviewers who participate effectively by allocating a portion of the assignment mark to the review, with the review mark reflecting the quality of the grading. Here's an explanation of the algorithm:&lt;br /&gt;
&lt;br /&gt;
1. Review Allocation: Each reviewer is assigned a number of essays to grade. The paper suggests assigning at least five essays, with ten being ideal. Assuming each review takes 20 minutes, ten reviews can be completed in about three and a half hours.&lt;br /&gt;
&lt;br /&gt;
2. Grading Process:&lt;br /&gt;
* Once the reviewing is complete, grades are generated for each essay and weights are assigned to each reviewer.&lt;br /&gt;
* The essay grades are computed by averaging the individual grades from all the reviewers assigned to that essay.&lt;br /&gt;
* Initially, all reviewers are given equal weight in the averaging process.&lt;br /&gt;
* The algorithm assumes that some reviewers will perform better than others. It measures this by comparing the grades assigned by each reviewer to the averaged grades. The larger the difference between the assigned and averaged grades, the more out of step the reviewer is considered with the consensus view of the class.&lt;br /&gt;
* The algorithm adjusts the weighting of the reviewers based on this difference. Reviewers who are closer to the consensus view are given higher weights, while those who deviate significantly are given lower weights.&lt;br /&gt;
&lt;br /&gt;
3. Iterative Process:&lt;br /&gt;
* The calculation of grades and weights is an iterative process. Each time the grades are calculated, the weights need to be updated, and each change in the weights affects the grades.&lt;br /&gt;
* Convergence occurs quickly, typically requiring four to six iterations before a solution (a &amp;quot;fix-point&amp;quot;) is reached.&lt;br /&gt;
&lt;br /&gt;
4. Weight Adjustment:&lt;br /&gt;
* The weights assigned to reviewers are adjusted based on the difference between the assigned and averaged grades. Reviewers with larger discrepancies have their weights adjusted inversely proportional to this difference.&lt;br /&gt;
* To prevent excessively large weights, a logarithmic dampening function is applied, allowing weights to rise to twice the class average before further increases are awarded sparingly.&lt;br /&gt;
&lt;br /&gt;
5. Properties:&lt;br /&gt;
* The algorithm aims to identify and diminish the impact of &amp;quot;rogue&amp;quot; reviewers who may inject random or arbitrary grades into the peer assessment process.&lt;br /&gt;
* By adjusting reviewer weights based on their grading accuracy, the algorithm aims to improve the reliability of the grading process in the presence of such rogue reviewers.&lt;br /&gt;
&lt;br /&gt;
Overall, the algorithm seeks to balance the contributions of different reviewers based on the accuracy of their grading, ultimately aiming to produce reliable grades for each essay in a peer assessment scenario.&lt;br /&gt;
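The iterative grade/weight calculation described above can be sketched in a few lines of Ruby (a minimal illustration under our own assumptions; the marks matrix, the fixed iteration count, and the deviation-based update are ours, not Expertiza's production code):

```ruby
# Minimal sketch of the iterative fix-point: essay grades are weighted averages
# of reviewer marks, and reviewer weights shrink as a reviewer's marks deviate
# from those consensus grades; both are recomputed for a few iterations.
def hamer_fixpoint(marks, iterations = 6)
  weights = Array.new(marks.length, 1.0) # initially all reviewers weigh equally
  grades = []

  iterations.times do
    # Essay grades: weighted average of the marks from all reviewers
    grades = (0...marks.first.length).map do |j|
      weighted_sum = marks.each_with_index.sum { |row, i| weights[i] * row[j] }
      weighted_sum / weights.sum
    end

    # Mean squared deviation of each reviewer's marks from the consensus
    deviations = marks.map do |row|
      row.each_with_index.sum { |m, j| (m - grades[j])**2 } / row.length.to_f
    end

    # New weights: inversely proportional to deviation, with logarithmic
    # dampening applied once a weight exceeds twice the baseline
    avg_dev = deviations.sum / deviations.length
    weights = deviations.map do |d|
      w = avg_dev / d
      w <= 2 ? w : 2 + Math.log(w - 1)
    end
  end

  [grades, weights]
end

# Reviewer 3 grades essays opposite to the consensus and ends up down-weighted
grades, weights = hamer_fixpoint([[10, 1, 5], [9, 2, 5], [1, 10, 5]])
```

In practice convergence is quick, matching the paper's observation that four to six iterations suffice to reach a fix-point.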
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios ==&lt;br /&gt;
We assumed 9 reviewers, each reviewing 4 submissions, to cover the following test scenarios:&lt;br /&gt;
* 3 cases where reviewers give credible scores (passing1, passing2, passing3)&lt;br /&gt;
* a case where a reviewer gives the max score (10) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the min score (0) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the median score (5) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the same score to all submissions (should be flagged)&lt;br /&gt;
&lt;br /&gt;
=== Object Creation ===&lt;br /&gt;
Below is the Input object for tests that cover all the above scenarios:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
INPUTS = {&lt;br /&gt;
    &amp;quot;submission1&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 9&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission2&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission3&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 7,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission4&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 6,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Expected Hamer Values ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
EXPECTED = {&lt;br /&gt;
    &amp;quot;Hamer&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 2.65,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 2.41,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 1.03,&lt;br /&gt;
        &amp;quot;incomplete_review&amp;quot;: 2.31,&lt;br /&gt;
        &amp;quot;max_incomplete&amp;quot;: 2.57,&lt;br /&gt;
        &amp;quot;min_incomplete&amp;quot;: 2.48,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;:1.58,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 2.17,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 1.73,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 1.23,&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Verify the correctness of the reputation web server's Hamer values ==&lt;br /&gt;
&lt;br /&gt;
We test the original reputation web server's algorithm with our scenarios and verify if they match the expected values. The peerlogic server can be accessed via API calls to URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.&lt;br /&gt;
It uses two algorithms: the Hamer-peer algorithm and the Lauw-peer algorithm. Our scope for this project is to test the Hamer values, as previous work has already established that the Hamer algorithm suits our use case better.&lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe &amp;quot;Expertiza&amp;quot; do&lt;br /&gt;
    it &amp;quot;should return the correct Hamer calculation&amp;quot; do&lt;br /&gt;
        uri = URI('http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms')&lt;br /&gt;
    &lt;br /&gt;
        response = Net::HTTP.post(uri, INPUTS, 'Content-Type' =&amp;gt; 'application/json')&lt;br /&gt;
    &lt;br /&gt;
        expect(JSON.parse(response.body)[&amp;quot;Hamer&amp;quot;]).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results ===&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
&lt;br /&gt;
We can see here that the values returned by this webservice don't match the expected values. Hence, we conclude that the webservice is not implemented correctly.&lt;br /&gt;
&lt;br /&gt;
== Objective 3: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values. ==&lt;br /&gt;
&lt;br /&gt;
As seen above, the values returned by the reputation server do not match the expected values. Hence, we concluded that the PeerLogic Webservice is implemented incorrectly. In this phase, we implemented the algorithm in Ruby as a function in a controller file: /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
&lt;br /&gt;
=== Changes made in implementation ===&lt;br /&gt;
* Coded this algorithm in Ruby in a controller.&lt;br /&gt;
* Included a way for the algorithm to handle nil values.&lt;br /&gt;
&lt;br /&gt;
=== Code Snippet ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'json'&lt;br /&gt;
require 'uri'&lt;br /&gt;
require 'net/http'&lt;br /&gt;
require 'openssl'&lt;br /&gt;
require 'base64'&lt;br /&gt;
&lt;br /&gt;
# Expertiza allows student work to be peer-reviewed, since peers can provide&lt;br /&gt;
# more feedback than the instructor can.&lt;br /&gt;
# However, if we want to assure that all students receive competent feedback,&lt;br /&gt;
# or even use peer-assigned grades,&lt;br /&gt;
# we need a way to judge which peer reviewers are most credible. The solution&lt;br /&gt;
# is the reputation system.&lt;br /&gt;
# Reputation systems have been deployed as web services, peer-review&lt;br /&gt;
# researchers will be able to use them to calculate scores on assignments,&lt;br /&gt;
# both past and present (past data can be used to tune the algorithms).&lt;br /&gt;
#&lt;br /&gt;
# This file is the controller to calculate the reputation scores.&lt;br /&gt;
# A 'reputation' measures how close a reviewer's scores are to other reviewers'&lt;br /&gt;
# scores.&lt;br /&gt;
# This controller implements the calculation of reputation scores.&lt;br /&gt;
class ReputationWebServiceController &amp;lt; ApplicationController&lt;br /&gt;
  include AuthorizationHelper&lt;br /&gt;
&lt;br /&gt;
  # Method: action_allowed&lt;br /&gt;
  # This method checks if the currently authenticated user has the authorization&lt;br /&gt;
  # to perform certain actions&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   true if the user has privileges to perform the action else returns false&lt;br /&gt;
  def action_allowed?&lt;br /&gt;
    current_user_has_ta_privileges?&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_max_question_score&lt;br /&gt;
  # This method receives a set of answers and gets the maximum question score&lt;br /&gt;
  # Params&lt;br /&gt;
  #   answers: set of answers&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   if no error returns max_question_score of first question else 1&lt;br /&gt;
  def get_max_question_score(answers)&lt;br /&gt;
    begin&lt;br /&gt;
      answers.first.question.questionnaire.max_question_score&lt;br /&gt;
    rescue StandardError&lt;br /&gt;
      1&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_valid_answers_for_response&lt;br /&gt;
  # This method receives response and filters the valid answers list of the&lt;br /&gt;
  # response ID&lt;br /&gt;
  # Params&lt;br /&gt;
  #   response&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   set of valid answers (returns nil if empty)&lt;br /&gt;
  def get_valid_answers_for_response(response)&lt;br /&gt;
    answers = Answer.where(response_id: response.id)&lt;br /&gt;
    valid_answer = answers.select { |answer| (answer.question.type == 'Criterion') &amp;amp;&amp;amp; !answer.answer.nil? }&lt;br /&gt;
    valid_answer.empty? ? nil : valid_answer&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: calculate_peer_review_grade&lt;br /&gt;
  # This method calculates a cumulative review grade with respect to the set of valid answers&lt;br /&gt;
  # Params&lt;br /&gt;
  #   valid_answer: valid answer to get weight of the answer's question&lt;br /&gt;
  #   max_question_score: used to calculate maximum score for peer review grade&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   peer_review_grade&lt;br /&gt;
  def calculate_peer_review_grade(valid_answer, max_question_score)&lt;br /&gt;
    weighted_score_sum = valid_answer.map { |answer| answer.answer * answer.question.weight }.inject(:+)&lt;br /&gt;
    question_weight_sum = valid_answer.sum { |answer| answer.question.weight }&lt;br /&gt;
    peer_review_grade = 100.0 * weighted_score_sum / (question_weight_sum * max_question_score)&lt;br /&gt;
    peer_review_grade.round(4)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_peer_reviews_for_responses&lt;br /&gt;
  # This method calculates the peer review grade for each valid response&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reviewer_id: used to create respective element in the peer_review_grades_list&lt;br /&gt;
  #   team_id: used to create respective element in the peer_review_grades_list&lt;br /&gt;
  #   valid_response: to get the valid answer for each valid response&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   peer_review_grades_list&lt;br /&gt;
  def get_peer_reviews_for_responses(reviewer_id, team_id, valid_response)&lt;br /&gt;
    peer_review_grades_list = []&lt;br /&gt;
    valid_response.each do |response|&lt;br /&gt;
      valid_answer = get_valid_answers_for_response(response)&lt;br /&gt;
      next if valid_answer.nil?&lt;br /&gt;
&lt;br /&gt;
      review_grade = calculate_peer_review_grade(valid_answer, get_max_question_score(valid_answer))&lt;br /&gt;
      peer_review_grades_list &amp;lt;&amp;lt; [reviewer_id, team_id, review_grade]&lt;br /&gt;
    end&lt;br /&gt;
    peer_review_grades_list&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_peer_reviews&lt;br /&gt;
  # This method retrieves all the reviews for the submissions&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: used to retrieve response map&lt;br /&gt;
  #   round_num: used to retrieve round_num for the valid response&lt;br /&gt;
  #   has_topic: to get the topic condition&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: which corresponds to the return of&lt;br /&gt;
  #     get_peer_reviews_for_responses method and appended to the raw_data_array&lt;br /&gt;
  def get_peer_reviews(assignment_id_list, round_num, has_topic)&lt;br /&gt;
    raw_data_array = []&lt;br /&gt;
    ReviewResponseMap.where('reviewed_object_id in (?) and calibrate_to = ?', assignment_id_list, false).each do |response_map|&lt;br /&gt;
      reviewer = response_map.reviewer.user&lt;br /&gt;
      team = AssignmentTeam.find(response_map.reviewee_id)&lt;br /&gt;
      topic_condition = ((has_topic &amp;amp;&amp;amp; (SignedUpTeam.where(team_id: team.id).first.is_waitlisted == false)) || !has_topic)&lt;br /&gt;
      last_valid_response = response_map.response.select { |r| r.round == round_num }.max&lt;br /&gt;
      valid_response = [last_valid_response] unless last_valid_response.nil?&lt;br /&gt;
      if (topic_condition == true) &amp;amp;&amp;amp; !valid_response.nil? &amp;amp;&amp;amp; !valid_response.empty?&lt;br /&gt;
        raw_data_array += get_peer_reviews_for_responses(reviewer.id, team.id, valid_response)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
    raw_data_array&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_ids_list&lt;br /&gt;
  # This method maps each object to the corresponding object's ID&lt;br /&gt;
  # Params&lt;br /&gt;
  #   tables: any table&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   id in the tables&lt;br /&gt;
  def get_ids_list(tables)&lt;br /&gt;
    tables.map(&amp;amp;:id)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_scores&lt;br /&gt;
  # This method gets the quiz score of each participant for respective reviewee&lt;br /&gt;
  # Params&lt;br /&gt;
  #   team_ids: list of team IDs&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: which is a list of participant, reviewee and the participant's quiz score&lt;br /&gt;
  def get_scores(team_ids)&lt;br /&gt;
    quiz_questionnaires = QuizQuestionnaire.where('instructor_id in (?)', team_ids)&lt;br /&gt;
    quiz_questionnaire_ids = get_ids_list(quiz_questionnaires)&lt;br /&gt;
    raw_data_array = []&lt;br /&gt;
    QuizResponseMap.where('reviewed_object_id in (?)', quiz_questionnaire_ids).each do |response_map|&lt;br /&gt;
      quiz_score = response_map.quiz_score&lt;br /&gt;
      participant = Participant.find(response_map.reviewer_id)&lt;br /&gt;
      raw_data_array &amp;lt;&amp;lt; [participant.user_id, response_map.reviewee_id, quiz_score]&lt;br /&gt;
    end&lt;br /&gt;
    raw_data_array&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_quiz_score&lt;br /&gt;
  # This method gets the quiz score of assignments&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment IDs&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: returned by get_scores method, which is a list of participant,&lt;br /&gt;
  #     reviewee and the participant's quiz score&lt;br /&gt;
  def get_quiz_score(assignment_id_list)&lt;br /&gt;
    teams = AssignmentTeam.where('parent_id in (?)', assignment_id_list)&lt;br /&gt;
    team_ids = get_ids_list(teams)&lt;br /&gt;
    get_scores(team_ids)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_body&lt;br /&gt;
  # This method generates json body for the peer reviews and quiz scores&lt;br /&gt;
  # Params&lt;br /&gt;
  #   results: list of grades with corresponding team/participant ID,&lt;br /&gt;
  #     reviewee ID and their score&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: returns the formatted body after sorting the hash&lt;br /&gt;
  def generate_json_body(results)&lt;br /&gt;
    request_body = {}&lt;br /&gt;
    results.each_with_index do |record, _index|&lt;br /&gt;
      request_body['submission' + record[1].to_s] = {} unless request_body.key?('submission' + record[1].to_s)&lt;br /&gt;
      request_body['submission' + record[1].to_s]['stu' + record[0].to_s] = record[2]&lt;br /&gt;
    end&lt;br /&gt;
    # sort the 2-dimension hash&lt;br /&gt;
    request_body.each { |k, v| request_body[k] = v.sort.to_h }&lt;br /&gt;
    request_body.sort.to_h&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_for_peer_reviews&lt;br /&gt;
  # This method retrieves all the peer reviews associated with&lt;br /&gt;
  # the assignment id list by calling the get_peer_reviews method.&lt;br /&gt;
  # It then formats the peer-review list in JSON.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment ids to get quiz scores for&lt;br /&gt;
  #   round_num: round number of the review&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: request body populated with the formatted peer review data.&lt;br /&gt;
  def generate_json_for_peer_reviews(assignment_id_list, round_num = 2)&lt;br /&gt;
    has_topic = !SignUpTopic.where(assignment_id: assignment_id_list[0]).empty?&lt;br /&gt;
&lt;br /&gt;
    peer_reviews_list = get_peer_reviews(assignment_id_list, round_num, has_topic)&lt;br /&gt;
    request_body = generate_json_body(peer_reviews_list)&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_for_quiz_scores&lt;br /&gt;
  # This method accepts a list of assignment ids as an argument.&lt;br /&gt;
  # It then calls the get_quiz_score method on the list to get&lt;br /&gt;
  # maps of teams and scores for the given assignments.&lt;br /&gt;
  # The map is then formatted into JSON.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment ids to get quiz scores for&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: request body populated with quiz scores&lt;br /&gt;
  def generate_json_for_quiz_scores(assignment_id_list)&lt;br /&gt;
    participant_reviewee_map = get_quiz_score(assignment_id_list)&lt;br /&gt;
    request_body = generate_json_body(participant_reviewee_map)&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: client&lt;br /&gt;
  # This method is called when the url reputation_web_service/client&lt;br /&gt;
  # is hit using GET method.&lt;br /&gt;
  # This renders the client.html.erb&lt;br /&gt;
  # It also populates the instance variables to be used in the views&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def client&lt;br /&gt;
    @max_assignment_id = Assignment.last.id&lt;br /&gt;
    @assignment = Assignment.find(flash[:assignment_id]) rescue nil&lt;br /&gt;
    @another_assignment = Assignment.find(flash[:another_assignment_id]) rescue nil&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: update_participants_reputation&lt;br /&gt;
  # This method accepts the response body in the JSON format.&lt;br /&gt;
  # It then parses the JSON and updates the reputation scores of the&lt;br /&gt;
  # participants in the list.&lt;br /&gt;
  # If the algorithm is neither Hamer nor Lauw, the update step is skipped.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reputation_response: The response from the reputation web service&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def update_participants_reputation(reputation_response)&lt;br /&gt;
    JSON.parse(reputation_response.body.to_s).each do |reputation_algorithm, user_reputation_list|&lt;br /&gt;
      next unless %w[Hamer Lauw].include?(reputation_algorithm)&lt;br /&gt;
&lt;br /&gt;
      user_reputation_list.each do |user_id, reputation|&lt;br /&gt;
        Participant.find_by(user_id: user_id).update(reputation_algorithm.to_sym =&amp;gt; reputation) unless /leniency/ =~ user_id.to_s&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: process_response_body&lt;br /&gt;
  # This method gets the control after receiving a response from the server.&lt;br /&gt;
  # It receives the response body as an argument&lt;br /&gt;
  # It updates the instance variables related to the response.&lt;br /&gt;
  # It then calls the update_participants_reputation to update the reputation&lt;br /&gt;
  # scores received in the response body.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reputation_response: The response from the reputation web service&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def process_response_body(reputation_response)&lt;br /&gt;
    flash[:response] = reputation_response&lt;br /&gt;
    flash[:response_body] = reputation_response.body&lt;br /&gt;
    update_participants_reputation(reputation_response)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_expert_grades&lt;br /&gt;
  # It prepends the request body with the expert grades pertaining&lt;br /&gt;
  # to the default wiki contribution case of 754.&lt;br /&gt;
  # It receives the request body as an argument and prepends it&lt;br /&gt;
  # Params&lt;br /&gt;
  #   body: The request body to add the expert grades to&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   body prepended with the expert grades&lt;br /&gt;
  def add_expert_grades(body)&lt;br /&gt;
    flash[:additional_info] = 'add expert grades'&lt;br /&gt;
    case params[:assignment_id]&lt;br /&gt;
    when '754' # expert grades of Wiki contribution (754)&lt;br /&gt;
      body.prepend('&amp;quot;expert_grades&amp;quot;: {&amp;quot;submission25030&amp;quot;:95,&amp;quot;submission25031&amp;quot;:92,&amp;quot;submission25033&amp;quot;:88,&amp;quot;submission25034&amp;quot;:98,&amp;quot;submission25035&amp;quot;:100,&amp;quot;submission25037&amp;quot;:95,&amp;quot;submission25038&amp;quot;:95,&amp;quot;submission25039&amp;quot;:93,&amp;quot;submission25040&amp;quot;:96,&amp;quot;submission25041&amp;quot;:90,&amp;quot;submission25042&amp;quot;:100,&amp;quot;submission25046&amp;quot;:95,&amp;quot;submission25049&amp;quot;:90,&amp;quot;submission25050&amp;quot;:88,&amp;quot;submission25053&amp;quot;:91,&amp;quot;submission25054&amp;quot;:96,&amp;quot;submission25055&amp;quot;:94,&amp;quot;submission25059&amp;quot;:96,&amp;quot;submission25071&amp;quot;:85,&amp;quot;submission25082&amp;quot;:100,&amp;quot;submission25086&amp;quot;:95,&amp;quot;submission25097&amp;quot;:90,&amp;quot;submission25098&amp;quot;:85,&amp;quot;submission25102&amp;quot;:97,&amp;quot;submission25103&amp;quot;:94,&amp;quot;submission25105&amp;quot;:98,&amp;quot;submission25114&amp;quot;:95,&amp;quot;submission25115&amp;quot;:94},')&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_quiz_scores&lt;br /&gt;
  # It gets the assignment id list and generates the json on quiz scores of&lt;br /&gt;
  # those assignments.&lt;br /&gt;
  # The formatted quiz-score string is then prepended to the request body,&lt;br /&gt;
  # received as an argument, and the body is returned to prepare_request_body.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   body: The request body to add the quiz scores to&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   body prepended with the quiz scores&lt;br /&gt;
  def add_quiz_scores(body)&lt;br /&gt;
    flash[:additional_info] = 'add quiz scores'&lt;br /&gt;
    assignment_id_list_quiz = get_assignment_id_list(params[:assignment_id].to_i, params[:another_assignment_id].to_i)&lt;br /&gt;
    quiz_str =  generate_json_for_quiz_scores(assignment_id_list_quiz).to_json&lt;br /&gt;
    quiz_str[0] = '' # remove first {&lt;br /&gt;
    quiz_str.prepend('&amp;quot;quiz_scores&amp;quot;:{') # add quiz_scores tag&lt;br /&gt;
    quiz_str += ','&lt;br /&gt;
    quiz_str = quiz_str.gsub('&amp;quot;N/A&amp;quot;', '20.0') # replace N/A values with 20&lt;br /&gt;
    body.prepend(quiz_str)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_hamer_reputation_values&lt;br /&gt;
  # This method sets flash[:additional_info].&lt;br /&gt;
  # This method is called by the prepare_request_body method&lt;br /&gt;
  # when params receive instruction through the corresponding view's checkbox.&lt;br /&gt;
  # THIS METHOD IS NOT IMPLEMENTED&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_hamer_reputation_values&lt;br /&gt;
    flash[:additional_info] = 'add initial hamer reputation values'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_lauw_reputation_values&lt;br /&gt;
  # This method sets flash[:additional_info].&lt;br /&gt;
  # This method is called by the prepare_request_body method&lt;br /&gt;
  # when params receive instruction through the corresponding view's checkbox.&lt;br /&gt;
  # THIS METHOD IS NOT IMPLEMENTED&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_lauw_reputation_values&lt;br /&gt;
    flash[:additional_info] = 'add initial lauw reputation values'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_assignment_id_list&lt;br /&gt;
  # This method on receipt of individual assignment IDs returns a list with all&lt;br /&gt;
  # the assignment IDs appended into a data structure&lt;br /&gt;
  # This function accepts 2 arguments, with the second argument being optional,&lt;br /&gt;
  # and returns the list assignment_id_list&lt;br /&gt;
  # If the second argument is 0, it is not appended to the list.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_one: first assignment id (required)&lt;br /&gt;
  #   assignment_id_two: second assignment id (optional)&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   assignment_id_list: list containing one or two assignment ids&lt;br /&gt;
  def get_assignment_id_list(assignment_id_one, assignment_id_two = 0)&lt;br /&gt;
    assignment_id_list = []&lt;br /&gt;
    assignment_id_list &amp;lt;&amp;lt; assignment_id_one&lt;br /&gt;
    assignment_id_list &amp;lt;&amp;lt; assignment_id_two unless assignment_id_two.zero?&lt;br /&gt;
    assignment_id_list&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_flash_messages&lt;br /&gt;
  # This method sets the flash messages to pass on to the next request, i.e.,&lt;br /&gt;
  # the request redirected to the client&lt;br /&gt;
  # Params&lt;br /&gt;
  #   post_req: This contains the entire post_req that needs to be sent to the reputation&lt;br /&gt;
  #     webservice&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_flash_messages(post_req)&lt;br /&gt;
    flash[:assignment_id] = params[:assignment_id]&lt;br /&gt;
    flash[:round_num] = params[:round_num]&lt;br /&gt;
    flash[:algorithm] = params[:algorithm]&lt;br /&gt;
    flash[:another_assignment_id] = params[:another_assignment_id]&lt;br /&gt;
    flash[:request_body] = post_req.body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_additional_info_details&lt;br /&gt;
  # This method sets the additional info details based on the options&lt;br /&gt;
  # selected in the additional information section. We populate the request&lt;br /&gt;
  # based on the selections&lt;br /&gt;
  # Params&lt;br /&gt;
  #   post_req: This contains the entire post_req that needs to be sent to the reputation&lt;br /&gt;
  #     webservice&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_additional_info_details(post_req)&lt;br /&gt;
    if params[:checkbox][:expert_grade] == 'Add expert grades'&lt;br /&gt;
      add_expert_grades(post_req.body)&lt;br /&gt;
    elsif params[:checkbox][:hamer] == 'Add initial Hamer reputation values'&lt;br /&gt;
      add_hamer_reputation_values&lt;br /&gt;
    elsif params[:checkbox][:lauw] == 'Add initial Lauw reputation values'&lt;br /&gt;
      add_lauw_reputation_values&lt;br /&gt;
    elsif params[:checkbox][:quiz] == 'Add quiz scores'&lt;br /&gt;
      add_quiz_scores(post_req.body)&lt;br /&gt;
    else&lt;br /&gt;
      flash[:additional_info] = ''&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: prepare_request_body&lt;br /&gt;
  # This method is responsible for preparing the request body in a proper format&lt;br /&gt;
  # to send to the server. It populates the assignment scores and peer review&lt;br /&gt;
  # scores. It also populates the flash messages to send to the next request&lt;br /&gt;
  # It finally sends the prepared request body back to the send_post_request&lt;br /&gt;
  # method.&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def prepare_request_body&lt;br /&gt;
    reputation_web_service_path = URI.parse(WEBSERVICE_CONFIG['reputation_web_service_url']).path&lt;br /&gt;
    post_req = Net::HTTP::Post.new(reputation_web_service_path, { 'Content-Type' =&amp;gt; 'application/json', 'charset' =&amp;gt; 'utf-8' })&lt;br /&gt;
    curr_assignment_id = (params[:assignment_id].empty? ? '754' : params[:assignment_id])&lt;br /&gt;
    assignment_id_list_peers = get_assignment_id_list(curr_assignment_id, params[:another_assignment_id].to_i)&lt;br /&gt;
&lt;br /&gt;
    post_req.body = generate_json_for_peer_reviews(assignment_id_list_peers, params[:round_num].to_i).to_json&lt;br /&gt;
&lt;br /&gt;
    post_req.body[0] = '' # remove the first '{'&lt;br /&gt;
    add_additional_info_details post_req&lt;br /&gt;
    post_req.body.prepend('{')&lt;br /&gt;
    add_flash_messages post_req&lt;br /&gt;
    post_req&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: send_post_request&lt;br /&gt;
  # This method calls the prepare_request_body function to get a prepared&lt;br /&gt;
  # request body in proper format to send to the server.&lt;br /&gt;
  # It populates the assignment scores and peer review&lt;br /&gt;
  # scores. It also populates the flash messages to send to the next request&lt;br /&gt;
  # We redirect to the client url to display the results.&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def send_post_request&lt;br /&gt;
    post_req = prepare_request_body&lt;br /&gt;
    reputation_web_service_hostname = URI.parse(WEBSERVICE_CONFIG['reputation_web_service_url']).host&lt;br /&gt;
    reputation_response = Net::HTTP.new(reputation_web_service_hostname).start { |http| http.request(post_req) }&lt;br /&gt;
    if %w[400 500].include?(reputation_response.code)&lt;br /&gt;
      flash[:error] = 'Post Request Failed'&lt;br /&gt;
    else&lt;br /&gt;
      process_response_body(reputation_response)&lt;br /&gt;
    end&lt;br /&gt;
    redirect_to action: 'client'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: calculate_reputation_score&lt;br /&gt;
  # This method calculates the reputation weights for each reviewer based on the provided review data.&lt;br /&gt;
  # It first computes the consensus (average) grade for each submission.&lt;br /&gt;
  # Then, it calculates the delta R value per reviewer: the mean squared&lt;br /&gt;
  # difference between the reviewer's grades and the consensus grades.&lt;br /&gt;
  # Next, it calculates the weight prime values based on the delta R values.&lt;br /&gt;
  # Finally, it dampens the weight prime values to obtain the reputation weights.&lt;br /&gt;
  #&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reviews: an array with one row per reviewer, each row holding that&lt;br /&gt;
  #     reviewer's grades for the submissions (nil for an incomplete review)&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   An array of reputation weights, one per reviewer, indicating their reputation in the system.&lt;br /&gt;
  def calculate_reputation_score(reviews)&lt;br /&gt;
    # Initialize arrays to store intermediate values&lt;br /&gt;
    grades = []&lt;br /&gt;
    delta_r = []&lt;br /&gt;
    weight_prime = []&lt;br /&gt;
    weight = []&lt;br /&gt;
&lt;br /&gt;
    # Calculate the consensus (average) grade per submission,&lt;br /&gt;
    # skipping nil values left by incomplete reviews&lt;br /&gt;
    num_submissions = reviews.map { |marks| marks.length }.max&lt;br /&gt;
    (0...num_submissions).each do |submission_index|&lt;br /&gt;
      submission_marks = reviews.map { |marks| marks[submission_index] }.compact&lt;br /&gt;
      grades &amp;lt;&amp;lt; submission_marks.sum.to_f / submission_marks.length&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    # Calculate delta R per reviewer&lt;br /&gt;
    reviews.each do |reviewer_marks|&lt;br /&gt;
      reviewer_delta_r = 0&lt;br /&gt;
      completed_reviews = 0&lt;br /&gt;
      reviewer_marks.each_with_index do |grade, submission_index|&lt;br /&gt;
        # Skip nil values left by incomplete reviews&lt;br /&gt;
        next if grade.nil?&lt;br /&gt;
        reviewer_delta_r += (grade - grades[submission_index]) ** 2&lt;br /&gt;
        completed_reviews += 1&lt;br /&gt;
      end&lt;br /&gt;
      delta_r &amp;lt;&amp;lt; reviewer_delta_r / completed_reviews&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    # Calculate weight prime&lt;br /&gt;
    average_delta_r = delta_r.sum / delta_r.length.to_f&lt;br /&gt;
&lt;br /&gt;
    delta_r.each do |reviewer_delta_r|&lt;br /&gt;
      weight_prime &amp;lt;&amp;lt; average_delta_r / reviewer_delta_r&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    # Calculate reputation weight, dampening values above 2 logarithmically&lt;br /&gt;
    weight_prime.each do |reviewer_weight_prime|&lt;br /&gt;
      if reviewer_weight_prime &amp;lt;= 2&lt;br /&gt;
        weight &amp;lt;&amp;lt; reviewer_weight_prime.round(2)&lt;br /&gt;
      else&lt;br /&gt;
        weight &amp;lt;&amp;lt; (2 + Math.log(reviewer_weight_prime - 1)).round(2)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    # Return the reputation weights&lt;br /&gt;
    weight&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
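As a quick sanity check, the weighting steps in the listing above can be exercised standalone on a tiny review matrix. The following is a condensed sketch of the same steps; the `hamer_weights` name and the sample grades are illustrative, not part of Expertiza:&lt;br /&gt;
&lt;br /&gt;
```ruby
# Condensed, self-contained sketch of the Hamer weighting steps above.
# Rows are reviewers, columns are submissions; nil marks a skipped review.
def hamer_weights(reviews)
  num_submissions = reviews.map { |marks| marks.length }.max
  # Step 1: consensus (average) grade per submission
  consensus = (0...num_submissions).map do |j|
    column = reviews.map { |marks| marks[j] }.compact
    column.sum.to_f / column.length
  end
  # Step 2: delta R per reviewer (mean squared deviation from consensus)
  delta_r = reviews.map do |marks|
    pairs = marks.each_with_index.reject { |grade, _| grade.nil? }
    pairs.sum { |grade, j| (grade - consensus[j])**2 } / pairs.length
  end
  # Step 3: weight prime, inversely proportional to the deviation
  average_delta_r = delta_r.sum / delta_r.length
  weight_primes = delta_r.map { |d| average_delta_r / d }
  # Step 4: dampen weights above 2 logarithmically
  weight_primes.map { |w| w > 2 ? (2 + Math.log(w - 1)).round(2) : w.round(2) }
end

reviews = [
  [10, 10, 10], # max-to-all reviewer: should get a low weight
  [8, 3, 6],    # credible reviewer
  [9, 2, 5]     # credible reviewer
]
puts hamer_weights(reviews).inspect # [0.51, 2.69, 1.38]
```
&lt;br /&gt;
The rogue max-to-all reviewer ends up with the lowest weight, which is the behavior the test scenarios below are designed to flag.&lt;br /&gt;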
&lt;br /&gt;
== Objective 4: Validate the accuracy of the newly implemented Hamer algorithm ==&lt;br /&gt;
&lt;br /&gt;
We test the newly implemented Hamer algorithm function with our scenarios and verify that the results match the expected values. &lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe ReputationWebServiceController do&lt;br /&gt;
    it &amp;quot;should calculate correct Hamer calculation&amp;quot; do&lt;br /&gt;
      weights = ReputationWebServiceController.new.calculate_reputation_score(reviews)&lt;br /&gt;
      keys = [&amp;quot;maxtoall&amp;quot;, &amp;quot;mintoall&amp;quot;, &amp;quot;mediantoall&amp;quot;, &amp;quot;incomplete_review&amp;quot;, &amp;quot;sametoall&amp;quot;, &amp;quot;passing1&amp;quot;, &amp;quot;passing2&amp;quot;, &amp;quot;passing3&amp;quot;]&lt;br /&gt;
      rounded_weights = weights.map { |w| w.round(1) }&lt;br /&gt;
      result_hash = keys.zip(rounded_weights).to_h&lt;br /&gt;
      expect(result_hash).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results===&lt;br /&gt;
&lt;br /&gt;
[[File:Results hamer new.jpeg]]&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
==GitHub Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [???/]&lt;br /&gt;
&lt;br /&gt;
Link to Github Project page: [https://github.com/users/Prachit99/projects/1]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [????]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:Results_hamer_new.jpeg&amp;diff=154651</id>
		<title>File:Results hamer new.jpeg</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:Results_hamer_new.jpeg&amp;diff=154651"/>
		<updated>2024-03-25T04:05:34Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154643</id>
		<title>CSC/ECE 517 Spring 2024 - E2412. Testing for hamer.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154643"/>
		<updated>2024-03-25T03:58:01Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Changes made in implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 Program 3: First OSS project E2412. Testing for hamer.rb&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
&lt;br /&gt;
The practice of using student feedback on assignments as a grading tool is gaining traction among university professors and courses. This approach not only saves instructors and teaching assistants considerable time but also fosters a deeper understanding of assignments among students as they evaluate their peers' work. However, there is a concern that some students may not take their reviewing responsibilities seriously, potentially skewing the grading process by assigning extreme scores such as 100 or 0 arbitrarily. To address this issue, the Hamer algorithm was developed to assess the credibility and accuracy of reviewers. It generates reputation weights for each reviewer, which instructors can use to gauge their reliability or incorporate into grading calculations. Our goal here is to test this Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values.&lt;br /&gt;
* Verify the correctness of the reputation web server's Hamer values by accessing the URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.&lt;br /&gt;
* Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values.&lt;br /&gt;
* Validate the accuracy of the newly implemented Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* reimplemented algorithm: /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
* test file: /spec/controllers/reputation_mock_web_server_hamer.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Hamer Algorithm ==&lt;br /&gt;
&lt;br /&gt;
The grading algorithm described in the paper is designed to provide a reward to reviewers who participate effectively by allocating a portion of the assignment mark to the review, with the review mark reflecting the quality of the grading. Here's an explanation of the algorithm:&lt;br /&gt;
&lt;br /&gt;
1. Review Allocation: Each reviewer is assigned a number of essays to grade. The paper suggests assigning at least five essays, with ten being ideal. Assuming each review takes 20 minutes, ten reviews can be completed in about three and a half hours.&lt;br /&gt;
&lt;br /&gt;
2. Grading Process:&lt;br /&gt;
* Once the reviewing is complete, grades are generated for each essay and weights are assigned to each reviewer.&lt;br /&gt;
* The essay grades are computed by averaging the individual grades from all the reviewers assigned to that essay.&lt;br /&gt;
* Initially, all reviewers are given equal weight in the averaging process.&lt;br /&gt;
* The algorithm assumes that some reviewers will perform better than others. It measures this by comparing the grades assigned by each reviewer to the averaged grades. The larger the difference between the assigned and averaged grades, the more out of step the reviewer is considered with the consensus view of the class.&lt;br /&gt;
* The algorithm adjusts the weighting of the reviewers based on this difference. Reviewers who are closer to the consensus view are given higher weights, while those who deviate significantly are given lower weights.&lt;br /&gt;
&lt;br /&gt;
3. Iterative Process:&lt;br /&gt;
* The calculation of grades and weights is an iterative process. Each time the grades are calculated, the weights need to be updated, and each change in the weights affects the grades.&lt;br /&gt;
* Convergence occurs quickly, typically requiring four to six iterations before a solution (a &amp;quot;fix-point&amp;quot;) is reached.&lt;br /&gt;
&lt;br /&gt;
4. Weight Adjustment:&lt;br /&gt;
* The weights assigned to reviewers are adjusted based on the difference between the assigned and averaged grades. Reviewers with larger discrepancies have their weights adjusted inversely proportional to this difference.&lt;br /&gt;
* To prevent excessively large weights, a logarithmic dampening function is applied, allowing weights to rise to twice the class average before further increases are awarded sparingly.&lt;br /&gt;
&lt;br /&gt;
5. Properties:&lt;br /&gt;
* The algorithm aims to identify and diminish the impact of &amp;quot;rogue&amp;quot; reviewers who may inject random or arbitrary grades into the peer assessment process.&lt;br /&gt;
* By adjusting reviewer weights based on their grading accuracy, the algorithm aims to improve the reliability of the grading process in the presence of such rogue reviewers.&lt;br /&gt;
&lt;br /&gt;
Overall, the algorithm seeks to balance the contributions of different reviewers based on the accuracy of their grading, ultimately aiming to produce reliable grades for each essay in a peer assessment scenario.&lt;br /&gt;
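The iterative fix-point described above can be sketched as follows. This is a simplified illustration, not the Expertiza implementation: the names, the fixed iteration count, and the division guard are our own, and the final logarithmic dampening step is omitted for brevity:&lt;br /&gt;
&lt;br /&gt;
```ruby
# Simplified sketch of the iterative fix-point: repeatedly recompute
# weighted essay grades and reviewer weights until they stabilize
# (the paper reports convergence in four to six iterations).
def iterate_hamer(reviews, iterations = 6)
  weights = Array.new(reviews.length, 1.0) # all reviewers start equal
  grades = []
  iterations.times do
    # Weighted average grade per essay, using the current reviewer weights
    grades = (0...reviews.first.length).map do |j|
      total = 0.0
      weight_sum = 0.0
      reviews.each_with_index do |marks, i|
        total += weights[i] * marks[j]
        weight_sum += weights[i]
      end
      total / weight_sum
    end
    # Each reviewer's mean squared deviation from the weighted consensus
    deviations = reviews.map do |marks|
      marks.each_with_index.sum { |g, j| (g - grades[j])**2 } / marks.length
    end
    # New weights: inversely proportional to the deviation
    average_deviation = deviations.sum / deviations.length
    weights = deviations.map { |d| average_deviation / [d, 1e-9].max }
  end
  [grades, weights]
end

grades, weights = iterate_hamer([[10, 10, 10], [8, 3, 6], [9, 2, 5]])
```
&lt;br /&gt;
Each pass pulls the consensus grades toward the reviewers who agree with each other and pushes down the weight of the reviewer who deviates, which is the self-correcting behavior the paper describes.&lt;br /&gt;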
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios ==&lt;br /&gt;
We assume 10 reviewers, each reviewing up to 4 submissions, to cover the following test scenarios:&lt;br /&gt;
* 3 cases where reviewers give credible scores (passing1, passing2, passing3)&lt;br /&gt;
* a case where a reviewer gives the maximum score (10) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the minimum score (1) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the median score (5) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the same score to all submissions (should be flagged)&lt;br /&gt;
* cases where a reviewer leaves some reviews incomplete (incomplete_review, max_incomplete, min_incomplete)&lt;br /&gt;
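Expressed as one row of grades per reviewer, the scenarios correspond to the rows below. The values are taken from the input object in the next section; the `scenario_reviews` name is ours, and the incomplete-review cases (which contain nil entries) are omitted here for brevity:&lt;br /&gt;
&lt;br /&gt;
```ruby
# One row of grades per reviewer across the 4 submissions (0-10 scale).
scenario_reviews = {
  'maxtoall'    => [10, 10, 10, 10], # always the maximum: should be flagged
  'mintoall'    => [1, 1, 1, 1],     # always the minimum: should be flagged
  'mediantoall' => [5, 5, 5, 5],     # always the median: should be flagged
  'sametoall'   => [3, 3, 3, 3],     # same grade everywhere: should be flagged
  'passing1'    => [10, 3, 7, 6],    # credible grading
  'passing2'    => [10, 2, 4, 4],    # credible grading
  'passing3'    => [9, 4, 5, 5]      # credible grading
}
puts scenario_reviews.keys.inspect
```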
&lt;br /&gt;
=== Object Creation ===&lt;br /&gt;
Below is the Input object for tests that cover all the above scenarios:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
INPUTS_new = {&lt;br /&gt;
    &amp;quot;submission1&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 9&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission2&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;incomplete_review&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 2,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission3&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 7,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    },&lt;br /&gt;
      &amp;quot;submission4&amp;quot;: {&lt;br /&gt;
    &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
    &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
    &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
    &amp;quot;sametoall&amp;quot;:3,&lt;br /&gt;
    &amp;quot;passing1&amp;quot;: 6,&lt;br /&gt;
    &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
    &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
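The reimplemented scoring method consumes one row of grades per reviewer, while the input object above is keyed by submission. A small sketch of the conversion; the `reviews_by_reviewer` helper name is ours, not part of Expertiza:&lt;br /&gt;
&lt;br /&gt;
```ruby
require 'json'

# Convert a submission-keyed hash ({submission => {reviewer => grade}})
# into one row of grades per reviewer, with nil where a review is missing.
def reviews_by_reviewer(inputs_json)
  submissions = JSON.parse(inputs_json)
  reviewers = submissions.values.flat_map { |marks| marks.keys }.uniq
  reviewers.map do |reviewer|
    submissions.values.map { |marks| marks[reviewer] } # nil if absent
  end
end

sample = {
  'submission1' => { 'passing1' => 10, 'maxtoall' => 10 },
  'submission2' => { 'passing1' => 3, 'maxtoall' => 10 }
}.to_json
puts reviews_by_reviewer(sample).inspect # [[10, 3], [10, 10]]
```
&lt;br /&gt;
The nil entries this produces for skipped submissions are exactly what the nil handling in the reimplemented method is there to absorb.&lt;br /&gt;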
&lt;br /&gt;
=== Expected Hamer Values ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
EXPECTED = {&lt;br /&gt;
    &amp;quot;Hamer&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 2.65,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 2.41,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 1.03,&lt;br /&gt;
        &amp;quot;incomplete_review&amp;quot;: 2.31,&lt;br /&gt;
        &amp;quot;max_incomplete&amp;quot;: 2.57,&lt;br /&gt;
        &amp;quot;min_incomplete&amp;quot;: 2.48,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;:1.58,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 2.17,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 1.73,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 1.23,&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Verify the correctness of the reputation web server's Hamer values ==&lt;br /&gt;
&lt;br /&gt;
We test the original reputation web server's algorithm with our scenarios and verify whether the results match the expected values. The PeerLogic server can be accessed via API calls to the URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.&lt;br /&gt;
It supports two algorithms: the Hamer-peer algorithm and the Lauw-peer algorithm. Our scope for this project is to test the Hamer values, since previous work has established that the Hamer algorithm suits our use case better.&lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe &amp;quot;Expertiza&amp;quot; do&lt;br /&gt;
    it &amp;quot;should return the correct Hamer calculation&amp;quot; do&lt;br /&gt;
        uri = URI('http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms')&lt;br /&gt;
    &lt;br /&gt;
        response = Net::HTTP.post(uri, INPUTS, 'Content-Type' =&amp;gt; 'application/json')&lt;br /&gt;
    &lt;br /&gt;
        expect(JSON.parse(response.body)[&amp;quot;Hamer&amp;quot;]).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results ===&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
&lt;br /&gt;
We can see here that the values returned by this web service do not match the expected values. Hence, we conclude that the web service is not implemented correctly.&lt;br /&gt;
&lt;br /&gt;
== Objective 3: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values. ==&lt;br /&gt;
&lt;br /&gt;
As seen above, the values returned by the reputation server do not match the expected values. Hence, we concluded that the PeerLogic web service is implemented incorrectly. In this phase, we reimplemented the algorithm in Ruby as a function in the controller file /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
&lt;br /&gt;
=== Changes made in implementation ===&lt;br /&gt;
* Coded the algorithm in Ruby as a controller method.&lt;br /&gt;
* Included handling for nil values left by incomplete reviews.&lt;br /&gt;
&lt;br /&gt;
=== Code Snippet ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'json'&lt;br /&gt;
require 'uri'&lt;br /&gt;
require 'net/http'&lt;br /&gt;
require 'openssl'&lt;br /&gt;
require 'base64'&lt;br /&gt;
&lt;br /&gt;
# Expertiza allows student work to be peer-reviewed, since peers can provide&lt;br /&gt;
# more feedback than the instructor can.&lt;br /&gt;
# However, if we want to assure that all students receive competent feedback,&lt;br /&gt;
# or even use peer-assigned grades,&lt;br /&gt;
# we need a way to judge which peer reviewers are most credible. The solution&lt;br /&gt;
# is the reputation system.&lt;br /&gt;
# Because reputation systems have been deployed as web services, peer-review&lt;br /&gt;
# researchers are able to use them to calculate scores on assignments,&lt;br /&gt;
# both past and present (past data can be used to tune the algorithms).&lt;br /&gt;
#&lt;br /&gt;
# This file is the controller to calculate the reputation scores.&lt;br /&gt;
# A 'reputation' measures how close a reviewer's scores are to other reviewers'&lt;br /&gt;
# scores.&lt;br /&gt;
# This controller implements the calculation of reputation scores.&lt;br /&gt;
class ReputationWebServiceController &amp;lt; ApplicationController&lt;br /&gt;
  include AuthorizationHelper&lt;br /&gt;
&lt;br /&gt;
  # Method: action_allowed&lt;br /&gt;
  # This method checks if the currently authenticated user has the authorization&lt;br /&gt;
  # to perform certain actions&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   true if the user has privileges to perform the action else returns false&lt;br /&gt;
  def action_allowed?&lt;br /&gt;
    current_user_has_ta_privileges?&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_max_question_score&lt;br /&gt;
  # This method receives a set of answers and gets the maximum question score&lt;br /&gt;
  # Params&lt;br /&gt;
  #   answers: set of answers&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   if no error returns max_question_score of first question else 1&lt;br /&gt;
  def get_max_question_score(answers)&lt;br /&gt;
    begin&lt;br /&gt;
      answers.first.question.questionnaire.max_question_score&lt;br /&gt;
    rescue StandardError&lt;br /&gt;
      1&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_valid_answers_for_response&lt;br /&gt;
  # This method receives response and filters the valid answers list of the&lt;br /&gt;
  # response ID&lt;br /&gt;
  # Params&lt;br /&gt;
  #   response&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   set of valid answers (returns nil if empty)&lt;br /&gt;
  def get_valid_answers_for_response(response)&lt;br /&gt;
    answers = Answer.where(response_id: response.id)&lt;br /&gt;
    valid_answer = answers.select { |answer| (answer.question.type == 'Criterion') &amp;amp;&amp;amp; !answer.answer.nil? }&lt;br /&gt;
    valid_answer.empty? ? nil : valid_answer&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: calculate_peer_review_grade&lt;br /&gt;
  # This method calculates a cumulative review grade with respect to the set of valid answers&lt;br /&gt;
  # Params&lt;br /&gt;
  #   valid_answer: valid answer to get weight of the answer's question&lt;br /&gt;
  #   max_question_score: used to calculate maximum score for peer review grade&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   peer_review_grade&lt;br /&gt;
  def calculate_peer_review_grade(valid_answer, max_question_score)&lt;br /&gt;
    weighted_score_sum = valid_answer.map { |answer| answer.answer * answer.question.weight }.inject(:+)&lt;br /&gt;
    question_weight_sum = valid_answer.sum { |answer| answer.question.weight }&lt;br /&gt;
    peer_review_grade = 100.0 * weighted_score_sum / (question_weight_sum * max_question_score)&lt;br /&gt;
    peer_review_grade.round(4)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_peer_reviews_for_responses&lt;br /&gt;
  # This method calculates the peer review grade for each valid response&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reviewer_id: used to create respective element in the peer_review_grades_list&lt;br /&gt;
  #   team_id: used to create respective element in the peer_review_grades_list&lt;br /&gt;
  #   valid_response: to get the valid answer for each valid response&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   peer_review_grades_list&lt;br /&gt;
  def get_peer_reviews_for_responses(reviewer_id, team_id, valid_response)&lt;br /&gt;
    peer_review_grades_list = []&lt;br /&gt;
    valid_response.each do |response|&lt;br /&gt;
      valid_answer = get_valid_answers_for_response(response)&lt;br /&gt;
      next if valid_answer.nil?&lt;br /&gt;
&lt;br /&gt;
      review_grade = calculate_peer_review_grade(valid_answer, get_max_question_score(valid_answer))&lt;br /&gt;
      peer_review_grades_list &amp;lt;&amp;lt; [reviewer_id, team_id, review_grade]&lt;br /&gt;
    end&lt;br /&gt;
    peer_review_grades_list&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_peer_reviews&lt;br /&gt;
  # This method retrieves all the reviews for the submissions&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: used to retrieve response map&lt;br /&gt;
  #   round_num: used to retrieve round_num for the valid response&lt;br /&gt;
  #   has_topic: to get the topic condition&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: which corresponds to the return of&lt;br /&gt;
  #     get_peer_reviews_for_responses method and appended to the raw_data_array&lt;br /&gt;
  def get_peer_reviews(assignment_id_list, round_num, has_topic)&lt;br /&gt;
    raw_data_array = []&lt;br /&gt;
    ReviewResponseMap.where('reviewed_object_id in (?) and calibrate_to = ?', assignment_id_list, false).each do |response_map|&lt;br /&gt;
      reviewer = response_map.reviewer.user&lt;br /&gt;
      team = AssignmentTeam.find(response_map.reviewee_id)&lt;br /&gt;
      topic_condition = ((has_topic &amp;amp;&amp;amp; (SignedUpTeam.where(team_id: team.id).first.is_waitlisted == false)) || !has_topic)&lt;br /&gt;
      last_valid_response = response_map.response.select { |r| r.round == round_num }.max&lt;br /&gt;
      valid_response = [last_valid_response] unless last_valid_response.nil?&lt;br /&gt;
      if (topic_condition == true) &amp;amp;&amp;amp; !valid_response.nil? &amp;amp;&amp;amp; !valid_response.empty?&lt;br /&gt;
        raw_data_array += get_peer_reviews_for_responses(reviewer.id, team.id, valid_response)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
    raw_data_array&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_ids_list&lt;br /&gt;
  # This method maps each object to the corresponding object's ID&lt;br /&gt;
  # Params&lt;br /&gt;
  #   tables: any table&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   id in the tables&lt;br /&gt;
  def get_ids_list(tables)&lt;br /&gt;
    tables.map(&amp;amp;:id)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_scores&lt;br /&gt;
  # This method gets the quiz score of each participant for respective reviewee&lt;br /&gt;
  # Params&lt;br /&gt;
  #   team_ids: list of team IDs&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: which is a list of participant, reviewee and the participant's quiz score&lt;br /&gt;
  def get_scores(team_ids)&lt;br /&gt;
    quiz_questionnaires = QuizQuestionnaire.where('instructor_id in (?)', team_ids)&lt;br /&gt;
    quiz_questionnaire_ids = get_ids_list(quiz_questionnaires)&lt;br /&gt;
    raw_data_array = []&lt;br /&gt;
    QuizResponseMap.where('reviewed_object_id in (?)', quiz_questionnaire_ids).each do |response_map|&lt;br /&gt;
      quiz_score = response_map.quiz_score&lt;br /&gt;
      participant = Participant.find(response_map.reviewer_id)&lt;br /&gt;
      raw_data_array &amp;lt;&amp;lt; [participant.user_id, response_map.reviewee_id, quiz_score]&lt;br /&gt;
    end&lt;br /&gt;
    raw_data_array&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_quiz_score&lt;br /&gt;
  # This method gets the quiz score of assignments&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment IDs&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: returned by get_scores method, which is a list of participant,&lt;br /&gt;
  #     reviewee and the participant's quiz score&lt;br /&gt;
  def get_quiz_score(assignment_id_list)&lt;br /&gt;
    teams = AssignmentTeam.where('parent_id in (?)', assignment_id_list)&lt;br /&gt;
    team_ids = get_ids_list(teams)&lt;br /&gt;
    get_scores(team_ids)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_body&lt;br /&gt;
  # This method generates json body for the peer reviews and quiz scores&lt;br /&gt;
  # Params&lt;br /&gt;
  #   results: list of grades with corresponding team/participant ID,&lt;br /&gt;
  #     reviewee ID and their score&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: returns the formatted body after sorting the hash&lt;br /&gt;
  def generate_json_body(results)&lt;br /&gt;
    request_body = {}&lt;br /&gt;
    results.each_with_index do |record, _index|&lt;br /&gt;
      request_body['submission' + record[1].to_s] = {} unless request_body.key?('submission' + record[1].to_s)&lt;br /&gt;
      request_body['submission' + record[1].to_s]['stu' + record[0].to_s] = record[2]&lt;br /&gt;
    end&lt;br /&gt;
    # sort the 2-dimension hash&lt;br /&gt;
    request_body.each { |k, v| request_body[k] = v.sort.to_h }&lt;br /&gt;
    request_body.sort.to_h&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_for_peer_reviews&lt;br /&gt;
  # This method retrieves all the peer reviews associated with&lt;br /&gt;
  # the assignment id list by calling the get_peer_reviews method.&lt;br /&gt;
  # It then formats the peer-review list in JSON.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment ids to get peer reviews for&lt;br /&gt;
  #   round_num: round number of the review&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: request body populated with the formatted peer review data.&lt;br /&gt;
  def generate_json_for_peer_reviews(assignment_id_list, round_num = 2)&lt;br /&gt;
    has_topic = !SignUpTopic.where(assignment_id: assignment_id_list[0]).empty?&lt;br /&gt;
&lt;br /&gt;
    peer_reviews_list = get_peer_reviews(assignment_id_list, round_num, has_topic)&lt;br /&gt;
    request_body = generate_json_body(peer_reviews_list)&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_for_quiz_scores&lt;br /&gt;
  # This method accepts a list of assignment ids as an argument.&lt;br /&gt;
  # It then calls the get_quiz_score method on the list to get&lt;br /&gt;
  # maps of teams and scores for the given assignments.&lt;br /&gt;
  # The map is then formatted into JSON.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment ids to get quiz scores for&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: request body populated with quiz scores&lt;br /&gt;
  def generate_json_for_quiz_scores(assignment_id_list)&lt;br /&gt;
    participant_reviewee_map = get_quiz_score(assignment_id_list)&lt;br /&gt;
    request_body = generate_json_body(participant_reviewee_map)&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: client&lt;br /&gt;
  # This method is called when the url reputation_web_service/client&lt;br /&gt;
  # is hit using GET method.&lt;br /&gt;
  # This renders the client.html.erb&lt;br /&gt;
  # It also populates the instance variables to be used in the views&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def client&lt;br /&gt;
    @max_assignment_id = Assignment.last.id&lt;br /&gt;
    @assignment = Assignment.find(flash[:assignment_id]) rescue nil&lt;br /&gt;
    @another_assignment = Assignment.find(flash[:another_assignment_id]) rescue nil&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: update_participants_reputation&lt;br /&gt;
  # This method accepts the response body in the JSON format.&lt;br /&gt;
  # It then parses the JSON and updates the reputation scores of the&lt;br /&gt;
  # participants in the list.&lt;br /&gt;
  # If the algorithm is not Hamer or Lauw, the update step is skipped.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reputation_response: The response from the reputation web service&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def update_participants_reputation(reputation_response)&lt;br /&gt;
    JSON.parse(reputation_response.body.to_s).each do |reputation_algorithm, user_reputation_list|&lt;br /&gt;
      next unless %w[Hamer Lauw].include?(reputation_algorithm)&lt;br /&gt;
&lt;br /&gt;
      user_reputation_list.each do |user_id, reputation|&lt;br /&gt;
        Participant.find_by(user_id: user_id).update(reputation_algorithm.to_sym =&amp;gt; reputation) unless /leniency/ =~ user_id.to_s&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: process_response_body&lt;br /&gt;
  # This method gets the control after receiving a response from the server.&lt;br /&gt;
  # It receives the response body as an argument&lt;br /&gt;
  # It updates the instance variables related to the response.&lt;br /&gt;
  # It then calls the update_participants_reputation to update the reputation&lt;br /&gt;
  # scores received in the response body.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reputation_response: The response from the reputation web service&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def process_response_body(reputation_response)&lt;br /&gt;
    flash[:response] = reputation_response&lt;br /&gt;
    flash[:response_body] = reputation_response.body&lt;br /&gt;
    update_participants_reputation(reputation_response)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_expert_grades&lt;br /&gt;
  # It prepends the request body with the expert grades pertaining&lt;br /&gt;
  # to the default wiki contribution case of 754.&lt;br /&gt;
  # It receives the request body as an argument and prepends the grades to it&lt;br /&gt;
  # Params&lt;br /&gt;
  #   body: The request body to add the expert grades to&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   body prepended with the expert grades&lt;br /&gt;
  def add_expert_grades(body)&lt;br /&gt;
    flash[:additional_info] = 'add expert grades'&lt;br /&gt;
    case params[:assignment_id]&lt;br /&gt;
    when '754' # expert grades of Wiki contribution (754)&lt;br /&gt;
      body.prepend('&amp;quot;expert_grades&amp;quot;: {&amp;quot;submission25030&amp;quot;:95,&amp;quot;submission25031&amp;quot;:92,&amp;quot;submission25033&amp;quot;:88,&amp;quot;submission25034&amp;quot;:98,&amp;quot;submission25035&amp;quot;:100,&amp;quot;submission25037&amp;quot;:95,&amp;quot;submission25038&amp;quot;:95,&amp;quot;submission25039&amp;quot;:93,&amp;quot;submission25040&amp;quot;:96,&amp;quot;submission25041&amp;quot;:90,&amp;quot;submission25042&amp;quot;:100,&amp;quot;submission25046&amp;quot;:95,&amp;quot;submission25049&amp;quot;:90,&amp;quot;submission25050&amp;quot;:88,&amp;quot;submission25053&amp;quot;:91,&amp;quot;submission25054&amp;quot;:96,&amp;quot;submission25055&amp;quot;:94,&amp;quot;submission25059&amp;quot;:96,&amp;quot;submission25071&amp;quot;:85,&amp;quot;submission25082&amp;quot;:100,&amp;quot;submission25086&amp;quot;:95,&amp;quot;submission25097&amp;quot;:90,&amp;quot;submission25098&amp;quot;:85,&amp;quot;submission25102&amp;quot;:97,&amp;quot;submission25103&amp;quot;:94,&amp;quot;submission25105&amp;quot;:98,&amp;quot;submission25114&amp;quot;:95,&amp;quot;submission25115&amp;quot;:94},')&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_quiz_scores&lt;br /&gt;
  # This method gets the assignment id list and generates the JSON of quiz&lt;br /&gt;
  # scores for those assignments.&lt;br /&gt;
  # The processed quiz string is then prepended to the request body, received&lt;br /&gt;
  # as an argument, and the body is returned to prepare_request_body.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   body: The request body to add the quiz scores to&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   body prepended with the quiz scores&lt;br /&gt;
  def add_quiz_scores(body)&lt;br /&gt;
    flash[:additional_info] = 'add quiz scores'&lt;br /&gt;
    assignment_id_list_quiz = get_assignment_id_list(params[:assignment_id].to_i, params[:another_assignment_id].to_i)&lt;br /&gt;
    quiz_str =  generate_json_for_quiz_scores(assignment_id_list_quiz).to_json&lt;br /&gt;
    quiz_str[0] = '' # remove first {&lt;br /&gt;
    quiz_str.prepend('&amp;quot;quiz_scores&amp;quot;:{') # add quiz_scores tag&lt;br /&gt;
    quiz_str += ','&lt;br /&gt;
    quiz_str = quiz_str.gsub('&amp;quot;N/A&amp;quot;', '20.0') # replace N/A values with 20&lt;br /&gt;
    body.prepend(quiz_str)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_hamer_reputation_values&lt;br /&gt;
  # This method sets flash[:additional_info].&lt;br /&gt;
  # This method is called by the prepare_request_body method&lt;br /&gt;
  # when params receive instruction through the corresponding view's checkbox.&lt;br /&gt;
  # THIS METHOD IS NOT IMPLEMENTED&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_hamer_reputation_values&lt;br /&gt;
    flash[:additional_info] = 'add initial hamer reputation values'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_lauw_reputation_values&lt;br /&gt;
  # This method sets flash[:additional_info].&lt;br /&gt;
  # This method is called by the prepare_request_body method&lt;br /&gt;
  # when params receive instruction through the corresponding view's checkbox.&lt;br /&gt;
  # THIS METHOD IS NOT IMPLEMENTED&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_lauw_reputation_values&lt;br /&gt;
    flash[:additional_info] = 'add initial lauw reputation values'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_assignment_id_list&lt;br /&gt;
  # This method receives individual assignment IDs and returns them&lt;br /&gt;
  # collected into a list&lt;br /&gt;
  # It accepts 2 arguments, the second being optional,&lt;br /&gt;
  # and returns the list assignment_id_list&lt;br /&gt;
  # If the second argument is 0, it is not appended to the list.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_one: first assignment id (required)&lt;br /&gt;
  #   assignment_id_two: second assignment id (optional)&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   assignment_id_list: list containing the given assignment ids&lt;br /&gt;
  def get_assignment_id_list(assignment_id_one, assignment_id_two = 0)&lt;br /&gt;
    assignment_id_list = []&lt;br /&gt;
    assignment_id_list &amp;lt;&amp;lt; assignment_id_one&lt;br /&gt;
    assignment_id_list &amp;lt;&amp;lt; assignment_id_two unless assignment_id_two.zero?&lt;br /&gt;
    assignment_id_list&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_flash_messages&lt;br /&gt;
  # This method sets the flash messages to pass on to the next request, i.e.,&lt;br /&gt;
  # the request redirected to the client&lt;br /&gt;
  # Params&lt;br /&gt;
  #   post_req: This contains the entire post_req that needs to be sent to the reputation&lt;br /&gt;
  #     webservice&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_flash_messages(post_req)&lt;br /&gt;
    flash[:assignment_id] = params[:assignment_id]&lt;br /&gt;
    flash[:round_num] = params[:round_num]&lt;br /&gt;
    flash[:algorithm] = params[:algorithm]&lt;br /&gt;
    flash[:another_assignment_id] = params[:another_assignment_id]&lt;br /&gt;
    flash[:request_body] = post_req.body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_additional_info_details&lt;br /&gt;
  # This method sets the additional info details based on the options&lt;br /&gt;
  # selected in the additional information section. We populate the request&lt;br /&gt;
  # based on the selections&lt;br /&gt;
  # Params&lt;br /&gt;
  #   post_req: This contains the entire post_req that needs to be sent to the reputation&lt;br /&gt;
  #     webservice&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_additional_info_details(post_req)&lt;br /&gt;
    if params[:checkbox][:expert_grade] == 'Add expert grades'&lt;br /&gt;
      add_expert_grades(post_req.body)&lt;br /&gt;
    elsif params[:checkbox][:hamer] == 'Add initial Hamer reputation values'&lt;br /&gt;
      add_hamer_reputation_values&lt;br /&gt;
    elsif params[:checkbox][:lauw] == 'Add initial Lauw reputation values'&lt;br /&gt;
      add_lauw_reputation_values&lt;br /&gt;
    elsif params[:checkbox][:quiz] == 'Add quiz scores'&lt;br /&gt;
      add_quiz_scores(post_req.body)&lt;br /&gt;
    else&lt;br /&gt;
      flash[:additional_info] = ''&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: prepare_request_body&lt;br /&gt;
  # This method is responsible for preparing the request body in a proper format&lt;br /&gt;
  # to send to the server. It populates the assignment scores and peer review&lt;br /&gt;
  # scores. It also populates the flash messages to send to the next request&lt;br /&gt;
  # It finally sends the prepared request body back to the send_post_request&lt;br /&gt;
  # method.&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def prepare_request_body&lt;br /&gt;
    reputation_web_service_path = URI.parse(WEBSERVICE_CONFIG['reputation_web_service_url']).path&lt;br /&gt;
    post_req = Net::HTTP::Post.new(reputation_web_service_path, { 'Content-Type' =&amp;gt; 'application/json', 'charset' =&amp;gt; 'utf-8' })&lt;br /&gt;
    curr_assignment_id = (params[:assignment_id].empty? ? '754' : params[:assignment_id])&lt;br /&gt;
    assignment_id_list_peers = get_assignment_id_list(curr_assignment_id, params[:another_assignment_id].to_i)&lt;br /&gt;
&lt;br /&gt;
    post_req.body = generate_json_for_peer_reviews(assignment_id_list_peers, params[:round_num].to_i).to_json&lt;br /&gt;
&lt;br /&gt;
    post_req.body[0] = '' # remove the first '{'&lt;br /&gt;
    add_additional_info_details post_req&lt;br /&gt;
    post_req.body.prepend('{')&lt;br /&gt;
    add_flash_messages post_req&lt;br /&gt;
    post_req&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: send_post_request&lt;br /&gt;
  # This method calls the prepare_request_body function to get a prepared&lt;br /&gt;
  # request body in proper format to send to the server.&lt;br /&gt;
  # It populates the assignment scores and peer review&lt;br /&gt;
  # scores. It also populates the flash messages to send to the next request&lt;br /&gt;
  # We redirect to the client url to display the results.&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def send_post_request&lt;br /&gt;
    post_req = prepare_request_body&lt;br /&gt;
    reputation_web_service_hostname = URI.parse(WEBSERVICE_CONFIG['reputation_web_service_url']).host&lt;br /&gt;
    reputation_response = Net::HTTP.new(reputation_web_service_hostname).start { |http| http.request(post_req) }&lt;br /&gt;
    if %w[400 500].include?(reputation_response.code)&lt;br /&gt;
      flash[:error] = 'Post Request Failed'&lt;br /&gt;
    else&lt;br /&gt;
      process_response_body(reputation_response)&lt;br /&gt;
    end&lt;br /&gt;
    redirect_to action: 'client'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
# Method: calculate_reputation_score&lt;br /&gt;
# This method calculates the reputation scores for each reviewer based on the provided input data.&lt;br /&gt;
# It first parses the input JSON string to extract the submissions and their corresponding scores.&lt;br /&gt;
# Then, it calculates the average weighted grades per reviewer and the delta R values.&lt;br /&gt;
# Next, it calculates the weight prime values based on the delta R values.&lt;br /&gt;
# Finally, it calculates the reputation weights for each reviewer using the weight prime values.&lt;br /&gt;
#&lt;br /&gt;
# Params&lt;br /&gt;
#   - input_json: a JSON string representing the input data with submission scores&lt;br /&gt;
#&lt;br /&gt;
# Returns&lt;br /&gt;
#   An array of reputation scores, one score per reviewer, indicating their reputation in the system.&lt;br /&gt;
def calculate_reputation_score(reviews)&lt;br /&gt;
  # Parse the input JSON string&lt;br /&gt;
  # reviews = JSON.parse(input_json)&lt;br /&gt;
&lt;br /&gt;
  # Initialize arrays to store intermediate values&lt;br /&gt;
  grades = []&lt;br /&gt;
  delta_r = []&lt;br /&gt;
  weight_prime = []&lt;br /&gt;
  weight = []&lt;br /&gt;
&lt;br /&gt;
  # Calculate the consensus grade for each submission by averaging the&lt;br /&gt;
  # marks it received from all reviewers (nil marks are skipped)&lt;br /&gt;
  num_submissions = reviews.map(&amp;amp;:length).max&lt;br /&gt;
  (0...num_submissions).each do |student_index|&lt;br /&gt;
    submission_marks = reviews.map { |reviewer_marks| reviewer_marks[student_index] }.compact&lt;br /&gt;
    grades &amp;lt;&amp;lt; submission_marks.sum.to_f / submission_marks.length&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate delta R: each reviewer's mean squared distance from the&lt;br /&gt;
  # consensus grades, skipping nil marks but keeping indices aligned&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    reviewer_delta_r = 0&lt;br /&gt;
    graded_count = 0&lt;br /&gt;
    reviewer_marks.each_with_index do |grade, student_index|&lt;br /&gt;
      next if grade.nil?&lt;br /&gt;
&lt;br /&gt;
      reviewer_delta_r += (grade - grades[student_index]) ** 2&lt;br /&gt;
      graded_count += 1&lt;br /&gt;
    end&lt;br /&gt;
    delta_r &amp;lt;&amp;lt; reviewer_delta_r / graded_count&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate weight prime&lt;br /&gt;
  average_delta_r = delta_r.sum / delta_r.length.to_f&lt;br /&gt;
&lt;br /&gt;
  delta_r.each do |reviewer_delta_r|&lt;br /&gt;
    weight_prime &amp;lt;&amp;lt; average_delta_r / reviewer_delta_r&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate reputation weight&lt;br /&gt;
  weight_prime.each do |reviewer_weight_prime|&lt;br /&gt;
    if reviewer_weight_prime &amp;lt;= 2&lt;br /&gt;
      weight &amp;lt;&amp;lt; reviewer_weight_prime.round(2)&lt;br /&gt;
    else&lt;br /&gt;
      weight &amp;lt;&amp;lt; (2 + Math.log(reviewer_weight_prime - 1)).round(2)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Return the reputation weights&lt;br /&gt;
  weight&lt;br /&gt;
end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
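For illustration, the body shaping performed by generate_json_body can be reproduced standalone. This is a hedged sketch with hypothetical IDs and scores; each input record is [reviewer_id, reviewee_id, score], and the output groups scores by submission, sorted within and across submissions:&lt;br /&gt;

```ruby
# Standalone sketch of the JSON-body shaping done by generate_json_body,
# fed with hypothetical [reviewer_id, reviewee_id, score] records.
def generate_json_body(results)
  request_body = {}
  results.each do |record|
    submission_key = "submission#{record[1]}"
    request_body[submission_key] ||= {}
    request_body[submission_key]["stu#{record[0]}"] = record[2]
  end
  # Sort reviewers inside each submission, then the submissions themselves.
  request_body.each { |k, v| request_body[k] = v.sort.to_h }
  request_body.sort.to_h
end

records = [[7, 101, 95.5], [3, 101, 88.0], [7, 102, 76.25]]
body = generate_json_body(records)
# body => {"submission101"=>{"stu3"=>88.0, "stu7"=>95.5}, "submission102"=>{"stu7"=>76.25}}
```

This nested "submissionX" / "stuY" shape is what the reputation web service expects as its request payload.&lt;br /&gt;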
&lt;br /&gt;
== Objective 4: Validate the accuracy of the newly implemented Hamer algorithm ==&lt;br /&gt;
&lt;br /&gt;
We test the newly implemented Hamer algorithm with our scenarios and verify that the returned weights match the expected values.&lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe ReputationWebServiceController do&lt;br /&gt;
    it &amp;quot;should calculate correct Hamer calculation&amp;quot; do&lt;br /&gt;
      weights = ReputationWebServiceController.new.calculate_reputation_score(reviews)&lt;br /&gt;
      keys = [&amp;quot;maxtoall&amp;quot;, &amp;quot;mintoall&amp;quot;, &amp;quot;mediantoall&amp;quot;, &amp;quot;incomplete_review&amp;quot;, &amp;quot;sametoall&amp;quot;, &amp;quot;passing1&amp;quot;, &amp;quot;passing2&amp;quot;, &amp;quot;passing3&amp;quot;]&lt;br /&gt;
      rounded_weights = weights.map { |w| w.round(1) }&lt;br /&gt;
      result_hash = keys.zip(rounded_weights).to_h&lt;br /&gt;
      expect(result_hash).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Results ===&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== GitHub Links ==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [???/]&lt;br /&gt;
&lt;br /&gt;
Link to GitHub Project page: [https://github.com/users/Prachit99/projects/1 here]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [????]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154632</id>
		<title>CSC/ECE 517 Spring 2024 - E2412. Testing for hamer.rb</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2024_-_E2412._Testing_for_hamer.rb&amp;diff=154632"/>
		<updated>2024-03-25T03:53:38Z</updated>

		<summary type="html">&lt;p&gt;Npatil2: /* Objective 2: Verify the correctness of the reputation web server's Hamer values */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the changes made for the Spring 2024 Program 3: First OSS project E2412. Testing for hamer.rb&lt;br /&gt;
&lt;br /&gt;
== Project Overview ==&lt;br /&gt;
&lt;br /&gt;
=== Problem Statement ===&lt;br /&gt;
&lt;br /&gt;
The practice of using student feedback on assignments as a grading tool is gaining traction among university professors and courses. This approach not only saves instructors and teaching assistants considerable time but also fosters a deeper understanding of assignments among students as they evaluate their peers' work. However, there is a concern that some students may not take their reviewing responsibilities seriously, potentially skewing the grading process by assigning extreme scores such as 100 or 0 arbitrarily. To address this issue, the Hamer algorithm was developed to assess the credibility and accuracy of reviewers. It generates reputation weights for each reviewer, which instructors can use to gauge their reliability or incorporate into grading calculations. Our goal here is to test this Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
&lt;br /&gt;
* Develop code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values.&lt;br /&gt;
* Verify the correctness of the reputation web server's Hamer values by accessing the URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.&lt;br /&gt;
* Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values.&lt;br /&gt;
* Validate the accuracy of the newly implemented Hamer algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Files Involved ===&lt;br /&gt;
&lt;br /&gt;
* reimplemented algorithm: /app/controllers/reputation_web_service_controller.rb&lt;br /&gt;
* test file: /spec/controllers/reputation_mock_web_server_hamer.rb&lt;br /&gt;
&lt;br /&gt;
=== Mentor ===&lt;br /&gt;
&lt;br /&gt;
* Muhammet Mustafa Olmez (molmez@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
=== Team Members ===&lt;br /&gt;
&lt;br /&gt;
* Neha Vijay Patil (npatil2@ncsu.edu)&lt;br /&gt;
* Prachit Mhalgi (psmhalgi@ncsu.edu)&lt;br /&gt;
* Sahil Santosh Sawant (ssawant2@ncsu.edu)&lt;br /&gt;
&lt;br /&gt;
== Hamer Algorithm ==&lt;br /&gt;
&lt;br /&gt;
The grading algorithm described in the paper is designed to provide a reward to reviewers who participate effectively by allocating a portion of the assignment mark to the review, with the review mark reflecting the quality of the grading. Here's an explanation of the algorithm:&lt;br /&gt;
&lt;br /&gt;
1. Review Allocation: Each reviewer is assigned a number of essays to grade. The paper suggests assigning at least five essays, with ten being ideal. Assuming each review takes 20 minutes, ten reviews can be completed in about three and a half hours.&lt;br /&gt;
&lt;br /&gt;
2. Grading Process:&lt;br /&gt;
* Once the reviewing is complete, grades are generated for each essay and weights are assigned to each reviewer.&lt;br /&gt;
* The essay grades are computed by averaging the individual grades from all the reviewers assigned to that essay.&lt;br /&gt;
* Initially, all reviewers are given equal weight in the averaging process.&lt;br /&gt;
* The algorithm assumes that some reviewers will perform better than others. It measures this by comparing the grades assigned by each reviewer to the averaged grades. The larger the difference between the assigned and averaged grades, the more out of step the reviewer is considered with the consensus view of the class.&lt;br /&gt;
* The algorithm adjusts the weighting of the reviewers based on this difference. Reviewers who are closer to the consensus view are given higher weights, while those who deviate significantly are given lower weights.&lt;br /&gt;
&lt;br /&gt;
3. Iterative Process:&lt;br /&gt;
* The calculation of grades and weights is an iterative process. Each time the grades are calculated, the weights need to be updated, and each change in the weights affects the grades.&lt;br /&gt;
* Convergence occurs quickly, typically requiring four to six iterations before a solution (a &amp;quot;fix-point&amp;quot;) is reached.&lt;br /&gt;
&lt;br /&gt;
4. Weight Adjustment:&lt;br /&gt;
* The weights assigned to reviewers are adjusted based on the difference between the assigned and averaged grades: reviewers with larger discrepancies have their weights reduced in inverse proportion to this difference.&lt;br /&gt;
* To prevent excessively large weights, a logarithmic dampening function is applied: weights may rise to twice the class average, after which further increases are granted only sparingly.&lt;br /&gt;
&lt;br /&gt;
5. Properties:&lt;br /&gt;
* The algorithm aims to identify and diminish the impact of &amp;quot;rogue&amp;quot; reviewers who may inject random or arbitrary grades into the peer assessment process.&lt;br /&gt;
* By adjusting reviewer weights based on their grading accuracy, the algorithm aims to improve the reliability of the grading process in the presence of such rogue reviewers.&lt;br /&gt;
&lt;br /&gt;
Overall, the algorithm seeks to balance the contributions of different reviewers based on the accuracy of their grading, ultimately aiming to produce reliable grades for each essay in a peer assessment scenario.&lt;br /&gt;
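The iterative grade-and-weight computation described above can be sketched in Ruby. This is an illustrative simplification, not the Expertiza implementation: the method name hamer_fixpoint, the fixed iteration count, and the zero-guard are our own choices, and the logarithmic dampening step is omitted.&lt;br /&gt;

```ruby
# Sketch of the Hamer-style fix-point iteration (illustrative only).
# marks[r][e] is reviewer r's mark for essay e.
def hamer_fixpoint(marks, iterations: 6)
  num_reviewers = marks.length
  num_essays = marks.first.length
  weights = Array.new(num_reviewers, 1.0)
  grades = nil

  iterations.times do
    # Step 1: weighted average grade per essay
    total_weight = weights.sum
    grades = (0...num_essays).map do |e|
      marks.each_with_index.sum { |row, r| weights[r] * row[e] } / total_weight
    end

    # Step 2: mean squared deviation of each reviewer from the consensus grades
    deviations = marks.map do |row|
      row.each_with_index.sum { |m, e| (m - grades[e])**2 } / num_essays.to_f
    end

    # Step 3: weight inversely proportional to deviation (guarded against zero)
    avg_dev = deviations.sum / num_reviewers
    weights = deviations.map { |d| avg_dev / [d, 1e-9].max }
  end

  [grades, weights]
end
```

Running this on two agreeing reviewers and one rogue reviewer shows the rogue's weight shrinking while the consensus grades converge, which is the behavior the paper describes.&lt;br /&gt;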
&lt;br /&gt;
== Hamer value calculation ==&lt;br /&gt;
[[File:Step1.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step2.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step3.PNG|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== Objective 1: Develop code testing scenarios ==&lt;br /&gt;
We defined 10 reviewers scoring up to 4 submissions each to cover the following test scenarios:&lt;br /&gt;
* 3 cases where reviewers give credible scores (passing1, passing2, passing3)&lt;br /&gt;
* a case where a reviewer gives the maximum score (10) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the minimum score (1) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the median score (5) to all submissions (should be flagged)&lt;br /&gt;
* a case where a reviewer gives the same score (3) to all submissions (should be flagged)&lt;br /&gt;
* cases where reviewers submit incomplete reviews (incomplete_review, max_incomplete, min_incomplete)&lt;br /&gt;
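The flagged scenarios above share one detectable property: the reviewer assigns the same score to every submission they review. A hypothetical check for that property (uniform_scorer? is our own illustrative name, not part of Expertiza) could look like this:&lt;br /&gt;

```ruby
# Illustrative check: true when a reviewer gave one identical score to every
# submission they actually reviewed (nil marks an unreviewed submission).
# A single review is not treated as uniform, since one data point proves nothing.
def uniform_scorer?(scores)
  valid = scores.compact
  valid.length > 1 && valid.uniq.length == 1
end
```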
&lt;br /&gt;
=== Object Creation ===&lt;br /&gt;
Below is the Input object for tests that cover all the above scenarios:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
INPUTS = {&lt;br /&gt;
    &amp;quot;submission1&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
        &amp;quot;incomplete_review&amp;quot;: 4,&lt;br /&gt;
        &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;: 3,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 10,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 10,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 9&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;submission2&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
        &amp;quot;incomplete_review&amp;quot;: 2,&lt;br /&gt;
        &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
        &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;: 3,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 3,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 2,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 4&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;submission3&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;: 3,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 7,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;submission4&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 10,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 1,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 5,&lt;br /&gt;
        &amp;quot;max_incomplete&amp;quot;: 10,&lt;br /&gt;
        &amp;quot;min_incomplete&amp;quot;: 1,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;: 3,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 6,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 4,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 5&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Expected Hamer Values ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
EXPECTED = {&lt;br /&gt;
    &amp;quot;Hamer&amp;quot;: {&lt;br /&gt;
        &amp;quot;maxtoall&amp;quot;: 2.65,&lt;br /&gt;
        &amp;quot;mintoall&amp;quot;: 2.41,&lt;br /&gt;
        &amp;quot;mediantoall&amp;quot;: 1.03,&lt;br /&gt;
        &amp;quot;incomplete_review&amp;quot;: 2.31,&lt;br /&gt;
        &amp;quot;max_incomplete&amp;quot;: 2.57,&lt;br /&gt;
        &amp;quot;min_incomplete&amp;quot;: 2.48,&lt;br /&gt;
        &amp;quot;sametoall&amp;quot;: 1.58,&lt;br /&gt;
        &amp;quot;passing1&amp;quot;: 2.17,&lt;br /&gt;
        &amp;quot;passing2&amp;quot;: 1.73,&lt;br /&gt;
        &amp;quot;passing3&amp;quot;: 1.23&lt;br /&gt;
    }&lt;br /&gt;
}.to_json&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Objective 2: Verify the correctness of the reputation web server's Hamer values ==&lt;br /&gt;
&lt;br /&gt;
We test the original reputation web server's algorithm with our scenarios and verify whether the returned values match the expected ones. The PeerLogic server can be accessed via API calls to the URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.&lt;br /&gt;
It supports two algorithms: the Hamer-peer algorithm and the Lauw-peer algorithm. The scope of this project is to test the Hamer values, since previous work has established that the Hamer algorithm suits our use case better.&lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe &amp;quot;Expertiza&amp;quot; do&lt;br /&gt;
    it &amp;quot;should return the correct Hamer calculation&amp;quot; do&lt;br /&gt;
        uri = URI('http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms')&lt;br /&gt;
    &lt;br /&gt;
        response = Net::HTTP.post(uri, INPUTS, 'Content-Type' =&amp;gt; 'application/json')&lt;br /&gt;
    &lt;br /&gt;
        expect(JSON.parse(response.body)[&amp;quot;Hamer&amp;quot;]).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
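One caveat about the assertion above: eq requires exact floating-point equality, which is brittle if the service rounds its reputation values differently than our expected values. A tolerant comparison could look like the following sketch (hamer_values_match? and its default tolerance are our own assumptions, not project code):&lt;br /&gt;

```ruby
require 'json'

# Compare two {"Hamer" => {...}} JSON documents reviewer by reviewer,
# allowing a small absolute tolerance on each reputation value.
def hamer_values_match?(expected_json, actual_json, tolerance: 0.005)
  expected = JSON.parse(expected_json)['Hamer']
  actual = JSON.parse(actual_json)['Hamer']
  # Both documents must score exactly the same set of reviewers.
  return false unless expected.keys.sort == actual.keys.sort

  expected.all? { |reviewer, value| (actual[reviewer] - value).abs <= tolerance }
end
```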
&lt;br /&gt;
=== Results ===&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
&lt;br /&gt;
We can see that the values returned by this web service do not match the expected values. Hence, we conclude that the web service is implemented incorrectly.&lt;br /&gt;
&lt;br /&gt;
== Objective 3: Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values. ==&lt;br /&gt;
&lt;br /&gt;
As seen above, the values returned by the reputation server do not match the expected values, so we concluded that the PeerLogic web service is implemented incorrectly. In this phase, we reimplemented the algorithm in Ruby as a function in the controller file /app/controllers/reputation_web_service_controller.rb.&lt;br /&gt;
&lt;br /&gt;
=== Changes made in implementation ===&lt;br /&gt;
* Converted the implementation to Ruby&lt;br /&gt;
* Handled nil cases, so the implementation now accepts incomplete reviews (nil scores are skipped)&lt;br /&gt;
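The nil handling mirrors what the reimplementation does with Array#compact: marks for submissions a reviewer never graded are stored as nil and dropped before averaging. A minimal sketch (average_without_nil is an illustrative name, not a method in the controller):&lt;br /&gt;

```ruby
# Drop nil marks (unreviewed submissions) before averaging, as the
# reimplemented controller does with reviewer_marks.compact.
def average_without_nil(reviewer_marks)
  valid = reviewer_marks.compact
  return nil if valid.empty?

  valid.sum.to_f / valid.length
end
```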
&lt;br /&gt;
=== Code Snippet ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'json'&lt;br /&gt;
require 'uri'&lt;br /&gt;
require 'net/http'&lt;br /&gt;
require 'openssl'&lt;br /&gt;
require 'base64'&lt;br /&gt;
&lt;br /&gt;
# Expertiza allows student work to be peer-reviewed, since peers can provide&lt;br /&gt;
# more feedback than the instructor can.&lt;br /&gt;
# However, if we want to assure that all students receive competent feedback,&lt;br /&gt;
# or even use peer-assigned grades,&lt;br /&gt;
# we need a way to judge which peer reviewers are most credible. The solution&lt;br /&gt;
# is the reputation system.&lt;br /&gt;
# Reputation systems have been deployed as web services, peer-review&lt;br /&gt;
# researchers will be able to use them to calculate scores on assignments,&lt;br /&gt;
# both past and present (past data can be used to tune the algorithms).&lt;br /&gt;
#&lt;br /&gt;
# This file is the controller to calculate the reputation scores.&lt;br /&gt;
# A 'reputation' measures how close a reviewer's scores are to other reviewers'&lt;br /&gt;
# scores.&lt;br /&gt;
# This controller implements the calculation of reputation scores.&lt;br /&gt;
class ReputationWebServiceController &amp;lt; ApplicationController&lt;br /&gt;
  include AuthorizationHelper&lt;br /&gt;
&lt;br /&gt;
  # Method: action_allowed&lt;br /&gt;
  # This method checks if the currently authenticated user has the authorization&lt;br /&gt;
  # to perform certain actions&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   true if the user has privileges to perform the action else returns false&lt;br /&gt;
  def action_allowed?&lt;br /&gt;
    current_user_has_ta_privileges?&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_max_question_score&lt;br /&gt;
  # This method receives a set of answers and gets the maximum question score&lt;br /&gt;
  # Params&lt;br /&gt;
  #   answers: set of answers&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   if no error returns max_question_score of first question else 1&lt;br /&gt;
  def get_max_question_score(answers)&lt;br /&gt;
    begin&lt;br /&gt;
      answers.first.question.questionnaire.max_question_score&lt;br /&gt;
    rescue StandardError&lt;br /&gt;
      1&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_valid_answers_for_response&lt;br /&gt;
  # This method receives response and filters the valid answers list of the&lt;br /&gt;
  # response ID&lt;br /&gt;
  # Params&lt;br /&gt;
  #   response&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   set of valid answers (returns nil if empty)&lt;br /&gt;
  def get_valid_answers_for_response(response)&lt;br /&gt;
    answers = Answer.where(response_id: response.id)&lt;br /&gt;
    valid_answer = answers.select { |answer| (answer.question.type == 'Criterion') &amp;amp;&amp;amp; !answer.answer.nil? }&lt;br /&gt;
    valid_answer.empty? ? nil : valid_answer&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: calculate_peer_review_grade&lt;br /&gt;
  # This method calculates a cumulative review grade with respect to the set of valid answers&lt;br /&gt;
  # Params&lt;br /&gt;
  #   valid_answer: valid answer to get weight of the answer's question&lt;br /&gt;
  #   max_question_score: used to calculate maximum score for peer review grade&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   peer_review_grade&lt;br /&gt;
  def calculate_peer_review_grade(valid_answer, max_question_score)&lt;br /&gt;
    weighted_score_sum = valid_answer.map { |answer| answer.answer * answer.question.weight }.inject(:+)&lt;br /&gt;
    question_weight_sum = valid_answer.sum { |answer| answer.question.weight }&lt;br /&gt;
    peer_review_grade = 100.0 * weighted_score_sum / (question_weight_sum * max_question_score)&lt;br /&gt;
    peer_review_grade.round(4)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_peer_reviews_for_responses&lt;br /&gt;
  # This method calculates the peer review grade for each valid response&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reviewer_id: used to create respective element in the peer_review_grades_list&lt;br /&gt;
  #   team_id: used to create respective element in the peer_review_grades_list&lt;br /&gt;
  #   valid_response: to get the valid answer for each valid response&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   peer_review_grades_list&lt;br /&gt;
  def get_peer_reviews_for_responses(reviewer_id, team_id, valid_response)&lt;br /&gt;
    peer_review_grades_list = []&lt;br /&gt;
    valid_response.each do |response|&lt;br /&gt;
      valid_answer = get_valid_answers_for_response(response)&lt;br /&gt;
      next if valid_answer.nil?&lt;br /&gt;
&lt;br /&gt;
      review_grade = calculate_peer_review_grade(valid_answer, get_max_question_score(valid_answer))&lt;br /&gt;
      peer_review_grades_list &amp;lt;&amp;lt; [reviewer_id, team_id, review_grade]&lt;br /&gt;
    end&lt;br /&gt;
    peer_review_grades_list&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_peer_reviews&lt;br /&gt;
  # This method retrieves all the reviews for the submissions&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: used to retrieve response map&lt;br /&gt;
  #   round_num: used to retrieve round_num for the valid response&lt;br /&gt;
  #   has_topic: to get the topic condition&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: which corresponds to the return of&lt;br /&gt;
  #     get_peer_reviews_for_responses method and appended to the raw_data_array&lt;br /&gt;
  def get_peer_reviews(assignment_id_list, round_num, has_topic)&lt;br /&gt;
    raw_data_array = []&lt;br /&gt;
    ReviewResponseMap.where('reviewed_object_id in (?) and calibrate_to = ?', assignment_id_list, false).each do |response_map|&lt;br /&gt;
      reviewer = response_map.reviewer.user&lt;br /&gt;
      team = AssignmentTeam.find(response_map.reviewee_id)&lt;br /&gt;
      topic_condition = ((has_topic &amp;amp;&amp;amp; (SignedUpTeam.where(team_id: team.id).first.is_waitlisted == false)) || !has_topic)&lt;br /&gt;
      last_valid_response = response_map.response.select { |r| r.round == round_num }.max&lt;br /&gt;
      valid_response = [last_valid_response] unless last_valid_response.nil?&lt;br /&gt;
      if (topic_condition == true) &amp;amp;&amp;amp; !valid_response.nil? &amp;amp;&amp;amp; !valid_response.empty?&lt;br /&gt;
        raw_data_array += get_peer_reviews_for_responses(reviewer.id, team.id, valid_response)&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
    raw_data_array&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_ids_list&lt;br /&gt;
  # This method maps each object to the corresponding object's ID&lt;br /&gt;
  # Params&lt;br /&gt;
  #   tables: any table&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   id in the tables&lt;br /&gt;
  def get_ids_list(tables)&lt;br /&gt;
    tables.map(&amp;amp;:id)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_scores&lt;br /&gt;
  # This method gets the quiz score of each participant for respective reviewee&lt;br /&gt;
  # Params&lt;br /&gt;
  #   team_ids: list of team IDs&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: which is a list of participant, reviewee and the participant's quiz score&lt;br /&gt;
  def get_scores(team_ids)&lt;br /&gt;
    quiz_questionnaires = QuizQuestionnaire.where('instructor_id in (?)', team_ids)&lt;br /&gt;
    quiz_questionnaire_ids = get_ids_list(quiz_questionnaires)&lt;br /&gt;
    raw_data_array = []&lt;br /&gt;
    QuizResponseMap.where('reviewed_object_id in (?)', quiz_questionnaire_ids).each do |response_map|&lt;br /&gt;
      quiz_score = response_map.quiz_score&lt;br /&gt;
      participant = Participant.find(response_map.reviewer_id)&lt;br /&gt;
      raw_data_array &amp;lt;&amp;lt; [participant.user_id, response_map.reviewee_id, quiz_score]&lt;br /&gt;
    end&lt;br /&gt;
    raw_data_array&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_quiz_score&lt;br /&gt;
  # This method gets the quiz score of assignments&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment IDs&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   raw_data_array: returned by get_scores method, which is a list of participant,&lt;br /&gt;
  #     reviewee and the participant's quiz score&lt;br /&gt;
  def get_quiz_score(assignment_id_list)&lt;br /&gt;
    teams = AssignmentTeam.where('parent_id in (?)', assignment_id_list)&lt;br /&gt;
    team_ids = get_ids_list(teams)&lt;br /&gt;
    get_scores(team_ids)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_body&lt;br /&gt;
  # This method generates json body for the peer reviews and quiz scores&lt;br /&gt;
  # Params&lt;br /&gt;
  #   results: list of grades with corresponding team/participant ID,&lt;br /&gt;
  #     reviewee ID and their score&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: returns the formatted body after sorting the hash&lt;br /&gt;
  def generate_json_body(results)&lt;br /&gt;
    request_body = {}&lt;br /&gt;
    results.each_with_index do |record, _index|&lt;br /&gt;
      request_body['submission' + record[1].to_s] = {} unless request_body.key?('submission' + record[1].to_s)&lt;br /&gt;
      request_body['submission' + record[1].to_s]['stu' + record[0].to_s] = record[2]&lt;br /&gt;
    end&lt;br /&gt;
    # sort the 2-dimensional hash by submission key and, within each submission, by reviewer key&lt;br /&gt;
    request_body.each { |k, v| request_body[k] = v.sort.to_h }&lt;br /&gt;
    request_body.sort.to_h&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_for_peer_reviews&lt;br /&gt;
  # This method retrieves all the peer reviews associated with&lt;br /&gt;
  # the assignment id list by calling the get_peer_reviews method.&lt;br /&gt;
  # It then formats the peer-review list in JSON.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment ids to get quiz scores for&lt;br /&gt;
  #   round_num: round number of the review&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: request body populated with the formatted peer review data.&lt;br /&gt;
  def generate_json_for_peer_reviews(assignment_id_list, round_num = 2)&lt;br /&gt;
    has_topic = !SignUpTopic.where(assignment_id: assignment_id_list[0]).empty?&lt;br /&gt;
&lt;br /&gt;
    peer_reviews_list = get_peer_reviews(assignment_id_list, round_num, has_topic)&lt;br /&gt;
    request_body = generate_json_body(peer_reviews_list)&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: generate_json_for_quiz_scores&lt;br /&gt;
  # This method accepts a list of assignment ids as an argument.&lt;br /&gt;
  # It then calls the get_quiz_score method on the list to get&lt;br /&gt;
  # maps of teams and scores for the given assignments.&lt;br /&gt;
  # The map is then formatted into JSON.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_list: list of assignment ids to get quiz scores for&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   request_body: request body populated with quiz scores&lt;br /&gt;
  def generate_json_for_quiz_scores(assignment_id_list)&lt;br /&gt;
    participant_reviewee_map = get_quiz_score(assignment_id_list)&lt;br /&gt;
    request_body = generate_json_body(participant_reviewee_map)&lt;br /&gt;
    request_body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: client&lt;br /&gt;
  # This method is called when the url reputation_web_service/client&lt;br /&gt;
  # is hit using GET method.&lt;br /&gt;
  # This renders the client.html.erb&lt;br /&gt;
  # It also populates the instance variables to be used in the views&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def client&lt;br /&gt;
    @max_assignment_id = Assignment.last.id&lt;br /&gt;
    @assignment = Assignment.find(flash[:assignment_id]) rescue nil&lt;br /&gt;
    @another_assignment = Assignment.find(flash[:another_assignment_id]) rescue nil&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: update_participants_reputation&lt;br /&gt;
  # This method accepts the response body in the JSON format.&lt;br /&gt;
  # It then parses the JSON and updates the reputation scores of the&lt;br /&gt;
  # participants in the list.&lt;br /&gt;
  # If the algorithm is neither Hamer nor Lauw, the update step is skipped.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reputation_response: The response from the reputation web service&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def update_participants_reputation(reputation_response)&lt;br /&gt;
    JSON.parse(reputation_response.body.to_s).each do |reputation_algorithm, user_reputation_list|&lt;br /&gt;
      next unless %w[Hamer Lauw].include?(reputation_algorithm)&lt;br /&gt;
&lt;br /&gt;
      user_reputation_list.each do |user_id, reputation|&lt;br /&gt;
        Participant.find_by(user_id: user_id).update(reputation_algorithm.to_sym =&amp;gt; reputation) unless /leniency/ =~ user_id.to_s&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: process_response_body&lt;br /&gt;
  # This method gets the control after receiving a response from the server.&lt;br /&gt;
  # It receives the response body as an argument&lt;br /&gt;
  # It updates the instance variables related to the response.&lt;br /&gt;
  # It then calls the update_participants_reputation to update the reputation&lt;br /&gt;
  # scores received in the response body.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   reputation_response: The response from the reputation web service&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def process_response_body(reputation_response)&lt;br /&gt;
    flash[:response] = reputation_response&lt;br /&gt;
    flash[:response_body] = reputation_response.body&lt;br /&gt;
    update_participants_reputation(reputation_response)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_expert_grades&lt;br /&gt;
  # It prepends the request body with the expert grades pertaining&lt;br /&gt;
  # to the default wiki contribution case of 754.&lt;br /&gt;
  # It receives the request body as an argument and prepends it&lt;br /&gt;
  # Params&lt;br /&gt;
  #   body: The request body to add the expert grades to&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   body prepended with the expert grades&lt;br /&gt;
  def add_expert_grades(body)&lt;br /&gt;
    flash[:additional_info] = 'add expert grades'&lt;br /&gt;
    case params[:assignment_id]&lt;br /&gt;
    when '754' # expert grades of Wiki contribution (754)&lt;br /&gt;
      body.prepend('&amp;quot;expert_grades&amp;quot;: {&amp;quot;submission25030&amp;quot;:95,&amp;quot;submission25031&amp;quot;:92,&amp;quot;submission25033&amp;quot;:88,&amp;quot;submission25034&amp;quot;:98,&amp;quot;submission25035&amp;quot;:100,&amp;quot;submission25037&amp;quot;:95,&amp;quot;submission25038&amp;quot;:95,&amp;quot;submission25039&amp;quot;:93,&amp;quot;submission25040&amp;quot;:96,&amp;quot;submission25041&amp;quot;:90,&amp;quot;submission25042&amp;quot;:100,&amp;quot;submission25046&amp;quot;:95,&amp;quot;submission25049&amp;quot;:90,&amp;quot;submission25050&amp;quot;:88,&amp;quot;submission25053&amp;quot;:91,&amp;quot;submission25054&amp;quot;:96,&amp;quot;submission25055&amp;quot;:94,&amp;quot;submission25059&amp;quot;:96,&amp;quot;submission25071&amp;quot;:85,&amp;quot;submission25082&amp;quot;:100,&amp;quot;submission25086&amp;quot;:95,&amp;quot;submission25097&amp;quot;:90,&amp;quot;submission25098&amp;quot;:85,&amp;quot;submission25102&amp;quot;:97,&amp;quot;submission25103&amp;quot;:94,&amp;quot;submission25105&amp;quot;:98,&amp;quot;submission25114&amp;quot;:95,&amp;quot;submission25115&amp;quot;:94},')&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_quiz_scores&lt;br /&gt;
  # This method gets the assignment id list and generates the JSON for the quiz&lt;br /&gt;
  # scores of those assignments.&lt;br /&gt;
  # The processed quiz string is then prepended to the request body, which was&lt;br /&gt;
  # received as an argument, and the body is returned to the caller.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   body: The request body to add the quiz scores to&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   body prepended with the quiz scores&lt;br /&gt;
  def add_quiz_scores(body)&lt;br /&gt;
    flash[:additional_info] = 'add quiz scores'&lt;br /&gt;
    assignment_id_list_quiz = get_assignment_id_list(params[:assignment_id].to_i, params[:another_assignment_id].to_i)&lt;br /&gt;
    quiz_str =  generate_json_for_quiz_scores(assignment_id_list_quiz).to_json&lt;br /&gt;
    quiz_str[0] = '' # remove first {&lt;br /&gt;
    quiz_str.prepend('&amp;quot;quiz_scores&amp;quot;:{') # add quiz_scores tag&lt;br /&gt;
    quiz_str += ','&lt;br /&gt;
    quiz_str = quiz_str.gsub('&amp;quot;N/A&amp;quot;', '20.0') # replace N/A values with 20&lt;br /&gt;
    body.prepend(quiz_str)&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_hamer_reputation_values&lt;br /&gt;
  # This method sets flash[:additional_info].&lt;br /&gt;
  # This method is called by the add_additional_info_details method&lt;br /&gt;
  # when the corresponding checkbox is selected in the view.&lt;br /&gt;
  # THIS METHOD IS NOT IMPLEMENTED&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_hamer_reputation_values&lt;br /&gt;
    flash[:additional_info] = 'add initial hamer reputation values'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_lauw_reputation_values&lt;br /&gt;
  # This method sets flash[:additional_info].&lt;br /&gt;
  # This method is called by the add_additional_info_details method&lt;br /&gt;
  # when the corresponding checkbox is selected in the view.&lt;br /&gt;
  # THIS METHOD IS NOT IMPLEMENTED&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_lauw_reputation_values&lt;br /&gt;
    flash[:additional_info] = 'add initial lauw reputation values'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: get_assignment_id_list&lt;br /&gt;
  # This method on receipt of individual assignment IDs returns a list with all&lt;br /&gt;
  # the assignment IDs appended into a data structure&lt;br /&gt;
  # This function accepts 2 arguments, with the second argument being optional,&lt;br /&gt;
  # and returns the list assignment_id_list&lt;br /&gt;
  # If the second argument is 0, it is not appended to the list.&lt;br /&gt;
  # Params&lt;br /&gt;
  #   assignment_id_one: first assignment id (required)&lt;br /&gt;
  #   assignment_id_two: second assignment id (optional)&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   assignment_id_list: list containing two assignment ids&lt;br /&gt;
  def get_assignment_id_list(assignment_id_one, assignment_id_two = 0)&lt;br /&gt;
    assignment_id_list = []&lt;br /&gt;
    assignment_id_list &amp;lt;&amp;lt; assignment_id_one&lt;br /&gt;
    assignment_id_list &amp;lt;&amp;lt; assignment_id_two unless assignment_id_two.zero?&lt;br /&gt;
    assignment_id_list&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_flash_messages&lt;br /&gt;
  # This method sets the flash messages to pass on to the next request, i.e.,&lt;br /&gt;
  # the request redirected to the client&lt;br /&gt;
  # Params&lt;br /&gt;
  #   post_req: This contains the entire post_req that needs to be sent to the reputation&lt;br /&gt;
  #     webservice&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_flash_messages(post_req)&lt;br /&gt;
    flash[:assignment_id] = params[:assignment_id]&lt;br /&gt;
    flash[:round_num] = params[:round_num]&lt;br /&gt;
    flash[:algorithm] = params[:algorithm]&lt;br /&gt;
    flash[:another_assignment_id] = params[:another_assignment_id]&lt;br /&gt;
    flash[:request_body] = post_req.body&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: add_additional_info_details&lt;br /&gt;
  # This method sets the additional info details based on the options&lt;br /&gt;
  # selected in the additional information section. We populate the request&lt;br /&gt;
  # based on the selections&lt;br /&gt;
  # Params&lt;br /&gt;
  #   post_req: This contains the entire post_req that needs to be sent to the reputation&lt;br /&gt;
  #     webservice&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def add_additional_info_details(post_req)&lt;br /&gt;
    if params[:checkbox][:expert_grade] == 'Add expert grades'&lt;br /&gt;
      add_expert_grades(post_req.body)&lt;br /&gt;
    elsif params[:checkbox][:hamer] == 'Add initial Hamer reputation values'&lt;br /&gt;
      add_hamer_reputation_values&lt;br /&gt;
    elsif params[:checkbox][:lauw] == 'Add initial Lauw reputation values'&lt;br /&gt;
      add_lauw_reputation_values&lt;br /&gt;
    elsif params[:checkbox][:quiz] == 'Add quiz scores'&lt;br /&gt;
      add_quiz_scores(post_req.body)&lt;br /&gt;
    else&lt;br /&gt;
      flash[:additional_info] = ''&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: prepare_request_body&lt;br /&gt;
  # This method is responsible for preparing the request body in a proper format&lt;br /&gt;
  # to send to the server. It populates the assignment scores and peer review&lt;br /&gt;
  # scores. It also populates the flash messages to send to the next request&lt;br /&gt;
  # It finally sends the prepared request body back to the send_post_request&lt;br /&gt;
  # method.&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def prepare_request_body&lt;br /&gt;
    reputation_web_service_path = URI.parse(WEBSERVICE_CONFIG['reputation_web_service_url']).path&lt;br /&gt;
    post_req = Net::HTTP::Post.new(reputation_web_service_path, { 'Content-Type' =&amp;gt; 'application/json', 'charset' =&amp;gt; 'utf-8' })&lt;br /&gt;
    curr_assignment_id = (params[:assignment_id].empty? ? '754' : params[:assignment_id])&lt;br /&gt;
    assignment_id_list_peers = get_assignment_id_list(curr_assignment_id, params[:another_assignment_id].to_i)&lt;br /&gt;
&lt;br /&gt;
    post_req.body = generate_json_for_peer_reviews(assignment_id_list_peers, params[:round_num].to_i).to_json&lt;br /&gt;
&lt;br /&gt;
    post_req.body[0] = '' # remove the first '{'&lt;br /&gt;
    add_additional_info_details post_req&lt;br /&gt;
    post_req.body.prepend('{')&lt;br /&gt;
    add_flash_messages post_req&lt;br /&gt;
    post_req&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Method: send_post_request&lt;br /&gt;
  # This method calls the prepare_request_body function to get a prepared&lt;br /&gt;
  # request body in proper format to send to the server.&lt;br /&gt;
  # It populates the assignment scores and peer review&lt;br /&gt;
  # scores. It also populates the flash messages to send to the next request&lt;br /&gt;
  # We redirect to the client url to display the results.&lt;br /&gt;
  # Params&lt;br /&gt;
  #&lt;br /&gt;
  # Returns&lt;br /&gt;
  #   nil&lt;br /&gt;
  def send_post_request&lt;br /&gt;
    post_req = prepare_request_body&lt;br /&gt;
    reputation_web_service_hostname = URI.parse(WEBSERVICE_CONFIG['reputation_web_service_url']).host&lt;br /&gt;
    reputation_response = Net::HTTP.new(reputation_web_service_hostname).start { |http| http.request(post_req) }&lt;br /&gt;
    if %w[400 500].include?(reputation_response.code)&lt;br /&gt;
      flash[:error] = 'Post Request Failed'&lt;br /&gt;
    else&lt;br /&gt;
      process_response_body(reputation_response)&lt;br /&gt;
    end&lt;br /&gt;
    redirect_to action: 'client'&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
# Method: calculate_reputation_score&lt;br /&gt;
# This method calculates the reputation scores for each reviewer based on the provided input data.&lt;br /&gt;
# It first parses the input JSON string to extract the submissions and their corresponding scores.&lt;br /&gt;
# Then, it calculates the average weighted grades per reviewer and the delta R values.&lt;br /&gt;
# Next, it calculates the weight prime values based on the delta R values.&lt;br /&gt;
# Finally, it calculates the reputation weights for each reviewer using the weight prime values.&lt;br /&gt;
#&lt;br /&gt;
# Params&lt;br /&gt;
#   - reviews: a two-dimensional array of scores, one row per reviewer (nil marks are skipped)&lt;br /&gt;
#&lt;br /&gt;
# Returns&lt;br /&gt;
#   An array of reputation scores, one score per reviewer, indicating their reputation in the system.&lt;br /&gt;
def calculate_reputation_score(reviews)&lt;br /&gt;
&lt;br /&gt;
  # Initialize arrays to store intermediate values&lt;br /&gt;
  grades = []&lt;br /&gt;
  delta_r = []&lt;br /&gt;
  weight_prime = []&lt;br /&gt;
  weight = []&lt;br /&gt;
&lt;br /&gt;
  # Calculate Average Weighted Grades per Reviewer&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    assignment_grade_average = reviewer_marks_without_nil.sum.to_f / reviewer_marks_without_nil.length&lt;br /&gt;
    grades &amp;lt;&amp;lt; assignment_grade_average&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate delta R&lt;br /&gt;
  reviews.each do |reviewer_marks|&lt;br /&gt;
    reviewer_delta_r = 0&lt;br /&gt;
    # Skip nil values when calculating the sum&lt;br /&gt;
    reviewer_marks_without_nil = reviewer_marks.compact&lt;br /&gt;
    reviewer_marks_without_nil.each_with_index do |grade, student_index|&lt;br /&gt;
      reviewer_delta_r += (grade - grades[student_index]) ** 2&lt;br /&gt;
    end&lt;br /&gt;
    delta_r &amp;lt;&amp;lt; reviewer_delta_r / reviewer_marks_without_nil.length&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate weight prime&lt;br /&gt;
  average_delta_r = delta_r.sum / delta_r.length.to_f&lt;br /&gt;
&lt;br /&gt;
  delta_r.each do |reviewer_delta_r|&lt;br /&gt;
    weight_prime &amp;lt;&amp;lt; average_delta_r / reviewer_delta_r&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Calculate reputation weight&lt;br /&gt;
  weight_prime.each do |reviewer_weight_prime|&lt;br /&gt;
    if reviewer_weight_prime &amp;lt;= 2&lt;br /&gt;
      weight &amp;lt;&amp;lt; reviewer_weight_prime.round(2)&lt;br /&gt;
    else&lt;br /&gt;
      weight &amp;lt;&amp;lt; (2 + Math.log(reviewer_weight_prime - 1)).round(2)&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # Return the reputation weights&lt;br /&gt;
  weight&lt;br /&gt;
end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
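To make the arithmetic above easy to trace by hand, here is a standalone Ruby sketch that mirrors the steps of calculate_reputation_score. The function name (hamer_weights) and the sample scores are illustrative assumptions, not the production method itself, which lives in ReputationWebServiceController.&lt;br /&gt;
&lt;br /&gt;
```ruby
# Illustrative, standalone mirror of the reputation-weight steps above.
# The function name and scores are assumptions for demonstration only.
def hamer_weights(reviews)
  # Step 1: each reviewer's average grade, skipping nil marks
  grades = reviews.map do |marks|
    present = marks.compact
    present.sum.to_f / present.length
  end

  # Step 2: delta R -- mean squared deviation from the averages
  delta_r = reviews.map do |marks|
    present = marks.compact
    squared = present.each_with_index.sum { |grade, i| (grade - grades[i])**2 }
    squared / present.length
  end

  # Step 3: weight prime -- average delta R divided by each reviewer's delta R
  average_delta_r = delta_r.sum / delta_r.length.to_f
  weight_prime = delta_r.map { |d| average_delta_r / d }

  # Step 4: damp weights above 2 logarithmically, then round to 2 places
  weight_prime.map do |w|
    (w <= 2 ? w : 2 + Math.log(w - 1)).round(2)
  end
end

# Two reviewers grading two submissions with symmetric scores:
p hamer_weights([[90, 80], [70, 60]])  # => [1.0, 1.0]
```
&lt;br /&gt;
In this symmetric example both reviewers receive weight 1.0; a reviewer whose marks deviate more from the averages gets a lower weight, and weights above 2 are damped logarithmically.&lt;br /&gt;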
&lt;br /&gt;
== Objective 4: Validate the accuracy of the newly implemented Hamer algorithm ==&lt;br /&gt;
&lt;br /&gt;
We test the newly implemented Hamer algorithm function with our scenarios and verify if they match the expected values. &lt;br /&gt;
&lt;br /&gt;
=== Test Code Snippet ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
describe ReputationWebServiceController do&lt;br /&gt;
    # NOTE: `reviews` (the score matrix) and `EXPECTED` (the expected-weights&lt;br /&gt;
    # JSON) are assumed to be defined elsewhere in the spec file.&lt;br /&gt;
    it &amp;quot;calculates the correct Hamer reputation weights&amp;quot; do&lt;br /&gt;
      weights = ReputationWebServiceController.new.calculate_reputation_score(reviews)&lt;br /&gt;
      keys = [&amp;quot;maxtoall&amp;quot;, &amp;quot;mintoall&amp;quot;, &amp;quot;mediantoall&amp;quot;, &amp;quot;incomplete_review&amp;quot;, &amp;quot;sametoall&amp;quot;, &amp;quot;passing1&amp;quot;, &amp;quot;passing2&amp;quot;, &amp;quot;passing3&amp;quot;]&lt;br /&gt;
      rounded_weights = weights.map { |w| w.round(1) }&lt;br /&gt;
      result_hash = keys.zip(rounded_weights).to_h&lt;br /&gt;
      expect(result_hash).to eq(JSON.parse(EXPECTED)[&amp;quot;Hamer&amp;quot;])&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
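The snippet above relies on a reviews score matrix and an EXPECTED JSON string defined elsewhere in the spec. The zip-and-compare mechanics it uses can be illustrated in isolation; the scenario names below come from the snippet, while the weights and the expected JSON are made-up values for demonstration.&lt;br /&gt;
&lt;br /&gt;
```ruby
require 'json'

# Hypothetical weights as the controller might return them (illustrative values).
keys    = ['maxtoall', 'mintoall']
weights = [2.97, 0.58]

# Round to one decimal and pair each scenario name with its weight,
# exactly as the spec does with keys.zip(rounded_weights).to_h.
result_hash = keys.zip(weights.map { |w| w.round(1) }).to_h

# A made-up EXPECTED fixture holding the same values under the "Hamer" key.
expected = '{"Hamer": {"maxtoall": 3.0, "mintoall": 0.6}}'

p result_hash == JSON.parse(expected)['Hamer']  # => true
```
&lt;br /&gt;
Because the comparison happens on plain hashes, the test is insensitive to key order in the EXPECTED fixture, which keeps the fixture easy to maintain.&lt;br /&gt;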
&lt;br /&gt;
=== Results ===&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
==GitHub Links==&lt;br /&gt;
Link to Expertiza repository: [https://github.com/expertiza/expertiza here]&lt;br /&gt;
&lt;br /&gt;
Link to the forked repository: [https://github.com/Prachit99/expertiza/tree/main here]&lt;br /&gt;
&lt;br /&gt;
Link to pull request: [???/]&lt;br /&gt;
&lt;br /&gt;
Link to the GitHub project page: [https://github.com/users/Prachit99/projects/1 here]&lt;br /&gt;
&lt;br /&gt;
Link to Testing Video: [????]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
1. Expertiza on GitHub (https://github.com/expertiza/expertiza) &amp;lt;br&amp;gt;&lt;br /&gt;
2. The live Expertiza website (http://expertiza.ncsu.edu/) &amp;lt;br&amp;gt;&lt;br /&gt;
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)&lt;/div&gt;</summary>
		<author><name>Npatil2</name></author>
	</entry>
</feed>