CSC/ECE 517 Spring 2024 - E2412. Testing for hamer.rb
This page describes the changes made for the Spring 2024 Program 3: First OSS project E2412. Testing for hamer.rb
Project Overview
Problem Statement
The practice of using student feedback on assignments as a grading tool is gaining traction among university professors and courses. This approach not only saves instructors and teaching assistants considerable time but also fosters a deeper understanding of assignments among students as they evaluate their peers' work. However, there is a concern that some students may not take their reviewing responsibilities seriously, potentially skewing the grading process by assigning extreme scores such as 100 or 0 arbitrarily. To address this issue, the Hamer algorithm was developed to assess the credibility and accuracy of reviewers. It generates reputation weights for each reviewer, which instructors can use to gauge their reliability or incorporate into grading calculations. Our goal here is to test this Hamer algorithm.
Objectives
- Develop code testing scenarios to validate the Hamer algorithm and ensure the accuracy of its output values.
- Verify the correctness of the reputation web server's Hamer values by accessing the URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms.
- Reimplement the algorithm if discrepancies arise in the reputation web server's Hamer values.
- Validate the accuracy of the newly implemented Hamer algorithm.
Files Involved
- reputation_web_server_hamer.rb
- reputation_mock_web_server_hamer.rb
Mentor
- Muhammet Mustafa Olmez (molmez@ncsu.edu)
Team Members
- Neha Vijay Patil (npatil2@ncsu.edu)
- Prachit Mhalgi (psmhalgi@ncsu.edu)
- Sahil Santosh Sawant (ssawant2@ncsu.edu)
Hamer Algorithm
The grading algorithm described in the paper is designed to provide a reward to reviewers who participate effectively by allocating a portion of the assignment mark to the review, with the review mark reflecting the quality of the grading. Here's an explanation of the algorithm:
1. Review Allocation: Each reviewer is assigned a number of essays to grade. The paper suggests assigning at least five essays, with ten being ideal. Assuming each review takes 20 minutes, ten reviews take about 200 minutes, a little over three hours.
2. Grading Process:
- Once the reviewing is complete, grades are generated for each essay and weights are assigned to each reviewer.
- The essay grades are computed by averaging the individual grades from all the reviewers assigned to that essay.
- Initially, all reviewers are given equal weight in the averaging process.
- The algorithm assumes that some reviewers will perform better than others. It measures this by comparing the grades assigned by each reviewer to the averaged grades. The larger the difference between the assigned and averaged grades, the more out of step the reviewer is considered with the consensus view of the class.
- The algorithm adjusts the weighting of the reviewers based on this difference. Reviewers who are closer to the consensus view are given higher weights, while those who deviate significantly are given lower weights.
3. Iterative Process:
- The calculation of grades and weights is an iterative process. Each time the grades are calculated, the weights need to be updated, and each change in the weights affects the grades.
- Convergence occurs quickly, typically requiring four to six iterations before a solution (a "fix-point") is reached.
4. Weight Adjustment:
- The weights assigned to reviewers are adjusted based on the difference between the assigned and averaged grades: reviewers with larger discrepancies receive weights inversely proportional to that difference.
- To prevent excessively large weights, a logarithmic dampening function is applied, allowing weights to rise to twice the class average, with further increases granted only sparingly.
5. Properties:
- The algorithm aims to identify and diminish the impact of "rogue" reviewers who may inject random or arbitrary grades into the peer assessment process.
- By adjusting reviewer weights based on their grading accuracy, the algorithm aims to improve the reliability of the grading process in the presence of such rogue reviewers.
Overall, the algorithm seeks to balance the contributions of different reviewers based on the accuracy of their grading, ultimately aiming to produce reliable grades for each essay in a peer assessment scenario.
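To make the procedure concrete, below is a minimal Ruby sketch of one possible implementation, assuming equal starting weights, mean-squared-error deviations, and the dampening rule w' = 2 + ln(w - 1) once a weight exceeds twice the class average. The function name hamer_weights and all internal details are our own; the actual hamer.rb may differ.

# Minimal sketch of the iterative Hamer computation described above.
# All names here are our own; the real hamer.rb / Peerlogic code may differ.
#
# grades maps reviewer => { submission => score }, e.g.
#   { "passing1" => { "submission1" => 10, "submission2" => 3 }, ... }
def hamer_weights(grades, iterations: 6)
  reviewers = grades.keys
  weights   = reviewers.to_h { |r| [r, 1.0] } # everyone starts with equal weight

  iterations.times do
    # 1. Consensus grade per submission: weighted average of its reviews.
    submissions = grades.values.flat_map(&:keys).uniq
    consensus = submissions.to_h do |s|
      scored  = reviewers.select { |r| grades[r].key?(s) }
      total_w = scored.sum { |r| weights[r] }
      [s, scored.sum { |r| weights[r] * grades[r][s] } / total_w]
    end

    # 2. Each reviewer's mean squared deviation from the consensus.
    deviations = grades.to_h do |r, reviews|
      sq_errors = reviews.map { |s, score| (score - consensus[s])**2 }
      [r, sq_errors.sum / sq_errors.size.to_f]
    end

    # 3. New weight is inversely proportional to deviation, normalised so the
    #    class-average weight is about 1; weights above twice the average are
    #    dampened logarithmically (w' = 2 + ln(w - 1)).
    mean_dev = deviations.values.sum / deviations.size
    weights = deviations.to_h do |r, d|
      w = mean_dev / (d + 1e-9) # small epsilon guards perfect agreement
      w = 2.0 + Math.log(w - 1.0) if w > 2.0
      [r, w]
    end
  end

  weights
end

Consistent with point 3 above, four to six iterations are typically enough for the weights in this sketch to stop changing noticeably.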
Hamer value calculation
Objective 1: Develop code testing scenarios
We constructed a set of reviewers scoring up to 4 submissions each to cover the following test scenarios:
- 3 cases where reviewers give credible scores (passing1, passing2, passing3)
- a case where a reviewer gives the maximum score (10) to all submissions (should be flagged)
- a case where a reviewer gives the minimum score (1) to all submissions (should be flagged)
- a case where a reviewer gives the median score (5) to all submissions (should be flagged)
- a case where a reviewer gives the same score to all submissions (should be flagged)
- cases where reviewers leave some submissions unreviewed (incomplete_review, max_incomplete, min_incomplete)
Object Creation
Below is the Input object for tests that cover all the above scenarios:
INPUTS = {
  "submission1": {
    "maxtoall": 10, "mintoall": 1, "mediantoall": 5,
    "incomplete_review": 4, "max_incomplete": 10,
    "sametoall": 3,
    "passing1": 10, "passing2": 10, "passing3": 9
  },
  "submission2": {
    "maxtoall": 10, "mintoall": 1, "mediantoall": 5,
    "incomplete_review": 2, "max_incomplete": 10, "min_incomplete": 1,
    "sametoall": 3,
    "passing1": 3, "passing2": 2, "passing3": 4
  },
  "submission3": {
    "maxtoall": 10, "mintoall": 1, "mediantoall": 5,
    "sametoall": 3,
    "passing1": 7, "passing2": 4, "passing3": 5
  },
  "submission4": {
    "maxtoall": 10, "mintoall": 1, "mediantoall": 5,
    "max_incomplete": 10, "min_incomplete": 1,
    "sametoall": 3,
    "passing1": 6, "passing2": 4, "passing3": 5
  }
}.to_json
Expected Hamer Values
EXPECTED = {
  "Hamer": {
    "mediantoall": 0.6,
    "passing3": 3.6,
    "passing2": 1.1,
    "passing1": 1.1,
    "maxtoall": 0,
    "mintoall": 0,
    "incomplete_review": 0,
    "sametoall": 0
  }
}.to_json
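Given the hamer_weights sketch above, the expected values can in principle be regenerated offline instead of being computed by hand. The harness below is hypothetical: it pivots the INPUTS JSON into the reviewer-major layout the sketch expects and prints weights rounded to one decimal place for comparison with EXPECTED.

require 'json'

# Hypothetical harness: pivot INPUTS (submission => reviewer => score)
# into reviewer => { submission => score } and print simulated weights.
by_reviewer = Hash.new { |h, k| h[k] = {} }
JSON.parse(INPUTS).each do |submission, reviews|
  reviews.each { |reviewer, score| by_reviewer[reviewer][submission] = score }
end

hamer_weights(by_reviewer).each do |reviewer, weight|
  puts format('%-18s %.1f', reviewer, weight)
end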
Objective 2: Verify the correctness of the reputation web server's Hamer values
We test the original reputation web server's algorithm against our scenarios and verify that the results match the expected values. The Peerlogic server can be accessed via API calls to the URL http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms. It offers two algorithms: the Hamer-peer algorithm and the Lauw-peer algorithm. Our scope for this project is testing Hamer values, since previous work has already established that the Hamer algorithm suits our use case better.
Test Code Snippet
describe "Expertiza" do it "should return the correct Hamer calculation" do uri = URI('http://peerlogic.csc.ncsu.edu/reputation/calculations/reputation_algorithms') response = Net::HTTP.post(uri, INPUTS, 'Content-Type' => 'application/json') expect(JSON.parse(response.body)["Hamer"]).to eq(JSON.parse(EXPECTED)["Hamer"]) end end
Edge Cases & Scenarios
We present these scenarios as possible test cases for an accurately working Peerlogic webservice.
1) Reviewer gives all max scores
2) Reviewer gives all min scores
3) Reviewer completes no review
- Alternative scenario: the reviewer gives max scores even with no inputs
These have not been implemented, as there is no point in testing a system further when the positive flows do not work. However, the code in the Initial Phase section can be used to analytically calculate correct responses for future assertions. Illustrative inputs for these scenarios are sketched below:
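The block below is hypothetical: the reviewer names are our own, and the expected outputs would still need to be derived (for example, with the simulation sketch above) once the web service responds correctly.

# Hypothetical edge-case inputs, in the same shape as INPUTS.
# Scenarios 1 and 2: all_max and all_min score every submission 10 and 0.
# Scenario 3: a "no_review" reviewer simply never appears in any submission.
# credible1/credible2 provide a consensus for the rogue reviewers to deviate from.
EDGE_CASE_INPUTS = {
  "submission1": { "all_max": 10, "all_min": 0, "credible1": 8, "credible2": 7 },
  "submission2": { "all_max": 10, "all_min": 0, "credible1": 4, "credible2": 5 },
  "submission3": { "all_max": 10, "all_min": 0, "credible1": 6, "credible2": 6 },
  "submission4": { "all_max": 10, "all_min": 0, "credible1": 9, "credible2": 8 }
}.to_json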
Coverage
We believe that once our edge cases are implemented against a correctly working Peerlogic service and the assertions pass, test coverage can be measured adequately.
At the moment, test coverage is not a relevant statistic, since neither the positive nor the negative flows function correctly, nor do any of the edge cases.
Conclusion
As a team, we worked out the algorithms and the application and wrote several test scenarios. However, we did not get a chance to work against the web service, since it does not run due to module errors; the error we encountered was "undefined method 'strip'" in the Reputation Web Service Controller. Although it sometimes works on the Expertiza team's side, we were never able to see the web service working. We created test scenarios and wrote Python code to simulate the algorithm.
- 1. In the code segment written to simulate the hamer.rb algorithm as described in "A Method of Automatic Grade Calibration in Peer Assessment" by John Hamer, Kenneth T.K. Ma, and Hugh H.F. Kwong (https://crpit.scem.westernsydney.edu.au/confpapers/CRPITV42Hamer.pdf), we take a list of reviewers and their grades for each assignment reviewed and compute the associated reputation weight. Since the algorithm described in the paper does not specify an initial weight for first-time reviewers, we coded it so that first-time reviewers start with a weight of 1. In addition, this code does not yet carry over weights for reviewers who already have reputation weights; that will be added soon. Also, we followed the algorithm in the paper to the letter, yet the example output values given there did not match what we computed by hand and by code; either we missed something, or the algorithm has since been changed. As we tested against Peerlogic and our mock, the current web service is not correct, since the returned values do not match the expected values, as can be seen in the picture. Resolving this may be what we are supposed to achieve in this project.
- 2. In addition, we found that reputation_web_service_controller.rb is currently broken and needs refactoring. While the client side of the reputation web service page runs, any attempt to submit grades to the reputation web server side results in an error.
- 3. We provided scenarios for future teams to implement once Peerlogic is running correctly.
- 4. We mocked an accurate webservice and showed what the expected JSON should be like.
GitHub Links
Link to Expertiza repository: here
Link to the forked repository: here
Link to pull request: here
Link to Github Project page: here
Link to Testing Video: here
References
1. Expertiza on GitHub (https://github.com/expertiza/expertiza)
2. The live Expertiza website (http://expertiza.ncsu.edu/)
3. Pluggable reputation systems for peer review: A web-service approach (https://doi.org/10.1109/FIE.2015.7344292)