CSC/ECE 517 Spring 2013/Final Project E730
E370. Reputation Integration
- Contact Info: Mark Hall (mlhall3@ncsu.edu), Dan Howard (drhowar5@ncsu.edu), Eric Lumpkin (eblumpki@ncsu.edu), Ashray Nagaraju (ashray@ncsu.edu)
Introduction
Purpose
In Expertiza, students are able to grade each other’s work. This is accomplished through a system of peer reviews, and of subsequently reviewing those reviews. One major problem with this system is that although reviews are not all equal in quality, they are all weighted equally.
Some students are better at reviewing than others, and some students have better intentions than others when it comes to writing reviews. To avoid unfairly rewarding or penalizing students, an algorithm has been implemented that gives each review an appropriate weight based upon its merit. The algorithm checks how close the score a reviewer gives is to the average score given by all reviewers, and uses this to determine how “good” a reviewer is.
The implemented algorithm is based upon Hamer’s algorithm (http://crpit.com/abstracts/CRPITV42Hamer.html). It has not been put into production for two main reasons: first, it was not adequately tested; second, it lacks a user-friendly interface. Our group is tasked with providing adequate testing and a good user interface.
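To make the closeness-to-average idea concrete, a minimal sketch is shown below. It is illustrative only: it is not Hamer’s full iterative algorithm (which recomputes the consensus using the weights until they converge), and the method and variable names are hypothetical rather than taken from Expertiza.

# Illustrative sketch only; names are hypothetical, not Expertiza's code.
# Weight a reviewer by how closely his or her scores track the average
# score that each reviewed submission received.
def reviewer_weight(reviewer_scores, average_scores)
  # reviewer_scores: { submission_id => score given by this reviewer }
  # average_scores:  { submission_id => average score from all reviewers }
  squared_deviations = reviewer_scores.map do |submission_id, score|
    (score - average_scores[submission_id])**2
  end
  mean_squared_deviation =
    squared_deviations.inject(0.0) { |sum, d| sum + d } / squared_deviations.size
  # A reviewer close to the consensus gets a weight near 1;
  # an outlier's weight approaches 0.
  1.0 / (1.0 + mean_squared_deviation)
end

In this simplified form, a reviewer whose scores match the class average exactly receives a weight of 1, and the weight falls toward 0 as the reviewer’s scores drift away from the consensus.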
Problem Definition
A previously developed algorithm (Hamer’s algorithm) assigns an appropriate weight to the reviews written by any given reviewer. The weight associated with a reviewer is known as his or her reputation.
Write tests for the previously developed algorithm used to determine and assign a reviewer’s reputation score.
Develop a user-friendly interface whereby an instructor can view the results of, and implement, the weighting algorithm. The interface should allow an instructor to:
■ view a reviewer’s reputation score
■ enable or disable reviews that are deemed by the instructor to be outliers
■ choose between the weighted or unweighted grading system
■ click on an ‘information’ icon in any fields that need them, such as ‘reviewer inaccuracy’
Proposed Design
○ Add an ‘enabled’ field for each review. This field should default to ‘enabled’, and should have an interface that allows an instructor to disable it. This will allow the instructor to disable ‘rogue’ or ‘outlier’ reviews. (A schema sketch follows this list.)
○ Implement a toggle that allows an instructor to use either the equally weighted or reputation-based grading systems. This will allow the instructor to choose which weighting system to use at the assignment level. All reviews for a given assignment will use the same weighting system.
○ Use a tool tip to display a given user’s reputation when the instructor hovers over the reviewer’s name with the mouse pointer.
○ Add Unit/Functional test cases to test the functionality of the algorithm.
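As a rough sketch of the schema changes this design implies, a Rails migration along the following lines could add the two new flags. The table and column names here are assumptions for illustration, not Expertiza’s actual schema.

# Hypothetical migration; table and column names are assumptions,
# not Expertiza's actual schema.
class AddWeightingFlags < ActiveRecord::Migration
  def change
    # Per-review flag so instructors can disable 'rogue'/'outlier' reviews.
    add_column :responses, :enabled, :boolean, default: true

    # Per-assignment flag choosing equal vs. reputation-based weighting.
    add_column :assignments, :use_weighted_grading, :boolean, default: false
  end
end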
Use Cases
Use-case: 1
Name: Turn reputation-based weighting on and off for an assignment’s review calculations by use of an ‘enabled’ field/toggle.
Actors: Instructor/Admin
Other participants: None
Precondition: The weighted grading system is not enabled.
Primary Sequence:
1. Login to Expertiza (Admin or an equivalent person).
2. Click on Create an assignment.
3. Enable weighted grading for the particular assignment by switching on the ‘enabled’ toggle.
4. Save the changes.
Postcondition: The weighted grading system is enabled for the created assignment.
Use-case: 2
Name: Display a reviewer’s reputation by hovering over a field.
Actors: Instructor/Admin
Other participants: None
Precondition: Mouse is not over a score field.
Primary Sequence:
1. Login to Expertiza (Admin or an equivalent person).
2. Click on Create an assignment.
3. Hover the mouse over a score field.
Postcondition: The reviewer’s reputation will be displayed in a tool tip.
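One lightweight way to realize this hover behavior is sketched below using a Rails view helper that puts the reputation into the HTML title attribute, which browsers render as a tool tip. The attribute names (reviewer.name, reviewer.reputation) are assumptions for illustration, not Expertiza’s actual model interface.

# Hypothetical view helper; attribute names are assumptions.
# Wrapping the reviewer's name in a span with a title attribute makes the
# browser show the reputation as a tool tip on hover.
def reviewer_name_with_reputation(reviewer)
  content_tag(:span, reviewer.name,
              title: "Reputation: #{reviewer.reputation.round(2)}")
end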
Use-case: 3
Name: Disable a rogue review.
Actors: Instructor/Admin
Other participants: None
Precondition: A review has been identified as rogue by the instructor.
Primary Sequence:
1. Login to Expertiza (Instructor or an equivalent person).
2. Navigate to the rogue review.
3. Toggle the review to be disabled.
4. Save the changes.
Postcondition: The rogue review is not included in the average weighted score.
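For illustration, the score calculation could skip disabled reviews as sketched below; the method and attribute names (enabled?, score, reviewer.reputation) are assumptions rather than Expertiza’s actual interface.

# Illustrative sketch only; method and attribute names are assumptions.
# Disabled ('rogue') reviews are excluded before the weighted average
# of the remaining reviews' scores is taken.
def weighted_score(reviews)
  active = reviews.select(&:enabled?)
  total_weight = active.inject(0.0) { |sum, r| sum + r.reviewer.reputation }
  return nil if total_weight.zero?
  weighted_sum = active.inject(0.0) { |sum, r| sum + r.reviewer.reputation * r.score }
  weighted_sum / total_weight
end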
Test Cases
● Ensure that the weighting algorithm returns the correct score for a range of values. (An automated test sketch follows this list.)
● Ensure that switching between equal weighting and reputation-based weighting calculates and displays the score correctly.
● Ensure that once rogue reviews are detected and excluded, the scores change appropriately.
● Ensure that the correct reputation score is displayed in a tooltip when hovering over a score.
● Ensure that only the instructor can view the reputation scores for students.
● Ensure that only the instructor can mark a review as rogue.
● Ensure that weighted scores are displayed only after the deadline; otherwise, unweighted scores are displayed.
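To show how the first of these checks might be automated, a minimal RSpec example is sketched below. It assumes the hypothetical reviewer_weight method from the earlier sketch, and the fixture values are invented for illustration; none of this is taken from Expertiza’s test suite.

# Hypothetical RSpec sketch for the first test case above.
# reviewer_weight and the fixture values are assumptions, not Expertiza code.
describe 'reputation weighting' do
  it 'gives full weight to a reviewer who matches the average exactly' do
    average_scores  = { 1 => 80.0, 2 => 90.0 }
    reviewer_scores = { 1 => 80.0, 2 => 90.0 }
    expect(reviewer_weight(reviewer_scores, average_scores)).to eq(1.0)
  end

  it 'gives a lower weight to a reviewer who deviates from the average' do
    average_scores  = { 1 => 80.0, 2 => 90.0 }
    reviewer_scores = { 1 => 40.0, 2 => 100.0 }
    expect(reviewer_weight(reviewer_scores, average_scores)).to be < 1.0
  end
end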