CSC/ECE 517 Fall 2017/E17A7 Allow Reviewers to bid on what to review

From Expertiza_Wiki

This page describes project E17A7, one of several projects to allow Expertiza to support conferences (which are currently unsupported). Specifically, it involves adding the ability for conference paper reviewers to bid on what they want to review. The members of this project are:

Leiyang Guo (lguo7@ncsu.edu)

Bikram Singh (bsingh8@ncsu.edu)

Navin Venugopal (nvenugo2@ncsu.edu)

Sathwick Goparapu (sgopara@ncsu.edu)

Introduction

Expertiza is an open-source project built with Ruby on Rails. It is software primarily for creating reusable learning objects through peer review, and it also supports team projects. Expertiza allows the creation of instructor and student accounts. Instructors can post projects (student learning objects), which students can view and work on; these can later be peer-reviewed by other students.

Background of the project

In the existing Expertiza functionality, bidding is available only for students bidding on project topics in a course. The instructor posts a list of project topics, and each student (or each team, if some students have already formed one) posts a preference list of the topics (s)he wants to work on. The bidding algorithm then assigns topics with the following properties:

  • Students not in a team who have similar preferences are grouped into a project team, and the shared preference becomes the topic assigned to that team. For existing teams, the topic is assigned according to the team's common preference list
  • Each team is assigned only one topic
  • Each topic (if assigned) is assigned to a maximum of one team

Description of project

This project is not responsible for adding code to support a conference itself. Rather, we are interested in the bidding algorithm used for conferences, which differs significantly from the project-topic bidding algorithm explained above.

For the purposes of this project, we assume there are several reviewers in a conference who review papers proposed for presentation, and that the full list of proposed papers is available.

Then the basic working of the project assumes:

  • Before the bidding close deadline, reviewers submit a list of papers they wish to review.
  • After the bidding deadline, the algorithm assigns papers to reviewers to review, such that:
    • Each paper (if assigned) is assigned to a maximum of R reviewers (here R represents some constant)
    • Each reviewer is assigned a maximum of P papers to review (here P represents some constant)
    • Assignment of papers is individual
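These caps can be illustrated with a minimal greedy pass (a hypothetical sketch, not Expertiza code; all names are ours):

```ruby
# Greedily honor bids while enforcing the caps: at most r_cap reviewers per
# paper and at most p_cap papers per reviewer. `bids` maps reviewer_id to an
# ordered list of the paper_ids that reviewer bid on.
def assign_papers(bids, r_cap, p_cap)
  reviews_per_paper   = Hash.new(0)
  papers_per_reviewer = Hash.new(0)
  assignments = Hash.new { |h, k| h[k] = [] }

  bids.each do |reviewer, papers|
    papers.each do |paper|
      break if papers_per_reviewer[reviewer] >= p_cap # reviewer is full
      next  if reviews_per_paper[paper]     >= r_cap  # paper already has R reviewers
      assignments[reviewer] << paper
      reviews_per_paper[paper]      += 1
      papers_per_reviewer[reviewer] += 1
    end
  end
  assignments
end
```

With R = 1 and P = 2, two reviewers bidding on the same two papers would result in the first reviewer getting both and the second getting none; a real implementation would want a fairer tie-breaking rule.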

Project Requirements

In this section we discuss the problem statement, then discuss the existing code and possible changes required.

Problem Statement

  • To take the existing bidding code in Expertiza (which is meant for students bidding on project topics) and make it callable either for bidding on topics or for bidding on submissions to review.
  • A possible extension is to add other methods, like a "late reviewer registration" method, in which each new reviewer is assigned one of the least-reviewed submissions so far. Methods like these require an extension of the basic working of the project.
  • The matching algorithm is currently not very sophisticated. Top trading cycles is implemented in the web service (though it is currently not used by bidding assignment), and could be adapted to this use.

In subsequent discussion with the mentor, it was concluded that the two bidding situations are quite different, so it was decided to keep them separate, at least initially. The third requirement was also modified: first implement conference bidding with any algorithm, and if time permits, switch to a better algorithm such as top trading cycles.

Current Project Aims

  • To develop code such that both applications (bidding for project teams and bidding for conference paper reviews) use the same code
  • To improve the algorithm for calculating the score for a particular project topic/conference paper review assigned to a project team/conference reviewer

We want to state here that we are not responsible for developing any conference features. Specifically, this means we are not changing any UI features; we will rely primarily on the UI changes made by E17A5.

Existing Algorithm

Problem

The existing algorithm solves the following problem:

To form teams of students of maximum size M and then assign topics (there are N topics) based on preference lists submitted either by individual students or by teams (complete or otherwise). A preference list can contain a minimum of 1 and a maximum of L topics. Topics are then allotted to teams such that each team gets a unique topic and each topic is assigned to at most one team.

Functionality

The topics are assigned to project teams in 2 steps:

  • Making teams: This is done using k-means clustering and a weighting formula that favors increasing overall student satisfaction and adding members until the maximum allowable team size is reached. You can read about it in detail here.
    • The basic cost function used in the algorithm is: C(i,j) = 1/N * ∑ [ M - S(i) ], where i varies from 1 to N
    • The algorithm calculates weights for every user not in a team or part of an incomplete team. It then assigns teams using the weights by plotting a graph for each topic, as follows:
      • Draw a graph for every topic j, with the priority position i on the X axis and the weight on the Y axis.
      • Use hierarchical k-means to select teams such that all students in a team are close to each other in the graph above (preferably toward the left), with a minimum of 1 and a maximum of M students per team.
    • The Algorithm is as follows:
  
      For every topic j :
         For every position i that topic j can be placed in (any) preference list:
              Let S(i) = number of students who selected topic j at position i of their (individual) preference list
              B(i) = 1/S(i)
              Calculate C(i,j)                
              Weight(i,j) = B(i) / C(i,j)
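For illustration, the weight computation in this loop can be transcribed into Ruby as follows (a sketch with our own names; positions are 0-indexed here, and counts[i] plays the role of S(i)):

```ruby
# Illustrative transcription of the weight formula above (hypothetical names).
# counts[i] holds S(i): how many students placed this topic at position i;
# n_topics is N and max_team_size is M.
def topic_weights(counts, n_topics, max_team_size)
  # C(i,j) = (1/N) * sum over i of [M - S(i)]; it is the same for all positions
  c = counts.sum { |s| max_team_size - s }.to_f / n_topics
  counts.each_with_index.map do |s, i|
    next [i, nil] if s.zero? # no student chose this position
    b = 1.0 / s              # B(i) = 1 / S(i)
    [i, b / c]               # Weight(i,j) = B(i) / C(i,j)
  end.to_h
end
```

For example, with N = 4 topics, M = 3, and S = [2, 1], the cost C is (1 + 2)/4 = 0.75, giving weights 2/3 for position 0 and 4/3 for position 1.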
 
  • Assigning topics to teams: This is implemented in a single line that compares bid preferences to allot topics.


Use Case Diagram

Actors:

  • Conference Reviewer: Submits a preference list of papers and reviews assigned papers
  • Conference Administrator: Responsible for the assignment of topics to the reviewers

Scenario: Reviewers submit a list of preferred topics, and the administrator runs the assignment process. For our project, the main modifications concern Use Cases 4 and 5.



Choose and submit preference to be reviewed

  • Use Case Id: 1
  • Use Case Description: Participants choose their preferences for conference review topics and submit them
  • Actors: Participants
  • Pre Conditions: Conference papers have been submitted, and the participants are eligible to review
  • Post Conditions: Conference committee members can view participants' preferences and run the bidding algorithm on them

Saving bidder information into database

  • Use Case Id: 2
  • Use Case Description: Bidding preferences and related participant information are processed and saved to the database
  • Actors: None
  • Triggered by: Use Case 1
  • Pre Conditions: participants' preferences have been submitted
  • Post Conditions: the information can be retrieved and used by the bidding algorithm

View list of topics

  • Use Case Id: 3
  • Use Case Description: Participants can view the list of topics available for conference paper review
  • Actors: Participants

Run bidding process

  • Use Case Id: 4
  • Use Case Description: Committee members can run the bidding algorithm in the application to help assign conference paper topics to participants
  • Actors: Conference Committee
  • Pre Conditions: preferences must be submitted by participants
  • Post Conditions: the bidding result can be used for paper assignment

Choosing and assigning paper to bidders

  • Use Case Id: 5
  • Use Case Description: System assigns participants to conference paper topics according to bidding result
  • Actors: None
  • Triggered by: Use Case 4
  • Pre Conditions: bidding algorithm has run and result has been returned
  • Post Conditions: Participants can view the topics assigned to them

Change assignment of review using CRUD actions

  • Use Case Id: 6
  • Use Case Description: Conference committee members can change assignment result manually
  • Actors: Conference Committee
  • Pre Conditions: the topic assignment has been completed
  • Post Conditions: changes to the bidding result are visible to participants and other committee members

Data Flow Diagram

Below is the data flow diagram for the process flows of the project. It shows the bidding process that we propose to use for conference paper review assignment.

Design

In the following subsections, we discuss the problem, proposed design and code. We are trying to combine the code so that it can be used for both the project topic bidding as well as the conference paper review bidding.

The Combined Problem

The combined problem statement is as follows: Given a list of people (students or reviewers) and also a list of N items to bid on (project topics or conference paper reviews), we require the following:

  • To form teams of Maximum size M
  • Based on preference lists submitted by people/teams containing a minimum of 1 and a maximum of L items, to allot items to teams such that each team gets at most P items and each item (being bid on) is assigned to at most R teams.

Points of Discussion

We note the differences:

  • For bidding on assignment topics, M is generally greater than 1, but P = 1 and R = 1. The items being bid on are the project topics.
  • For bidding on conference paper reviews, M = 1, but P and R are generally greater than 1. The items being bid on are the conference paper reviews.

We note that a team of 1 person makes little sense, but we still implement it this way so that the code is compatible with both applications.
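One way to capture this compatibility in code is a shared parameter object, so the two call sites differ only in their (M, P, R) values (a sketch; the names and example values below are ours, not Expertiza's):

```ruby
# Hypothetical shared configuration for the two bidding applications.
BidConfig = Struct.new(:max_team_size, :items_per_team, :teams_per_item)

# Project-topic bidding: teams of up to M = 4, one topic per team (P = R = 1).
TOPIC_BIDDING = BidConfig.new(4, 1, 1)

# Conference-review bidding: "teams" of one reviewer (M = 1), with example
# caps of P = 3 papers per reviewer and R = 2 reviewers per paper.
CONFERENCE_BIDDING = BidConfig.new(1, 3, 2)
```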

Proposed Design

Keeping the existing algorithm and the project requirements in mind, we decided to divide the code into 2 parts:

  • Part A: Make teams for people not in a team (and if applicable, complete incomplete teams)
  • Part B: Assign topics to teams

We note that Part A has been implemented on a web service independent of Expertiza. The algorithm explained above applies only to Part A, and it is fairly sophisticated, so we keep the existing code as-is for this part. We simply call the web service, providing the list of people, their bidding preferences, and the max_team_size.

Part B is implemented in a single line and is quite simple, but it cannot be used for conference paper review assignment. Hence we completely replace this section with a new algorithm.

Proposed Algorithm for Part B

Let N = Number of topics
    B = Base score
    t = Total number of topics a user prefers


For every topic i:

    for every user j:
        den = find_den   (den depends on t, the number of topics user j prefers)
        if (user has preferred this topic)
             r = priority(topic)
             num = (N + 1 - r) * B * N
             score[topic][user] = num/den + B
        if (user has preferred no topic)
             score[topic][user] = B
        if (user has not preferred this topic)
             num = B * N
             score[topic][user] = B - num/den

Here find_den = ∑ [ N + 1 ] - ∑ [ k ], where the iterating variable k varies from 1 to t; this simplifies to find_den = t(N + 1) - t(t + 1)/2.
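As a sanity check, this scoring scheme can be transcribed directly into Ruby (a sketch with our own names, not the final implementation):

```ruby
# Sketch of the Part B scoring rules above (hypothetical names).
# prefs maps topic_id => rank r (1 = most preferred), n is the number of
# topics, and b is the base score.
def bid_score(topic, prefs, n, b)
  return b if prefs.empty?              # user preferred no topic
  t = prefs.size
  den = t * (n + 1) - t * (t + 1) / 2.0 # find_den = sum(N+1) - sum(k), k = 1..t
  if prefs.key?(topic)
    r = prefs[topic]
    b + (n + 1 - r) * b * n / den       # preferred topic: boosted by rank
  else
    b - b * n / den                     # non-preferred topic: penalized
  end
end
```

For example, with n = 4 topics, base score b = 1, and a user who ranked topic A first and topic B second, find_den is 7, so topic A scores 1 + 16/7 while an unranked topic scores 1 - 4/7.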

Changing Existing Files

This sub section details what is to be changed in the existing code.

Files to be changed

  • lottery_controller.rb

Current Code

This is the current code for bidding assignment. It first fetches the target assignment, then all topics and teams for that assignment. For each team it records the priority of each team-topic pair according to user preferences and appends it to a priority-information list. It then sends the priority information, along with the maximum team size, to the web service, which rearranges the priority order according to the top-trading-cycles algorithm. The returned result is passed to create_new_teams_for_bidding_response for bidding on topics with preferences, while run_intelligent_bid handles the leftover assignment as well as cleanup.

 def run_intelligent_assignment
   priority_info = []
   assignment = Assignment.find_by(id: params[:id])
   topics = assignment.sign_up_topics
   teams = assignment.teams
   teams.each do |team|
     # grab student id and list of bids
     bids = []
     topics.each do |topic|
       bid_record = Bid.find_by(team_id: team.id, topic_id: topic.id)
       bids << (bid_record.nil? ? 0 : bid_record.priority ||= 0)
     end
     team.users.each { |user| priority_info << { pid: user.id, ranks: bids } if bids.uniq != [0] }
   end
   data = { users: priority_info, max_team_size: assignment.max_team_size }
   url = WEBSERVICE_CONFIG["topic_bidding_webservice_url"]
   begin
     response = RestClient.post url, data.to_json, content_type: :json, accept: :json
     # store each summary in a hashmap and use the question as the key
     teams = JSON.parse(response)["teams"]
     create_new_teams_for_bidding_response(teams, assignment)
     run_intelligent_bid(assignment)
   rescue => err
     flash[:error] = err.message
   end
   redirect_to controller: 'tree_display', action: 'list'
 end

Proposed Code

We will then modify the run_intelligent_assignment function accordingly.

Additional Changes

With this change, we will have to add a new function: run_conference_bid(teams, assignment), with pseudo-code as shown below:

 def run_conference_bid teams, assignment, topic_per_team, team_per_topic
   set topic_set to false;
   for each team in teams
     append the team in team_list
     retrieve bidding ranks of each team
     sum all rank values in list
     for each topic priority/rank value in ranks list
       if topic_set is false
          append topic_id to topic_list
       end if
       calculate percentage of the rank value in comparison to sum of rank values 
       if sum is 0 
         change percentage to 1/length_of_rank_list
       end if
       score rank value by doing (percent_rank_values *(number_of_topics^2) + (number_of_topics^2) / length_of_rank_list)
        append [team_id, topic_id, score] to the team_scores list
      end for
      set topic_set to true
    end for
    sort team_scores by score in descending order
   for score in team_scores
     if team not in team_list or topic not in topic_list
       remove score
       continue to next iteration
     end if
     append topic_id to topic_assignment list
      if number of occurrences of current topic in topic_assignment is same as team_per_topic
       remove this topic_id from topic_list
     end if
     append team_id to team_assignment list
      if number of occurrences of current team in team_assignment is same as topic_per_team
       remove this team_id from team_list
     end if
     append topic_id and team_id pair to team_bids
   end for
   save sign up information stored in team_bids to database
 end def
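A compact, self-contained Ruby sketch of this pseudo-code (the data shapes and names are ours; the real function would also save the resulting sign-ups to the database, and we follow the pseudo-code in treating a larger share of the rank sum as a higher score):

```ruby
# bids maps team_id => { topic_id => rank }; a rank of 0 means "no bid".
def conference_assign(bids, topics_per_team, teams_per_topic)
  n = bids.values.first.size # number of topics
  scores = []
  bids.each do |team, ranks|
    sum = ranks.values.sum
    ranks.each do |topic, rank|
      pct = sum.zero? ? 1.0 / ranks.size : rank.to_f / sum
      scores << [team, topic, pct * n**2 + n**2 / ranks.size.to_f]
    end
  end

  team_count, topic_count = Hash.new(0), Hash.new(0)
  pairs = []
  scores.sort_by { |_, _, s| -s }.each do |team, topic, _|
    next if team_count[team]   >= topics_per_team # team already has P topics
    next if topic_count[topic] >= teams_per_topic # topic already has R teams
    pairs << [team, topic]
    team_count[team]   += 1
    topic_count[topic] += 1
  end
  pairs
end
```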

Test Plan

Manual Testing

  • UI testing of the implemented functionality to be done.
   1. Log in as a participant/student
   2. Go to the assignment
   3. Go to Other's works and check the preferred conference paper topics to bid on reviewing
   4. Click submit and wait for the bidding deadline
   5. After the deadline passes, go back to the assignment and click on Other's work to view the topics assigned for reviewing

Automated Test Cases

  • TDD and feature test cases are to be written. For instance, we would write RSpec tests for the cases below:
    • Test the score calculation: check that it gives the right score distribution when no preference is given, when one preference is given, and when multiple preferences are given;
    • Test preference-level matching: check that the algorithm matches first, second, third, and subsequent preferences and evaluates them correctly;
    • Test whether teams per topic or topics per team can exceed the required number, by assigning many teams the same first preference and by assigning many teams multiple preferences (preferences per team > number of topics / 2);
  • For the RSpec tests, we will have to create objects for the team, assignment, and ranks. A sample team object in JSON format would be:
 {"users":[{"ranks":[1,0,2,3], "pid":1023}],"max_team_size":1}
  • The objects would be generated inside the spec file lottery_controller_spec.rb using model object initializers;
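For illustration, a payload matching the sample object could be built like this (the helper name is ours):

```ruby
require 'json'

# Build the JSON payload sent to the bidding web service; user_ranks maps
# pid => ranks array. (Hypothetical helper, matching the sample object above.)
def bidding_payload(user_ranks, max_team_size)
  users = user_ranks.map { |pid, ranks| { ranks: ranks, pid: pid } }
  { users: users, max_team_size: max_team_size }.to_json
end

bidding_payload({ 1023 => [1, 0, 2, 3] }, 1)
# => '{"users":[{"ranks":[1,0,2,3],"pid":1023}],"max_team_size":1}'
```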

Edge cases

  • Case 1: No reviewer submits a list of preferred topics to review
  • Case 2: All reviewers submit exactly the same list of topics to review.