CSC/ECE 517 Fall 2019 - E1993 Track Time Between Successive Tag Assignments


== ''' Introduction ''' ==

The Expertiza project takes advantage of peer review among students to allow them to learn from each other. Tracking the time that a student spends on each submitted resource gives instructors meaningful data for studying and improving the teaching experience. Unfortunately, most peer-assessment systems do not manage the content of students’ submissions within the system; they usually allow authors to submit external links to the submission (e.g., GitHub code or a deployed application), which makes it difficult for the system to track the time that reviewers spend on the submissions.

== ''' Current Implementation ''' ==

Expertiza does not yet have such a feature. Three teams have worked on this in the past, but their pull requests were not merged because of various problems.

#E1705 identified how to track the active time of windows opened from the submitted links. (Wiki)
#E1791 provided detailed insights into how they planned to track the time a student takes to view a submission, along with possible edge cases. They also implemented popups and figured out a way to open downloadable files. However, the details were rendered in an unfriendly manner, so the pull request was not merged. (Wiki)
#[https://github.com/expertiza/expertiza/pull/1309 E1872] tried to solve this by incorporating the statistics into the review reports page, but its UI made the page cluttered and unfriendly. Further, it was hard to identify which statistic belonged to which review, and there were almost no tests. ([http://wiki.expertiza.ncsu.edu/index.php/CSC/ECE_517_Fall_2018/E1872_Track_Time_Students_Look_At_Other_Submissions Wiki])

== ''' Project Description ''' ==

CSC/ECE 517 classes have helped us by “tagging” review comments over the past two years. This is important for us, because it is how we get the “labels” that we need to train our machine-learning models to recognize review comments that detect problems, make suggestions, or are considered helpful by the authors. Our goal is to help reviewers by telling them how helpful their review will be before they submit it.

Tagging a review comment usually means sliding 4 sliders to one side or the other, depending on which of four attributes it has. But can we trust the tags that students assign? In past semesters, our checks have revealed that some students appear not to be paying much attention to the tags they assign: the tags seem to be unrelated to the characteristic they are supposed to rate, or they follow a set pattern, such as alternating one tag “yes”, then one tag “no”.

Studies on other kinds of “crowdwork” have shown that the time spent between assigning each label indicates how careful the labeling (“tagging”) has been. We believe that students who tag “too fast” are probably not paying enough attention, and we want to set their tags aside to be examined by course staff and researchers.
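As an illustration of the timing idea described above, the sketch below computes the elapsed time between successive tag assignments and flags a tagging session whose median interval is suspiciously short. The class name, the assumption that each tag record exposes a <code>created_at</code> timestamp, and the five-second threshold are all hypothetical placeholders, not the project's final design.

<pre>
# Hypothetical sketch: flag tag sequences that were assigned "too fast".
# Assumes each tag record exposes a created_at timestamp; the 5-second
# threshold is an illustrative placeholder, not a tuned value.
class TagTimingAnalyzer
  FAST_TAG_THRESHOLD_SECONDS = 5.0

  # tags: collection of tag records belonging to one reviewer
  def initialize(tags)
    @timestamps = tags.map(&:created_at).sort
  end

  # Seconds elapsed between each pair of successive tag assignments
  def intervals
    @timestamps.each_cons(2).map { |earlier, later| later - earlier }
  end

  # Median interval; nil when fewer than two tags were assigned
  def median_interval
    return nil if intervals.empty?
    sorted = intervals.sort
    mid = sorted.length / 2
    sorted.length.odd? ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2.0
  end

  # True when the median time between tags suggests inattentive tagging
  def suspiciously_fast?
    median_interval && median_interval < FAST_TAG_THRESHOLD_SECONDS
  end
end
</pre>

A helper along these lines could be invoked from a report view or a rake task to set fast-tagged reviews aside for course staff and researchers, as described above.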

== ''' Proposed Approach ''' ==

== ''' Proposed Test Plan ''' ==

=== ''' Automated Testing Using RSpec ''' ===
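No automated tests exist in this revision yet. As a hedged sketch only, an RSpec example for the hypothetical <code>TagTimingAnalyzer</code> above might look like the following; the class, its API, and the timestamps are fabricated purely for illustration.

<pre>
# Hypothetical RSpec sketch for the illustrative TagTimingAnalyzer above.
# None of this is merged Expertiza code.
require 'rails_helper'

describe TagTimingAnalyzer do
  # Minimal stand-in for a tag record with a created_at timestamp
  FakeTag = Struct.new(:created_at)

  it 'flags a reviewer whose tags arrive only a second apart' do
    start = Time.zone.parse('2019-11-10 17:00:00')
    tags = (0..3).map { |i| FakeTag.new(start + i) } # 1-second gaps
    expect(described_class.new(tags)).to be_suspiciously_fast
  end

  it 'does not flag a reviewer who spends time between tags' do
    start = Time.zone.parse('2019-11-10 17:00:00')
    tags = (0..3).map { |i| FakeTag.new(start + i * 30) } # 30-second gaps
    expect(described_class.new(tags)).not_to be_suspiciously_fast
  end
end
</pre>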

=== ''' Coverage ''' ===

=== ''' Manual UI Testing ''' ===

== ''' Our Work ''' ==

The code we created can be found below.

The project can be run locally by cloning the GitHub repository and then running the following commands in order.

<pre>
bundle install
rake db:create:all
rake db:migrate
rails s
</pre>

== ''' Team Information ''' ==

#Anmol Desai
#Dhruvil Shah
#Jeel Sukhadia
#YunKai "Kai" Xiao

Mentor: Mohit Jain

== ''' References ''' ==

#[https://github.com/expertiza/expertiza Expertiza on GitHub]
#[https://rspec.info/ RSpec Documentation]