CSC/ECE 517 Fall 2020 - E2080. Track the time students look at other submissions


Introduction

The Expertiza project takes advantage of peer review among students to allow them to learn from each other. Tracking the time that a student spends on each submitted resource is valuable to instructors who want to study and improve the teaching experience. Unfortunately, most peer-assessment systems do not manage the content of students' submissions within the system. They usually allow the authors to submit external links to the submission (e.g., GitHub code or a deployed application), which makes it difficult for the system to track the time that reviewers spend on the submissions.

Problem Statement

Expertiza allows students to peer review the work of other students in their course. To ensure the quality of the peer reviews, instructors would like the ability to track the time a student spends on a review. These metrics need to be tracked and displayed in a way that lets the instructor gain valuable insight into the quality of a review or set of reviews.

Various metrics will be tracked, including:

  1. The time spent on the primary review page
  2. The time spent on secondary/external links and downloadables

Previous Implementations

  1. E1791 was able to implement a tracking mechanism for recording the time spent looking at submissions in the form of links and downloadable files. The time spent on links was tracked by recording when the popup windows used to open them were opened and closed. Downloadable text and image files were displayed on a new HTML page so that the time spent viewing them could be tracked. Each time a submission link or downloadable file is clicked by the reviewer, a new record is created in the database. This causes a large number of database operations, which degrades the performance of the application. In addition, the way the results are displayed is not user-friendly. Apart from these issues, the team provided a good implementation of the feature.
  2. E1872 started with E1791's implementation as their base and tried to display the results in a tabular format. The table included useful statistics, but it was displayed outside the review report, to the right side, where it did not blend in with the review report table. The table is also hard to read because a lot of information is presented in a cluttered manner, and it is hard to map each statistic to its corresponding review. Furthermore, the team did not include any tests.
  3. E1989, the most recent implementation, builds on earlier designs and features a solid UI and thorough tracking of review time. The team started with project E1791 as their base and focused on displaying the results in a user-friendly manner: the results are shown in a new window so that the report does not look cluttered. The issue of excessive database operations remains as future work in their project.

Proposed Solution

  • Following the suggestions of the previous team, E1989, we plan to improve their implementation by reducing the frequency of database queries and insertions. In E1989's current implementation, every time a start time is logged for an Expertiza page, link, or file, a new entry is created in the database. As a result, the submission_viewing_event table grows very rapidly, since it stores a start and end time for each viewing event. Our solution is to save all entries locally on the user's system and write them to the database only once the user presses Save or Submit (a minimal sketch of this batching idea follows this list).
  • Secondly, E1989's implementation has a decent UI that effectively displays the necessary information to the user. We plan to make minor improvements to their UI to increase usability.
  • The previous team also mentioned other issues involving the "Save review after 60 seconds" checkbox feature, which may be looked into if time permits.
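
The sketch below illustrates one way the batched write could look on the Rails side: a single controller action receives every timing entry buffered in the browser and persists it with one bulk insert when the reviewer presses Save or Submit. This is only a minimal sketch; the controller, route, action name, payload shape, and column names (map_id, round, link, start_at, end_at) are assumptions for illustration and are not taken from the existing Expertiza code.

 # app/controllers/submission_viewing_events_controller.rb (hypothetical sketch)
 # Persists all timing entries buffered in the browser's local storage with a
 # single request and a single bulk INSERT, instead of one INSERT per click.
 class SubmissionViewingEventsController < ApplicationController
   def record_timings
     # Assumed payload shape:
     # { timings: [ { map_id: 1, round: 2, link: "https://...",
     #                start_at: "2020-11-01 10:00:00",
     #                end_at:   "2020-11-01 10:05:00" }, ... ] }
     rows = params.require(:timings).map do |t|
       {
         map_id:     t[:map_id],
         round:      t[:round],
         link:       t[:link],
         start_at:   t[:start_at],
         end_at:     t[:end_at],
         created_at: Time.current,
         updated_at: Time.current
       }
     end

     # insert_all (Rails 6+) issues one INSERT for the whole review session;
     # on older Rails, a single transaction wrapping create! calls would be
     # the fallback.
     SubmissionViewingEvent.insert_all(rows) unless rows.empty?
     head :ok
   end
 end

The browser-side counterpart would simply append each start/end pair to local storage as the reviewer opens and closes links, then POST the whole array to an action like this one when the review is saved or submitted.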

Design Pattern

Flowchart

The following is the flowchart presented by E1989 that does a good job outlining the logic for tracking the time students spend reviewing:

Code Changes

TBD

Test Plan

Automated Testing Using RSpec

The main feature that we want to test is whether the intermediate review timings that we plan to store in the browser's local storage are stored accurately. Tests already exist for the total time tracking feature; we will add tests for the storage and retrieval of the intermediate timings.
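
As a rough example, a request spec along the following lines could check that a batch of buffered timings posted in a single request is persisted correctly. The route, parameter names, and model name mirror the hypothetical controller sketch in the Proposed Solution section, so they are assumptions rather than existing Expertiza code.

 # spec/requests/submission_viewing_events_spec.rb (hypothetical sketch)
 require 'rails_helper'

 describe 'Batched submission viewing events', type: :request do
   # Login/session setup is omitted for brevity; a real spec would sign in
   # as a reviewer before posting.
   it 'persists every buffered timing entry from a single request' do
     timings = [
       { map_id: 1, round: 1, link: 'https://github.com/example/repo',
         start_at: '2020-11-01 10:00:00', end_at: '2020-11-01 10:05:00' },
       { map_id: 1, round: 1, link: 'https://example.com/demo',
         start_at: '2020-11-01 10:05:00', end_at: '2020-11-01 10:12:00' }
     ]

     # One POST with all locally buffered entries should create one record
     # per entry, rather than one request per click as in E1989's version.
     expect {
       post '/submission_viewing_events/record_timings', params: { timings: timings }
     }.to change(SubmissionViewingEvent, :count).by(timings.size)
   end
 end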

Manual UI Testing

Our major focus for this project is to change the current implementation to use significantly fewer database operations by storing the intermediate timings in the browser's local storage and writing only the final time spent viewing the submissions to the database. So the main manual UI test is to check whether the total time spent by students is accurately tracked and can be viewed by the instructor.

This can be tested using the following steps:

  1. Log in to Expertiza as a student and go to the assignments tab.
  2. Click on a particular assignment and review the submission by clicking on links and spending some time viewing them.
  3. After reviewing is done, submit the review.
  4. Log out from the student account.
  5. To look at the time spent on the reviews, log in as the instructor.
  6. Click on the assignments tab.
  7. Click on the review report icon beside the specific assignment that you reviewed using the student's account.
  8. You should be able to see the total time spent on the submissions by the student.

Helpful Links

  1. Our Fork

Identified Issues

TBD

Team Information

  1. Luke McConnaughey (lcmcconn)
  2. Pedro Benitez (pbenite)
  3. Rohit Nair (rnair2)
  4. Surbhi Jha (sjha6)

Mentor: Yunkai 'Kai' Xiao (yxiao28)
Professor: Dr. Edward F. Gehringer (efg)

References

  1. Expertiza on GitHub
  2. RSpec Documentation