CSC/ECE 517 Spring 2021 - E2100. Tagging report for students

Revision as of 01:52, 16 March 2021 by Smdupor (talk | contribs)

This page details project documentation for the Spring 2021 "E2100 Tagging report for students" project, which aims to assist students in finding and completing incomplete "review tags" on an assignment using a dynamically generated heat grid.


Introduction to Expertiza

Expertiza is an open-source Rails application used by instructors and students to create assignments and submit peer reviews. Expertiza allows instructors to create and customize assignments, create a list of topics students can sign up for, and have students work in teams and then review each other's assignments at the end. The source code of the application can be cloned from GitHub.

About Review Tagging

Review "tags" are a form of feedback in Expertiza whereby students "tag" (classify) text from peer reviews according to parameters specific to each tag deployment. The parameters can include helpfulness, positivity, suggestions, whether a review offered mitigation, and others, depending on the Answer Tag Deployment. These labeled data are then made available to Expertiza researchers for use in developing Natural Language Processing (NLP) / Machine Learning (ML) algorithms. Tagging is only collected for reviews whose text is of sufficient length to be useful as labeled data for NLP research.

Problem Statement

It can be difficult for students to find a tag they missed on the Team View page. In addition, other teams are working on ML algorithms to "pre-tag" as many reviews as possible, which leads to a granular mix of completed and incomplete tags. For example, an assignment with two rounds of reviewing, ten questions per review, twelve reviews, and a five-parameter tag deployment could present anywhere from zero to 1,200 tag prompts for a single student to complete.
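As a quick sanity check on that upper bound, the maximum number of prompts is simply the product of the four factors. The helper below is purely illustrative (the function name and parameters are assumptions for this example, not part of the Expertiza codebase):

```javascript
// Maximum number of tag prompts a student could face:
// rounds × questions per review × reviews × tag parameters per question.
// Illustrative helper only; not actual Expertiza code.
function maxTagPrompts(rounds, questionsPerReview, reviews, tagParams) {
  return rounds * questionsPerReview * reviews * tagParams;
}

// The example from the text: two rounds, ten questions per review,
// twelve reviews, and a five-parameter tag deployment.
console.log(maxTagPrompts(2, 10, 12, 5)); // 1200
```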

At this time, the only tagging feedback students see is the tag prompts themselves and a JavaScript counter showing how many tags remain incomplete. To find a missed tag, students must scroll the page and manually search for tags that are not done.

To help students complete all the tags for an assignment, we propose a new, dynamically generated heat grid on the "Your Scores" view that breaks down tags by round, question, and review and uses visual cues to help students find incomplete tags. The heat grid shows both a total count of completed versus available tags and individual tags with color-scaled completeness feedback.

The new heat grid must satisfy these requirements and display visual feedback accordingly:

  • Handle both single- and multiple-round reviews
  • Handle reviews without tag prompts
  • Update dynamically as tags are completed
  • Support tag deployments with different numbers of tags per review
  • Work with the existing review row-sort functionality
  • Not rely on the database for tag data, since AnswerTag objects are not created until tags are clicked
  • Give usable feedback to all users (characters as well as color coding)
  • Appear only when tags have been deployed to an assignment
  • Show the total progress of tagging reviews
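The "no database" and "total progress" requirements together imply that the grid must tally completion from what is already rendered on the page. The sketch below shows one way that tally and the color scaling might work; the data shape, function names, and color buckets are assumptions for illustration, not the actual Expertiza implementation:

```javascript
// Illustrative sketch: tally tag completion entirely client-side,
// from tag-prompt state scraped off the rendered page (no database call).
// Data shape and names are assumptions, not actual Expertiza code.
function tagProgress(tagPrompts) {
  // tagPrompts: array of { completed: boolean }
  const total = tagPrompts.length;
  const done = tagPrompts.filter(t => t.completed).length;
  // A review with no prompts counts as fully complete.
  return { done, total, fraction: total === 0 ? 1 : done / total };
}

// Map a completion fraction to a coarse color bucket for a grid cell.
function cellColor(fraction) {
  if (fraction >= 1) return 'green';  // all tags complete
  if (fraction > 0) return 'yellow';  // partially complete
  return 'red';                       // nothing tagged yet
}
```

Because the tally is recomputed from the page itself, re-running it after every click keeps the grid current without any server round trip, and pairing each color with a character satisfies the accessibility requirement.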


Implementation

Our proposed solution is a visual feedback aid that exists purely for student users. For this reason, and to facilitate dynamic updates, we have chosen to implement this functionality entirely on the client side of the application using JavaScript and jQuery. The presence or absence of tag prompts is detected at page load, and the heat grid is rendered as appropriate.
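The page-load wiring could look roughly like the following. This is a hedged sketch: the `.tag_prompt` selector and the `renderHeatGrid` function are hypothetical placeholders standing in for the real markup and renderer, not names taken from the Expertiza codebase:

```javascript
// Requirement: render the grid only when tags have actually been
// deployed to the assignment (i.e., at least one prompt on the page).
function shouldRenderHeatGrid(promptCount) {
  return promptCount > 0;
}

// jQuery wiring (browser only). Detect prompts at page load, then
// re-render on every tag click so the grid updates dynamically.
// '.tag_prompt' and renderHeatGrid() are assumed names for illustration.
if (typeof jQuery !== 'undefined') {
  jQuery(function ($) {
    const prompts = $('.tag_prompt');        // assumed selector
    if (!shouldRenderHeatGrid(prompts.length)) return;
    renderHeatGrid(prompts);                 // hypothetical renderer
    // Delegated handler keeps the grid current as tags are completed.
    $(document).on('click', '.tag_prompt', function () {
      renderHeatGrid($('.tag_prompt'));
    });
  });
}
```

Guarding on the prompt count at load time is what keeps the grid hidden for assignments without a tag deployment, and the delegated click handler is what makes the grid dynamic without any server round trip.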

Flow Diagram File:FlowDiagram

Heat grid for a two-round review File:FlowDiagram