CSC/ECE 517 Fall 2025 - E2567. Review rubrics varying by round

About Expertiza

Expertiza is an open-source project built on the Ruby on Rails framework. It allows instructors to create new assignments and to customize new or existing ones; students can then form teams in Expertiza to work on these projects and assignments, and can peer-review other students' submissions. Expertiza accepts submissions in a variety of forms, including URLs and wiki pages. Students can also view their grades for a project once the instructor has graded it.

Introduction

The current Expertiza system restricts assignments to a single review rubric that applies uniformly across all topics and rounds. This limitation constrains instructors’ ability to differentiate grading based on project type or review phase, reducing the flexibility and pedagogical richness of the system. To address these issues, projects E2557 and E2567 aim to enhance rubric management by allowing rubric customization per topic type and variation of rubrics across review rounds, improving the precision and fairness of assessments within Expertiza.

Problem Statement

The current version of Expertiza uses a single rubric for all projects and all review rounds within an assignment. This design poses two major challenges:

Lack of Differentiation by Project Type

In courses such as CSC/ECE 517, students undertake a variety of projects: front-end UI redesigns, back-end reimplementations, refactoring exercises, and testing frameworks. Despite their differing goals and evaluation criteria, all of these projects are assessed with the same rubric. For example, a front-end project may prioritize usability and accessibility, while a refactoring project should emphasize code quality and maintainability. A single rubric fails to capture these distinct priorities, resulting in unfair or inconsistent evaluation.

Inflexibility Across Review Rounds

Multi-round reviews often shift focus across iterations. Instructors may wish to emphasize functionality and completeness in early rounds, and style or documentation in later ones. Yet the system currently enforces the same rubric for every review round, limiting nuanced feedback and preventing progressive evaluation aligned with assignment milestones.

These issues collectively reduce grading accuracy, hinder customized assessment, and make the feedback process less meaningful to students and instructors alike.

Requirements

  • Enable instructors to assign different rubrics to specific topics and review rounds within the same assignment.
  • Add new database fields: topic_id, vary_by_round, and used_in_round to manage rubric associations (a migration sketch follows this list).
  • Dynamically display rubric selectors (dropdowns) for each topic and review round on the assignment edit page (sketched in the view fragment below).
  • Automatically determine the number of review rounds from due_dates and link each round to its assigned rubric (sketched in the model helper below).
  • Ensure backward compatibility with existing single-rubric assignments.
  • Implement comprehensive unit and integration tests to verify correct rubric assignment and retrieval.
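The schema change behind the new fields could look like the following Rails migration. This is a minimal sketch that assumes the rubric-to-assignment mapping lives in a join table named assignment_questionnaires; the class, table, and column defaults shown here are illustrative, not the final implementation.

  # Hypothetical migration sketch (names illustrative, not the final schema).
  class AddRubricVariationFields < ActiveRecord::Migration[7.0]
    def change
      # Tie a rubric choice to a single topic instead of the whole assignment.
      add_column :assignment_questionnaires, :topic_id, :integer, null: true
      # Flag that the rubric selection differs from round to round.
      add_column :assignment_questionnaires, :vary_by_round, :boolean, default: false
      # Record which review round this rubric applies to (nil means all rounds).
      add_column :assignment_questionnaires, :used_in_round, :integer, null: true

      add_index :assignment_questionnaires, [:topic_id, :used_in_round]
    end
  end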
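Determining the number of review rounds from the assignment's due dates, and resolving which rubric applies to a given topic and round, could then be handled in the Assignment model. The sketch below assumes review due dates can be identified by a 'review' deadline type and falls back to the assignment-wide rubric when no topic- or round-specific entry exists; the association and method names are assumptions, not the committed API.

  # Hypothetical model sketch built on the columns added above.
  class Assignment < ApplicationRecord
    has_many :due_dates
    has_many :assignment_questionnaires
    has_many :questionnaires, through: :assignment_questionnaires

    # The number of review rounds is inferred from the review due dates.
    def num_review_rounds
      due_dates.where(deadline_type: 'review').count
    end

    # Pick the rubric for a given topic and round, falling back first to a
    # round-only match and finally to the assignment-wide rubric.
    def review_questionnaire_for(topic_id:, round:)
      aq = assignment_questionnaires.find_by(topic_id: topic_id, used_in_round: round) ||
           assignment_questionnaires.find_by(topic_id: nil, used_in_round: round) ||
           assignment_questionnaires.find_by(topic_id: nil, used_in_round: nil)
      aq&.questionnaire
    end
  end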
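On the assignment edit page, the per-topic, per-round dropdowns could be rendered by looping over the assignment's topics and review rounds. This ERB fragment is only a sketch: @review_rubrics, the sign_up_topics association, and the helper from the previous sketch are assumed names rather than the final view code.

  <%# Hypothetical edit-page fragment: one rubric dropdown per (topic, round) pair. %>
  <% @assignment.sign_up_topics.each do |topic| %>
    <% (1..@assignment.num_review_rounds).each do |round| %>
      <label>Rubric for "<%= topic.topic_name %>", round <%= round %></label>
      <%= select_tag "rubric[#{topic.id}][#{round}]",
            options_from_collection_for_select(
              @review_rubrics, :id, :name,
              @assignment.review_questionnaire_for(topic_id: topic.id, round: round)&.id) %>
    <% end %>
  <% end %>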

Links

  • Git PR (front-end) -
  • Git PR (back-end) -
  • GitHub Repository (front-end) -
  • GitHub Repository (back-end) -
  • Demo Video -
  • Website -

Problem and Goal

Design

Back-end

Front-end

Testing