CSC/ECE 517 Spring 2026 - E2609. Review calibration
Project scope
Calibration helps assess reviewer competence: the instructor pre-reviews a submission, students review the same work, and the system compares their scores. This project implements the Calibration tab on the assignment editor, where teaching staff can add calibration participants by username. Adding a participant creates an AssignmentParticipant, an AssignmentTeam, and an instructor ReviewResponseMap marked for_calibration: true. The tab lists all calibration participants along with their submitted hyperlinks and files. A calibration report page shows each student's scores compared to the instructor's, rendered as a stacked bar chart (agree / near / disagree per rubric item) and a per-reviewer rubric detail view. Assigning calibration reviews to students and the full instructor review form SPA are out of scope for this project; demo seeding stands in for the review form while it remains deferred.
Problem statements
- Distinguish calibration from normal peer review. response_maps.for_calibration (boolean, default false) marks calibration maps. Response maps record who reviews whom (reviewer_id, reviewee_id, assignment).
- Instructor workflow. Teaching staff add participants, see list rows with a submission summary, open View report, and (when the real review UI exists) use Begin to submit an instructor calibration Response. Until then, a mock path may materialize submitted responses for demos.
- Reporting. For one calibration reviewee team, aggregate the latest submitted student calibration Response rows (one per student map), compare them to the instructor's submitted response, and compute per-item buckets for the chart.
- Authorization. Calibration participant management and report access are restricted to teaching staff for the assignment (instructor / TA as defined by existing auth helpers).
- Controller boundaries (SRP). Participant/map mutations live on ReviewMappingsController. The comparison report is a read-only report on ReportsController so new report types do not bloat feature controllers.
- Student-facing comparison links. Listing calibration maps for the logged-in student reviewer and deep-linking into comparison JSON may be future work; E2609 focuses on staff designation of submitters and the comparison view for staff.
Design goals
- Single Responsibility — ReviewMappingsController: staff calibration CRUD. ReportsController: calibration report GET. Business logic lives in models and service objects, not controllers.
- Information Expert — Each class handles the data it owns: Assignment orchestrates participant/team/map creation; ResponseMap reports its own review status and latest submitted response; Response looks up its own answer for a given item; Questionnaire provides its own score range.
- Iterator over collection — The report pipeline never bulk-loads all responses into memory. ReviewResponseMap.peer_calibration_responses_each yields one Response at a time (backed by find_each).
- DRY — The calibration_for scope on ReviewResponseMap centralises the for_calibration query condition. Questionnaire#score_range owns score-range defaults.
- Testable — Request specs cover the HTTP contract; model specs cover domain methods; service unit specs cover the report pipeline in isolation.
- Reversible demo code — Demo seeding is isolated in gitignored files; a removal checklist lives in the service file comments.
Database design
response_maps.for_calibration
- Type: boolean, default false, NOT NULL.
- When true: the map is part of calibration for that assignment (instructor→calibration submitter's team, or student→same reviewee team for calibration comparisons).
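A minimal migration sketch for this column, assuming it is added to the existing response_maps table; the migration class name and Rails version are illustrative, not the project's actual migration:

```ruby
# Illustrative sketch: adds the for_calibration flag with the stated default and NOT NULL constraint.
class AddForCalibrationToResponseMaps < ActiveRecord::Migration[7.1]
  def change
    add_column :response_maps, :for_calibration, :boolean, default: false, null: false
  end
end
```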
Domain summary
- Calibration submitter — Real user, phantom participant row: added on the Calibration tab; gets an AssignmentTeam so they can submit through normal submission flows (the instructor impersonates to submit artifacts; no separate submission code required beyond existing submission infrastructure).
- Instructor calibration map — ReviewResponseMap: reviewed_object_id = assignment id, reviewer_id = instructor's AssignmentParticipant id, reviewee_id = calibration submitter's team id, for_calibration = true.
- Student calibration maps — Same reviewee_id as the instructor map; different reviewer_id; for_calibration = true. (Creation/assignment of these maps is out of project scope; demo seeding may create a small set for chart data.)
Domain model
Assignment
|-- has_many AssignmentParticipant (instructor, students, calibration submitters)
|-- has_many AssignmentTeam
|-- calibration_participant_rows               # lists all calibration rows (delegates to ReviewResponseMap)
|-- add_calibration_submitter!(user)           # orchestrates participant + team + map in a transaction
+-- find_or_add_participant!(user)             # idempotent participant lookup/create

ReviewResponseMap (< ResponseMap)
|-- scope :calibration_for(assignment)                    # WHERE reviewed_object_id=? AND for_calibration=true
|-- .peer_calibration_responses_each(instructor_map)      # iterator, yields one Response at a time
+-- #calibration_participant_json(instructor_user_id:)    # serialises one calibration row

ResponseMap
|-- #review_status                 # :not_started | :in_progress | :submitted
+-- #latest_submitted_response     # most recent is_submitted=true Response

Response
|-- #answer_for(item)              # looks up this response's Answer for a given rubric Item
|-- #rubric_items                  # ordered Items for this response's questionnaire
+-- #as_calibration_json           # serialises for calibration report JSON

Questionnaire
+-- #score_range                   # returns min_question_score..max_question_score (with defaults)
Backend architecture
Controllers
Controllers parse params, authorise the request, delegate to models/services, and render JSON. No domain logic lives in controllers.
ReviewMappingsController
  GET  calibration_participants        --> @assignment.calibration_participant_rows (list)
  POST calibration_participants        --> @assignment.add_calibration_submitter!(user) (add)
  DEL  calibration_participants/:id    --> AssignmentTeam + ReviewResponseMap (remove)

ReportsController
  GET  calibration/:map_id             --> Reports::CalibrationReport.new(map).render
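A hedged sketch of the add action under these boundaries; the namespace, the check_staff_privileges! helper, and the error handling are assumptions, not the project's exact code:

```ruby
# app/controllers/api/v1/review_mappings_controller.rb (sketch)
class Api::V1::ReviewMappingsController < ApplicationController
  # POST /assignments/:id/review_mappings/calibration_participants
  def add_calibration_participant
    assignment = Assignment.find(params[:id])
    check_staff_privileges!(assignment)                 # hypothetical auth helper
    user = User.find_by!(name: params.require(:username))
    row = assignment.add_calibration_submitter!(user)   # all domain logic lives on the model
    render json: row, status: :created
  rescue ActiveRecord::RecordNotFound => e
    render json: { error: e.message }, status: :not_found
  end
end
```

The controller stays thin: it resolves the assignment and user, then delegates the participant/team/map creation to the Assignment model described below.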
Assignment model
Assignment is the aggregate root for calibration participant management. All participant/team/map creation happens in a single database transaction here.
POST /calibration_participants
--> ReviewMappingsController#add_calibration_participant
--> assignment.add_calibration_submitter!(user)
DB transaction:
find_or_add_participant!(user) # AssignmentParticipant
AssignmentTeam.team(participant) # find or create team
ReviewResponseMap.find_or_create! # instructor for_calibration map
returns: calibration_participant_json row
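A sketch of this orchestration in the Assignment model; the team-creation helpers, the instructor-participant lookup, and instructor_id are assumptions standing in for the project's actual associations:

```ruby
# app/models/assignment.rb (sketch)
class Assignment < ApplicationRecord
  # Orchestrates participant + team + instructor calibration map atomically.
  def add_calibration_submitter!(user)
    transaction do
      participant = find_or_add_participant!(user)        # idempotent AssignmentParticipant lookup/create
      team = AssignmentTeam.team(participant) ||
             AssignmentTeam.create!(parent_id: id).tap { |t| t.add_member(participant.user) }  # assumed helpers
      instructor_participant = participants.find_or_create_by!(user_id: instructor_id)         # assumed lookup
      map = ReviewResponseMap.find_or_create_by!(
        reviewed_object_id: id,
        reviewer_id: instructor_participant.id,
        reviewee_id: team.id,
        for_calibration: true
      )
      map.calibration_participant_json(instructor_user_id: instructor_id)
    end
  end
end
```

Wrapping all three creations in one transaction keeps a failed step (for example, a team that cannot be created) from leaving a dangling participant or map behind.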
Report pipeline
Reports::CalibrationReport assembles the calibration JSON by walking peer responses one at a time—never bulk-loading all records into memory.
Reports::CalibrationReport
setup
load instructor's submitted Response
load rubric Items from questionnaire
initialise @bucket_counts { item_id => { "0"=>0, "1"=>0, ... } }
each_response
ReviewResponseMap.peer_calibration_responses_each(instructor_map)
find_each peer maps (same assignment + reviewee, for_calibration=true)
yield map.latest_submitted_response
accumulate(response)
response.scores.each --> @bucket_counts[item_id][score] += 1
payload
{ map_id, rubric_items, instructor_response,
student_responses, per_item_summary, submitted_content }
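A condensed Ruby sketch of the service following this outline; the item→questionnaire association and the Answer score column (answer.answer) are assumptions, and student_responses / submitted_content are omitted for brevity:

```ruby
# app/services/reports/calibration_report.rb (sketch)
module Reports
  class CalibrationReport
    class InstructorResponseMissing < StandardError; end

    def initialize(instructor_map)
      @map = instructor_map
      @instructor_response = instructor_map.latest_submitted_response
      raise InstructorResponseMissing, "map #{instructor_map.id}" unless @instructor_response

      @items = @instructor_response.rubric_items
      range = @items.first.questionnaire.score_range        # assumed item -> questionnaire association
      @bucket_counts = @items.to_h { |item| [item.id, range.to_h { |s| [s.to_s, 0] }] }
    end

    def render
      ReviewResponseMap.peer_calibration_responses_each(@map) { |response| accumulate(response) }
      {
        map_id: @map.id,
        rubric_items: @items.map { |i| { id: i.id, txt: i.txt, seq: i.seq } },
        instructor_response: @instructor_response.as_calibration_json,
        per_item_summary: @items.map { |i| { item_id: i.id, bucket_counts: @bucket_counts[i.id] } }
        # student_responses and submitted_content omitted from this sketch
      }
    end

    private

    # One response at a time: add each answered score to that item's bucket.
    def accumulate(response)
      @items.each do |item|
        score = response.answer_for(item)&.answer
        @bucket_counts[item.id][score.to_s] += 1 if score && @bucket_counts[item.id].key?(score.to_s)
      end
    end
  end
end
```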
ReviewResponseMap
scope :calibration_for(assignment)
WHERE reviewed_object_id = assignment.id AND for_calibration = true
.peer_calibration_responses_each(instructor_map) [iterator]
peer_maps = same assignment + same reviewee + for_calibration, excluding instructor map
peer_maps.find_each do |map|
yield map.latest_submitted_response # one at a time, memory-efficient
end
#calibration_participant_json(instructor_user_id:)
team = reviewee; submitter = team.participants.first
{ participant_id, username, full_name, team_id,
instructor_review_map_id: id,
instructor_review_status: review_status, # from ResponseMap
submissions: team.submitted_content_detail }
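A sketch of the scope and the iterator described above; it assumes the for_calibration column from the Database design section and the latest_submitted_response helper on ResponseMap:

```ruby
# app/models/review_response_map.rb (sketch)
class ReviewResponseMap < ResponseMap
  # All calibration maps for one assignment.
  scope :calibration_for, ->(assignment) {
    where(reviewed_object_id: assignment.id, for_calibration: true)
  }

  # Yields each peer calibration map's latest submitted Response, one record at a time.
  def self.peer_calibration_responses_each(instructor_map)
    where(reviewed_object_id: instructor_map.reviewed_object_id,
          reviewee_id: instructor_map.reviewee_id,
          for_calibration: true)
      .where.not(id: instructor_map.id)
      .find_each do |map|                      # batched, so responses are never bulk-loaded
        response = map.latest_submitted_response
        yield response if response
      end
  end
end
```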
ResponseMap
#review_status
  responses.empty?                          --> :not_started
  responses.where(is_submitted: true).any?  --> :submitted
  else                                      --> :in_progress
#latest_submitted_response
  responses.where(is_submitted: true).order(updated_at: :desc).first
Questionnaire
Score bounds live on the model that owns the data; report code calls questionnaire.score_range rather than hardcoding 0..5.
#score_range
  (min_question_score || DEFAULT_MIN_QUESTION_SCORE).to_i .. (max_question_score || DEFAULT_MAX_QUESTION_SCORE).to_i
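Runnable Ruby versions of the two outlines above, as a sketch; the responses foreign key and the default score constants are assumptions:

```ruby
# app/models/response_map.rb (sketch)
class ResponseMap < ApplicationRecord
  has_many :responses, foreign_key: :map_id, dependent: :destroy   # assumed foreign key

  def review_status
    return :not_started if responses.empty?
    responses.where(is_submitted: true).exists? ? :submitted : :in_progress
  end

  def latest_submitted_response
    responses.where(is_submitted: true).order(updated_at: :desc).first
  end
end

# app/models/questionnaire.rb (sketch)
class Questionnaire < ApplicationRecord
  DEFAULT_MIN_QUESTION_SCORE = 0   # assumed defaults
  DEFAULT_MAX_QUESTION_SCORE = 5

  def score_range
    min = (min_question_score || DEFAULT_MIN_QUESTION_SCORE).to_i
    max = (max_question_score || DEFAULT_MAX_QUESTION_SCORE).to_i
    min..max
  end
end
```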
Frontend architecture
- Assignment editor — Calibration tab — Add/remove/list rows; columns for participant, submitted content, review status, View report, and Begin.
- Calibration report page (CalibrationReview) — Fetches GET …/reports/calibration/:mapId; normalises via calibrationReportNormalize; renders two tabs over the same report JSON:
  - Class comparison (stacked) tab — CalibrationStackedChart (Recharts stacked bars). One bar per rubric item, divided into Agree (student score = instructor's), Near (±1), and Disagree (>1 away). Segment counts are derived from per_item_summary.bucket_counts by calibrationReportNormalize.ts.
  - Rubric detail tab — CalibrationRubricDetailPanel. The instructor selects a student reviewer from a dropdown; each rubric item shows a card with the instructor score, the student score, the difference ("Matches instructor" or "N above/below"), both text comments, and a mini class-distribution chart (CalibrationRubricDistributionChart) with per-item agree/near/disagree counts across all students.
Flow (instructor)
- Open assignment editor → Calibration tab.
- Enter username → Add → backend ensures participant, team, instructor calibration map.
- List refreshes; instructor impersonates calibration users to submit work (existing submission UX).
- Begin — In the full product: navigate to the rubric review; save/submit the instructor Response. In the current demo: POST to a mock endpoint to seed responses.
- View report — SPA route to the report page; GET report JSON; stacked chart + summary.
API reference
| Method | Path | Controller action | Notes |
|---|---|---|---|
| GET | /assignments/:id/review_mappings/calibration_participants | ReviewMappingsController#list_calibration_participants | Staff only. Returns { assignment_id, calibration_participants: [...] }. |
| POST | /assignments/:id/review_mappings/calibration_participants | ReviewMappingsController#add_calibration_participant | Body: { username }. Idempotent — re-adding the same user is safe. Returns 201 with the participant row. |
| DELETE | /assignments/:id/review_mappings/calibration_participants/:participant_id | ReviewMappingsController#remove_calibration_participant | Staff only. Destroys all for_calibration maps for that participant. |
| GET | /assignments/:id/reports/calibration/:map_id | ReportsController#calibration | Staff only. Returns report JSON: rubric items, instructor response, student responses, per-item bucket counts, submitted content. |
Report JSON shape
{
map_id: Integer,
assignment_id: Integer,
reviewee_id: Integer,
rubric_items: [{ id, txt, seq, weight }],
instructor_response: { id, answers: [{ item_id, score, comments }] },
student_responses: [{ id, reviewer_name, answers: [...] }],
per_item_summary: [{
item_id, item_label, item_seq,
instructor_score, instructor_comment,
bucket_counts: { "0": n, "1": n, ..., "5": n },
student_response_count
}],
submitted_content: { hyperlinks: [...], files: [...] }
}
Aggregation logic
- For each rubric Item, take the instructor's score from the submitted instructor Response.
- For each student calibration map for the same reviewee_id, use the latest submitted Response (by updated_at or version).
- Bucket each student score against the instructor's: agree (same), near (±1), disagree (farther); see the sketch below. Counts feed the stacked chart per criterion.
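A small sketch of that bucketing rule, assuming integer scores; the function name is illustrative:

```ruby
# Classify one student score against the instructor's score for the same rubric item.
def calibration_bucket(student_score, instructor_score)
  case (student_score - instructor_score).abs
  when 0 then :agree     # exact match
  when 1 then :near      # within one point
  else        :disagree  # more than one point away
  end
end

calibration_bucket(4, 4) # => :agree
calibration_bucket(3, 4) # => :near
calibration_bucket(1, 4) # => :disagree
```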
UML diagram

Instructors and students are AssignmentParticipants. The instructor's calibration map is a ReviewResponseMap with for_calibration=true whose reviewer is the instructor participant and whose reviewee is the calibration submitter's team. Student calibration maps share the same reviewee_id but have different reviewer_id values. Response and Answer store scores. Reports::CalibrationReport assembles the comparison JSON via an iterator over peer responses.
Flow: Review calibration
The calibration workflow has two parallel lanes. In the staff lane, teaching staff use the assignment editor to designate calibration submitters, submit an instructor review, and view the comparison report. In the student lane, students encounter calibration review maps mixed in with their regular reviews and submit them through the normal response path — they cannot tell the difference.
Staff lane
- Teaching staff open the assignment editor and navigate to the Calibration tab.
- They enter a username and click Add. The ReviewMappingsController delegates to assignment.add_calibration_submitter!, which atomically creates the AssignmentParticipant, AssignmentTeam, and instructor ReviewResponseMap (with for_calibration: true) in a single database transaction.
- The instructor submits their calibration review response by clicking Begin and completing the rubric form.
- Clicking View report navigates to the CalibrationReview React page, which fetches the report JSON from ReportsController#calibration and renders the stacked comparison chart and rubric detail view.
Student lane
- Students retrieve their review assignments, which include for_calibration maps mixed in with regular reviews.
- Students open and submit their calibration reviews through the normal Response and Answer path — no special UI is required.
Data rule
- All calibration maps for a given reviewee share the same reviewee_id. The report uses the latest submitted Response per student map; earlier versions and unsaved drafts are ignored.
Pipeline overview

Report request

Assignment editor

GET calibration_participants on ReviewMappingsController; adding the same username again is idempotent.Calibration report: stacked chart

per_item_summary.bucket_counts in the report JSON returned by ReportsController#calibration.Calibration report: rubric detail

Demo and temporary code
Because the full instructor review form is out of scope, a POST …/:map_id/mock_instructor_response route (tagged DEMO_INSTRUCTOR_RESPONSE) seeds a submitted instructor Response via Demo::CalibrationInstructorSeeder, and a rake task (lib/tasks/calibration_demo.rake) populates a local dataset so the report UI can be demonstrated end-to-end. Both the route and the service file live in gitignored paths and should be deleted once the real review form is integrated.
Testing
- Request specs (HTTP contract)
  - spec/requests/api/v1/calibration_participants_spec.rb — list, add (atomic creation), remove, idempotence, 400/404/403 error cases.
  - spec/requests/api/v1/reports_calibration_spec.rb — report JSON shape, latest-response selection, 404/422/403 error cases.
- Service unit spec
  - spec/services/calibration_per_item_summary_spec.rb — exercises Reports::CalibrationReport#render directly (no HTTP): bucket count accumulation, latest-response selection, InstructorResponseMissing error.
- Model specs
  - spec/models/response_map_spec.rb — ResponseMap#review_status (:not_started / :in_progress / :submitted transitions); see the sketch after this list.
  - spec/controllers/review_mappings_controller_spec.rb — strategy-based mapping actions (round-robin, random, CSV, grade).
- Spec environment
  - spec/rails_helper.rb forces RAILS_ENV=test under Docker so DatabaseCleaner never truncates the development database.
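A hedged example of what the review_status model spec might look like; the factory names and associations are assumptions:

```ruby
# spec/models/response_map_spec.rb (sketch)
require 'rails_helper'

RSpec.describe ResponseMap, type: :model do
  describe '#review_status' do
    let(:map) { create(:review_response_map) }                     # assumed factory

    it 'is :not_started with no responses' do
      expect(map.review_status).to eq(:not_started)
    end

    it 'is :submitted once any response is submitted' do
      create(:response, response_map: map, is_submitted: true)     # assumed factory/association
      expect(map.review_status).to eq(:submitted)
    end
  end
end
```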
Future work
- Student-facing review list with links to calibration comparison.
- Optional: overall calibration score, trends, export.
References
- Expertiza wiki main page
- Response maps wiki — structure of ResponseMap
- Front-end pull request: expertiza/reimplementation-front-end#177
- Back-end pull request: expertiza/reimplementation-back-end#341
Team
- Mentor: Dr. Ed Gehringer
- Team members: Xiangjun Mi, Rujuta Palimkar, Emma Hassler