CSC/Independent Study Automatic Evaluation of Peer Reviews

Evaluate Using LLMs Integration

Team Information

Mentor:

  • Ed Gehringer

Team Members:

  • Jay Shah (jjshah)
  • Rushabh Shah (rsshah7)
  • Uchswas Paul (upaul)

Relevant Links:

  • Link to Expertiza Repository
  • Link to Forked Repository
  • Link to Pull Request

Expertiza Background

Expertiza is an open-source project built with the Ruby on Rails MVC framework. It facilitates peer reviews and collaborative learning activities in educational environments.

About "Evaluate Using LLMs" Feature

  • This project adds a new "Evaluate using LLM" feature to Expertiza.
  • It enables instructors and TAs to automate the evaluation of peer reviews using an external LLM (Large Language Model) API.
  • The evaluated scores and comments are editable, similar to the manual review report.

Problem Statement

Prior to this work:

  • Review quality was graded manually by instructors/TAs.
  • No automation or AI assistance existed for assessing reviews.

The goal:

  • Send all review data to an external LLM.
  • Retrieve scores and comments.
  • Populate the review report dynamically.
  • Allow instructors/TAs to edit or overwrite the LLM evaluation.

Tasks

  • Add "Evaluate using LLM" option in report dropdown.
  • Create service object (LlmEvaluationService) for API communication.
  • Build new partial view (_llm_evaluation_report.html.erb).
  • Allow editing of scores and comments returned by LLM.
  • Integrate LLM report with the existing report framework.
  • Use a stubbed (fake) API response for now.

Plan of Work

  • Study how the Review Report (ReviewResponseMap) is generated.
  • Create service to send review data to API and receive evaluation.
  • Develop view partial for LLM Evaluation Report.
  • Modify report dropdown to include "Evaluate using LLM" option.
  • Update ReportsController to support LLM Evaluation.

Implementation

Service Object

File: app/services/llm_evaluation_service.rb

Collects review data, calls the stubbed (dummy) API, and returns a structured evaluation.
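
A minimal sketch of what this service object might look like is shown below; the constructor argument and the method names (evaluate_reviews, stubbed_api_response) are illustrative assumptions rather than the exact implementation, and the returned hash simply mirrors the fake API response listed later on this page.

 # app/services/llm_evaluation_service.rb (illustrative sketch, not the exact implementation)
 class LlmEvaluationService
   def initialize(assignment)
     @assignment = assignment
   end

   # Returns one evaluation hash per reviewer; currently backed by the stubbed response.
   def evaluate_reviews
     stubbed_api_response
   end

   private

   # Placeholder for the future call to an external LLM API.
   def stubbed_api_response
     [
       {
         reviewer_name: "John Doe",
         reviewer_id: 101,
         team_reviewed: "Team Alpha",
         reviews_done: 3,
         score_awarded: 90,
         avg_score: 88,
         metrics: "Volume: 45 words",
         grade_for_reviewer: 95,
         comment_for_reviewer: "Great feedback with actionable suggestions."
       }
     ]
   end
 end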

New View

File: app/views/reports/_llm_evaluation_report.html.erb

  • Displays LLM-evaluated results.
  • Editable fields (Score and Comment).
  • Exportable to CSV.
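
A rough sketch of how this partial could render the editable table is shown below, assuming the controller exposes the service's output in an @llm_evaluations instance variable (an assumed name); the surrounding markup and the CSV export link are simplified.

 <%# app/views/reports/_llm_evaluation_report.html.erb (illustrative sketch) %>
 <table class="table">
   <tr>
     <th>Reviewer</th><th>Team Reviewed</th><th>Reviews Done</th><th>Grade</th><th>Comment</th>
   </tr>
   <% @llm_evaluations.each do |evaluation| %>
     <tr>
       <td><%= evaluation[:reviewer_name] %></td>
       <td><%= evaluation[:team_reviewed] %></td>
       <td><%= evaluation[:reviews_done] %></td>
       <%# Editable fields so instructors/TAs can overwrite the LLM's grade and comment %>
       <td><%= text_field_tag "grade_#{evaluation[:reviewer_id]}", evaluation[:grade_for_reviewer] %></td>
       <td><%= text_area_tag "comment_#{evaluation[:reviewer_id]}", evaluation[:comment_for_reviewer] %></td>
     </tr>
   <% end %>
 </table>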

Controller Modification

File: app/controllers/reports_controller.rb

Method Added: llm_evaluation_report
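
A hedged sketch of the new action, assuming it looks up the assignment from params[:id] and hands the service's result to the partial above; the real action may instead hook into the existing report-generation flow.

 # app/controllers/reports_controller.rb (excerpt; illustrative sketch)
 # The parameter name and instance variable are assumptions.
 def llm_evaluation_report
   @assignment = Assignment.find(params[:id])
   # Delegate to the service object, which currently returns the stubbed response;
   # the report framework then renders _llm_evaluation_report.html.erb.
   @llm_evaluations = LlmEvaluationService.new(@assignment).evaluate_reviews
 end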

Dropdown Modification

File: app/views/reports/_searchbox.html.erb

Added "Evaluate using LLM" to the report selection.

API Stub

Fake JSON response mimicking future LLM output.

Fake API Response

 [ { "reviewer_name": "John Doe", "reviewer_id": 101, "team_reviewed": "Team Alpha", "reviews_done": 3, "score_awarded": 90, "avg_score": 88, "metrics": "Volume: 45 words", "grade_for_reviewer": 95, "comment_for_reviewer": "Great feedback with actionable suggestions." } ] 

Test Plan

  • Verified dropdown functionality.
  • Verified table rendering and editable fields.
  • Confirmed dummy API data is properly injected (a sample spec covering this check is sketched after this list).
  • Ensured no effect on existing reports (review, feedback, calibration, etc.).
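
A hedged example of how the dummy-data check could be automated with RSpec, assuming the service interface sketched earlier and a FactoryBot factory for assignments (both assumptions):

 # spec/services/llm_evaluation_service_spec.rb (illustrative sketch)
 require 'rails_helper'

 describe LlmEvaluationService do
   it 'returns the stubbed evaluation for each reviewer' do
     assignment = create(:assignment) # assumes a FactoryBot factory named :assignment
     evaluations = described_class.new(assignment).evaluate_reviews

     # The stubbed response should expose the fields the report table renders.
     expect(evaluations).to be_an(Array)
     expect(evaluations.first).to include(:reviewer_name, :grade_for_reviewer, :comment_for_reviewer)
   end
 end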

Future Work

  • Replace dummy API with real LLM API integration.
  • Allow asynchronous API calls for large assignments.
  • Persist instructor edits made to the LLM evaluation.