CSC/ECE 517 Spring 2013/OSS E734


Write-up of This Topic: https://docs.google.com/a/ncsu.edu/document/d/11YTjxFXFR13vJ769yFBbqn9qOueK2Ktz0Pd-gyS2W5U/edit#

This page is the design documentation for Expertiza project E734, conducted in the CSC/ECE 517 Spring 2013 class at North Carolina State University. Learn more about Expertiza by visiting its main page: http://wikis.lib.ncsu.edu/index.php/Expertiza


Team Members

Hao Liu (hliu11@ncsu.edu)

Zhuowei Wang (zwang18@ncsu.edu)

Chunxue Yang (cyang14@ncsu.edu)


Expertiza Project E734: Analytics Proposal

The Expertiza project is a system that uses peer review to create reusable learning objects. Students complete different assignments; peer review then selects the best work in each category and assembles it into a single unit.

The reviewing process includes reviews, re-reviews, and meta reviews of the assignments students submit for a course. This method of evaluation also inherently means that reviews will vary in quality and thoroughness. By mining such information, Expertiza can be improved by leveraging the results that are obtained.

A previous Expertiza project (E732) added a new module that helps the instructor design courses on Expertiza, facilitating a better learning environment. Building on E732, our project aims to help assignment authors and reviewers make better use of the reviewing process and obtain more comprehensive feedback from the review information.

Objective

To fulfill our purpose of providing a better experience for Expertiza users, we focus on several different aspects. The primary focus of this project is reviews and the aspects that go hand in hand with them, such as meta reviews and re-reviews. More specifically, we will build an advanced search interface in which different criteria can be selected to compare each review or meta review.

To start with, our application will answer the following questions:

a. What information can the instructor and the assignment author gain from scores and text for different questions of a particular review/meta review?

b. What information can the instructor and the assignment author gain from all the scores and text for the same question of all the related reviews?

c. What information can the instructor and the assignment author gain from the changes of scores regarding the same review question in review, re-review, and meta review?

d. Who has access to the above information?

Project Description

Each of the above mentioned questions can be handled as follows:

  • a. What information can the instructor and the assignment author gain from scores and text for different questions of a particular review/meta review?

Rubrics are essentially a predetermined set of questions provided to every reviewer so that they can give feedback on the work they are assessing. For a particular question in a review, we therefore take into account two parameters: the score the reviewer assigns to that question, and the text associated with that score. By analyzing these parameters, the instructor and the assignment author can form an overall idea of how each aspect of the assignment is evaluated and what improvement is needed.
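A per-review breakdown of the two parameters above might look like the following minimal Ruby sketch. The question names and data structure are illustrative assumptions, not Expertiza's actual rubric schema:

```ruby
# Hypothetical per-review summary: each rubric question with the score the
# reviewer assigned and the comment text attached to that score.
responses = [
  { question: "Clarity of writing", score: 4, comment: "Well organized." },
  { question: "Technical accuracy", score: 2, comment: "Several factual errors." },
  { question: "Use of references",  score: 5, comment: "Thorough citations." }
]

# Sort from weakest to strongest so the author sees first which
# aspects of the assignment most need improvement.
responses.sort_by { |r| r[:score] }.each do |r|
  puts format("%-20s %d  %s", r[:question], r[:score], r[:comment])
end
```

Sorting by score is just one possible presentation; the interface could equally group by rubric category or highlight outlier scores.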

  • b. What information can the instructor and the assignment author gain from all the scores and text for the same question of all the related reviews?

A particular question in a review may correspond to one aspect of the assignment's evaluation. Since a single piece of feedback can be narrow and subjective, collecting and analyzing the feedback from all the reviews helps the instructor and the assignment author better understand how the work is evaluated. We will also weight the feedback by computing the average score for a particular question: feedback whose score is closer to the average is considered more impartial, so its text should be more valuable.
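The averaging-and-weighting idea described above can be sketched in a few lines of plain Ruby. The reviewer names and scores are made up for illustration:

```ruby
# Hypothetical scores that four reviewers gave the same rubric question.
scores = { "reviewer_a" => 5, "reviewer_b" => 3, "reviewer_c" => 4, "reviewer_d" => 1 }

average = scores.values.sum.to_f / scores.size

# Rank reviewers by how far their score lies from the average; the
# closest scores are treated as the most impartial feedback, so their
# comment text would be surfaced first.
ranked = scores.sort_by { |_, s| (s - average).abs }

puts "average: #{average.round(2)}"
ranked.each { |name, s| puts "#{name}: #{s} (deviation #{(s - average).abs.round(2)})" }
```

Distance from the mean is a deliberately simple impartiality proxy; a real implementation might instead use standard deviations or discard extreme outliers before averaging.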

  • c. What information can the instructor and the assignment author gain from the changes of scores regarding the same review question in review, re-review, and meta review?

With each review performed, the assignment author receives feedback and makes improvements according to the scores and comments. These improvements appear in the resubmission, so the re-review should reflect how they are evaluated. By analyzing the trend in scores across the different review stages (review, re-review, and meta review), we can form a general idea of how well the author understood the earlier reviews and drew useful suggestions from them.
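The trend analysis described above reduces to tracking one question's score across the three stages. The following Ruby sketch uses invented scores and a simple three-way classification:

```ruby
# Hypothetical score history for one rubric question across the stages.
history = { review: 2, re_review: 4, meta_review: 4 }

# Score change between each consecutive pair of stages.
deltas = history.values.each_cons(2).map { |a, b| b - a }

trend =
  if deltas.all? { |d| d >= 0 } && deltas.any?(&:positive?)
    "improving"
  elsif deltas.all? { |d| d <= 0 } && deltas.any?(&:negative?)
    "declining"
  else
    "mixed or flat"
  end

puts "score changes: #{deltas.inspect} (#{trend})"
```

An "improving" trend would suggest the author acted on the earlier feedback; a flat or declining one flags work the instructor may want to examine.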

  • d. Who has the access to the above information?

The above information should be accessible only to the instructor and the assignment author. The interface should be as flexible as possible while guarding against security vulnerabilities such as SQL injection attacks.
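The access rule (instructor, TAs, and the assignment's own author, per the requirements below) can be sketched as a single predicate. The role symbols and hash-based records here are illustrative assumptions, not Expertiza's actual authorization code:

```ruby
# Hypothetical access check: instructors and TAs may always view the
# analytics; a student may view them only for their own assignment.
def can_view_analytics?(user, assignment)
  return true if %i[instructor ta].include?(user[:role])

  user[:role] == :student && assignment[:author_id] == user[:id]
end

assignment = { author_id: 7 }
puts can_view_analytics?({ id: 7, role: :student }, assignment)    # the author
puts can_view_analytics?({ id: 9, role: :student }, assignment)    # another student
puts can_view_analytics?({ id: 1, role: :instructor }, assignment)
```

On the SQL injection point: in Rails, search criteria from the advanced search interface should be passed as bound parameters (e.g. `where("score >= ?", min)`) rather than interpolated into query strings.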

Requirements

  • Instructors or teaching assistants must be able to see a comparison among the different aspects within a review.
  • Instructors or teaching assistants must be able to see a comparison among the feedback on the same question across all reviews of an assignment.
  • Instructors or teaching assistants must be able to see the trend of score changes over time, from review to re-review and finally to meta review.
  • All the above information should be restricted to the instructor, the TAs, and the respective assignment author.