CSC/ECE 517 Fall 2019 - E1975. Generalize Review Versioning


About Expertiza

Expertiza is an open-source project built on the Ruby on Rails framework; the code is available on GitHub. Expertiza allows instructors to create new assignments and edit new or existing ones. Instructors can also create a list of topics that students can sign up for, and specify deadlines for the completion of various tasks. Students can form teams in Expertiza to work on various projects and assignments, as well as peer-review other students' submissions. Expertiza supports submissions across various document types, including URLs and wiki pages.

Introduction

Expertiza is a peer-review-based learning platform in which students' work products are evaluated and scored through peer review by other students. The review process occurs in stages. After the peer group submits initial feedback on a work product, students are given time to incorporate the comments and improve their work. This second version of the work product is reviewed again and scored. The process may be repeated, and the average score from the last review stage is taken as the final score for that work product. This incremental development of work products and progressive learning is the fundamental concept underlying the Expertiza system.

Purpose

The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove the scores of reviews that are not redone. The system also triggers an email to the reviewer when the submission changes. Scores are calculated on the fly based on the rubrics.

Scope

This project will enhance the existing scoring and peer-review functionality and its underlying implementation, along with testing of the existing implementation. Testing of score calculation and of the display of weighted and non-weighted scores based on combinations of rubrics is part of this project. The project also adds a feature that lets the author notify reviewers of a change in the submission.

Background

The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove the scores of reviews that are not redone. The scores are calculated based on the rubrics defined in the system. There is already an existing implementation for computing the score, but it does not work for multipart rubrics.

Documentation

The documents that will be generated for this project are the design document, the actual files that we edit or create, and a ReadMe that explains what we did and how to use the modified product.

Standards

The standards that will be adhered to in this system are those given for the Rails framework, and the code should follow Rails and object-oriented design principles. Any newly added code will adhere to the guidelines below:

https://github.com/rubocop-hq/rails-style-guide
http://wiki.expertiza.ncsu.edu/index.php/Object-Oriented_Design_and_Programming

System Requirements

Problem

The system is currently designed to give a reviewer a new form for each round of review (rather than requiring the reviewer to edit an existing file) and to automatically remove the scores of reviews that are not redone. This functionality needs to be tested: it works in some cases, but we suspect it is not entirely correct.

Profile Email Preference: Users in Expertiza are free to control what kinds of email they receive. Since reviewers will receive an email when a submission changes, this can be controlled by selecting an option ("When someone else submits work I am assigned to review") on the user's profile page. Because the checkboxes currently sit far away in the right corner of the screen, their position needs to be changed to make the page more user friendly.

Scores for students in Expertiza can be based on a combination of rubrics: review rubrics for each round of review, author feedback, and teammate review. In CSC/ECE 517, we don't really use the student-assigned scores to grade projects, but the peer-review score shown to the student is based entirely on the second round of review.

Here is how scores are calculated:

1. If only zero-weighted scores are available (here, in the first round of review), the average of all zero-weighted scores should be shown in a gray font. An information button should explain why the scores are shown in gray ("The reviews submitted so far do not count toward the author's final grade.").

2. If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is:

   80%⨉90% + 100%⨉10% = 82%
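The weighted-average rule above can be sketched in plain Ruby. The method name and the input shape are illustrative only, not Expertiza's actual API (the real calculation lives in models/on_the_fly_calc.rb):

```ruby
# Illustrative sketch: weighted average of rubric scores.
# Each entry pairs a rubric category's average score with its weight.
def weighted_score(components)
  components.sum { |score, weight| score * weight }
end

# Two Round 2 reviews of 80% (average 0.80) weighted 90%,
# plus one author-feedback score of 100% weighted 10%:
round2_avg = (0.80 + 0.80) / 2
weighted_score([[round2_avg, 0.9], [1.0, 0.1]]).round(2) # => 0.82
```

Note that the weights are assumed to sum to 1; if they did not, the sum would need to be divided by the total weight.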

3. If a review is submitted, and then the author(s) update the submission before the end of the current round, the review is reopened and the reviewer can go in and update the submitted review. However, the previous review score given in the current round will count until the reviewer updates the submitted review. This functionality does not work for multipart rubrics.

The system also does not generate emails to reviewers when the author makes a new submission without changing submission parameters such as the submission URL. For example, a new submission may be made via a fresh deployment, while the deployment URL stays the same. In this case as well, without changing the current flow, the author should have an option to notify all the reviewers to revisit the submission.

Interface Requirement

The overall user interface will remain the same as in the current system. Since this is a modification to the current submission page, one element will be added there: a new button to notify reviewers when a submission does not change submission parameters such as the submission URL, but the deployment has been updated at the same URL. Otherwise the interface remains the same; on the score page, only the functionality will be tested for multipart rubrics.

Quality Requirements

The interface changes are minimal and easy to adopt. The new button will be clearly visible and will carry clear text describing its function. The proposed changes should not affect current system functionality in any way; the solution is intended only to enrich the project's features.

Portability Requirements

The changes in this enhancement to Expertiza should not interfere with the platform's portability in any way.

Performance Requirements

Adding tests for multipart-rubric functionality should not cause any overhead in system performance. Sending emails through the "Notify All Reviewers" button should affect performance minimally: the only overhead is a database query to fetch all the reviewers, which a well-designed database should handle smoothly.

Assumptions

The existing functionality is assumed to work correctly when the assignment rubric is not multipart.


Design

Zero-weighted scores

If only zero-weighted scores are available in the first round of review, the average of all zero-weighted scores will be shown in a gray font. An information button will explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade”)

Inside the compute_total_score method in the on_the_fly_calc.rb module, we'll check whether a weighted score exists; if not, we'll calculate the non-weighted score. We'll also have to add a method to the questionnaire model to get the non-weighted score. A few UI changes are required to display the non-weighted score in gray with an information icon, and the weighted score in black.

Possible files to be modified:

models/on_the_fly_calc.rb
models/questionnaire.rb
views/grades/view_team.html.erb
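The fallback logic described above can be sketched as follows. This is a minimal illustration, not Expertiza's actual code: the Score struct and the return shape (score plus a flag telling the view to render in gray) are assumptions for this sketch.

```ruby
# Illustrative stand-in for a review score with its questionnaire weight.
Score = Struct.new(:value, :weight)

# Returns [score, zero_weighted_only?] so the view can decide whether to
# render the number in gray with the information icon.
def total_score(scores)
  weighted = scores.reject { |s| s.weight.zero? }
  if weighted.empty?
    # Only zero-weighted reviews so far: show their plain average, in gray.
    [scores.sum(&:value) / scores.size.to_f, true]
  else
    # Weighted average over the non-zero-weighted scores only.
    total_weight = weighted.sum(&:weight).to_f
    [weighted.sum { |s| s.value * s.weight } / total_weight, false]
  end
end
```

For example, two zero-weighted first-round reviews of 80 and 90 yield [85.0, true], while adding a Round 2 review switches the flag to false and the score to the weighted average.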

Non-zero weighted scores

If any non-zero weighted scores are available, then the score shown for the assignment should be the weighted average of the scores. For example, if Round 2 reviews are weighted 90% and author feedback is weighted 10%, and two Round 2 reviews each gave the work a score of 80%, and the only author-feedback score was 100%, then the overall score is 80%⨉90% + 100%⨉10% = 82%.

We'll first write the test cases by adding them to spec/models/on_the_fly_calc_spec.rb.

We'll also test this manually by computing the score; if there's a bug, the possible files to be modified are:

models/on_the_fly_calc.rb
models/questionnaire.rb

Review Update

If a review is submitted, and then the author(s) update the submission before the end of the current round, the review is reopened and the reviewer can go in and update the submitted review; the previous review score given in the current round counts until the reviewer updates it. This functionality already exists in the system. However, some submissions involve no change in the submitted URLs, so there is currently no way to let the reviewers know that the author has made changes to the project. We'll add a Notify button for the author in the UI that triggers an email to the reviewers, reopening the reviews as above.

Sending an email when the submission parameters don't change:

A new button will be added to the submission page. The button will enable the author to notify all the reviewers to revisit the submission. We could also include a custom message in the mail body so that the reviewers know the specific reason why they received this particular mail.
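As a rough sketch of what the button's handler might assemble, the following plain-Ruby method builds one notification per reviewer. All names here (notify_all_reviewers, the payload hash) are hypothetical; the real implementation would be an ActionMailer invoked from the submission controller.

```ruby
# Illustrative sketch: build email payloads for every reviewer of a
# submission whose deployment changed but whose URL did not.
def notify_all_reviewers(reviewer_emails, assignment_name, custom_message)
  reviewer_emails.map do |email|
    {
      to:      email,
      subject: "Submission updated for #{assignment_name}",
      # The author's custom message explains why the mail was sent,
      # e.g. "New deployment at the same URL -- please re-review."
      body:    custom_message
    }
  end
end
```

In the real system, the list of reviewer emails would come from the single database query mentioned under Performance Requirements.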

Profile Email Preference

The checkboxes will be moved toward the left. Since the options use a table layout, we'll increase the width of the column that contains the checkboxes. We'll use a percentage rather than a fixed value for the width, so the column size adjusts to the screen size. The CSS can be added to the layout.scss file using the appropriate class for the column.

Test Plan

Rspec

New test cases will be added to test the functionality that is not working as expected, as well as the additional features and changes in the system. Test cases will be added to these files:

spec/models/on_the_fly_calc_spec.rb
spec/models/questionnaire_spec.rb


Email Preferences UI Changes

The email preferences checkboxes currently stick to the rightmost border of the page. Per the requirements, the checkboxes will be moved closer to the text fields to make the page more user friendly. The changes can be tested by logging into the system and visiting the user profile page.

Manual Testing

The instructor will create an assignment and assign the zero and non-zero weights to the rubrics.

· Log in as an author and go to the scores page to view the feedback.

Test – Zero weighted scores shown in gray.

· Zero weighted scores should be displayed in gray to the user.

Test – Information button.

· The information button should explain why the scores are shown in gray (“The reviews submitted so far do not count toward the author’s final grade.”)

Test – Display non-zero weighted scores.

· The scores shown for the assignment should be the weighted average of the scores, if any non-zero weighted scores are available.

· The zero-weighted scores have to be replaced with non-zero weighted scores.

Test – Update review score.

· If the author updates the submission before the end of the current round, the reviewer should be able to go in and update the submitted review.

· If the review is not updated, the previous review score has to be retained.

Test – Multipart rubric.

· The functionality should work for multipart rubrics.

· The scores calculated have to be accurate.